text: string, lengths 3 to 744k
summary: string, lengths 24 to 154k
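A minimal sketch of handling rows shaped like the ones below, assuming they come from a JSON-lines file of {"text": ..., "summary": ...} pairs; the file name pairs.jsonl is hypothetical. It reports the character-length range of each column, which should roughly match the ranges quoted above (3 to 744k for text, 24 to 154k for summary).

import json

def length_range(path, field):
    # Collect the character length of one field across all rows.
    lengths = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            row = json.loads(line)
            lengths.append(len(row[field]))
    return min(lengths), max(lengths)

if __name__ == "__main__":
    for field in ("text", "summary"):
        lo, hi = length_range("pairs.jsonl", field)  # hypothetical file name
        print(field, lo, "to", hi, "characters")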
Winter returned with a vengeance Monday, dumping heavy lake-effect snow across portions of the Buffalo Niagara region, with a particular fury in the Southtowns and the Southern Tier. Frigid temperatures also are forecast, especially for today and Wednesday, when area highs probably won’t reach 15 degrees. “We’re finally back to the dead of winter, and it looks like it’s going to last for a while,” said National Weather Service meteorologist Jon Hitchcock. “This probably will be the coldest we’ve seen in a couple of years.” Snowfall totals were still mounting late Monday, especially throughout southern Erie, Chautauqua and Cattaraugus counties, as law enforcement in those areas closed down some roadways because of hazardous travel. They also warned motorists to travel the Thruway “at their own risk.” National Weather Service spotters reported that a foot of snow had fallen in Ripley as of Monday night. In Perrysburg in nearby northern Cattaraugus County, 10 inches of snow was already on the ground at about 7:30 p.m. – with more piling up. One of the highest readings reported Monday night in Erie County was in Colden, which had 8 inches. Meanwhile, only 1.5 inches had fallen at Buffalo Niagara International Airport in Cheektowaga. “It’s kind of like winter finally decided to show up,” said Jeff Wood, meteorologist with the National Weather Service in Buffalo, who explained that a large “upper-level” trough was ushering extremely cold air into the area. When that cold air crosses the warmer, open waters of the Great Lakes, that can mean big-time snow, measured by the yardstick. “It’s a very cold air mass setting up over the lakes, and the lakes are still open,” Wood said. “It’s certainly not out of the ordinary for January.” “The highest accumulation areas are going to be mainly east of Lake Ontario and, of course, in our area, from the Southtowns on south,” he added. Chautauqua County authorities reported 36 weather-related vehicular accidents Monday. Heavy snow closed the westbound lanes of Interstate 86 between Sherman and the Pennsylvania state line because of “multiple accidents” related to the weather. Tow truck operators reported to deputies that there were “no guarantees” they could reach motorists stranded because of the heavy snowfall. Earlier Monday, the westbound Thruway was closed for about an hour between exit 57A, Eden-Angola, and the Pennsylvania line due to a series of accidents, including a multi-vehicle pileup. The Thruway reopened shortly after 6 p.m., with the left lane still blocked to traffic, according to the authority. Crashes also were reported earlier between exits 59 in Dunkirk and 60 in Westfield, and between Eden and Silver Creek, as well as on the eastbound Thruway near the Pennsylvania border. Other weather-related mishaps on roadways included: • A single-vehicle crash on West Perimeter Road in Steamburg at 1:15 p.m. The vehicle spun off the icy road and struck a building. The unidentified driver was not injured, Cattaraugus County sheriff’s deputies reported. • A Niagara County crash on Lockport-Olcott Road at about 1:40 p.m. in which at least one person had to be extricated from a vehicle. The crash at Dale Road involved two vehicles and a tractor-trailer, sheriff’s deputies said. One person was taken to the Eastern Niagara Hospital in Newfane for treatment of injuries that were not life-threatening. The storm started Monday morning by hugging the Lake Erie shoreline in southern Erie and Chautauqua counties.
The Weather Service calls for snow totals of one to two feet in southern Erie and Chautauqua counties through this evening. Cattaraugus, Wyoming and Allegany counties also are expected to be hit. This lake-effect snow band, though, is expected to leave northern Erie, Niagara, Orleans and Genesee counties relatively unscathed, with those areas expected to receive only a few inches of snow. The record book shows Feb. 10, 2011, was the last time the region had a high temperature of 15 degrees or colder. Hitting that mark again “looks like a pretty good bet [today] and Wednesday,” Hitchcock said. The frigid forecast prompted the Buffalo City Mission to issue a Code Blue alert, which is put in effect on nights when temperatures are expected to drop below 10 degrees – or below zero with wind chill. During those times, the mission opens its warming center at 150 E. North St., offering food, shelter and clothing for those in need. Fortunately, the forecast also calls for clouds, which will insulate the region a bit at night. Lows are expected into the single digits, but not below zero. Still, strong winds could send wind chills down to as low as 15 degrees below zero in some areas over the next couple of days. News Staff Reporter Matt Gryta contributed to this report. email: tpignataro@buffnews.com and gwarner@buffnews.com ||||| Coldest air in nearly two years hits the Midwest U.S. A blast of Arctic air more intense than anything experienced during the winter of 2011-2012 has descended over the Midwest U.S., bringing the coldest temperatures in nearly two years. The low hit -2°F Monday morning in Des Moines, Iowa, marking the first day since February 10, 2011, that Des Moines had dropped below zero. The 710 consecutive days the city had gone without reaching 0°F was the longest such streak on record (previous record: 368 straight days, beginning January 23, 1954). In Minneapolis, the mercury dropped to -10°F Monday morning, the coldest day since February 10, 2011. With the high temperature not expected to get above zero Monday, the city will likely snap its record-long streak of just over four years without a high temperature below 0°F. The last time the high temperature at the Minneapolis airport was below zero was on January 15, 2009, when the thermometer climbed to only -6°F. The previous longest such streak since record keeping began in 1872 was a 3.1 year streak that ended in January 2004. Strong winds accompanying today's cold blast have dropped the wind chill to a dangerously cold -40 to -50°F across much of Minnesota and North Dakota. The wind chill bottomed out at -51°F at Langdon, North Dakota, at 4:35 CST Monday morning, thanks to a temperature of -22° combined with a wind of 17 mph. The wind chill hit -46°F at nearby Devils Lake and -51° at Hamden. The lowest wind chill in Minnesota was at Le Center: -43°F. Brr! Jeff Masters
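The reported wind chills combine air temperature and wind speed. As a rough check only, here is a minimal sketch of the standard NWS wind chill formula (the 2001 revision, valid for temperatures at or below 50°F and winds above 3 mph); for the Langdon reading above (-22°F, 17 mph) it gives roughly -49°F, close to the reported -51°F, with the remaining gap presumably down to rounding and the exact wind value the station used.

def wind_chill_f(temp_f, wind_mph):
    # NWS (2001) wind chill formula; valid for temp_f <= 50 and wind_mph > 3.
    v = wind_mph ** 0.16
    return 35.74 + 0.6215 * temp_f - 35.75 * v + 0.4275 * temp_f * v

# Langdon, N.D.: -22 F air temperature with a 17 mph wind.
print(round(wind_chill_f(-22, 17)))  # about -49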
||||| Last Hurrah for Bitter Cold. The Arctic air first plunged into the Upper Midwest on Saturday, Jan. 19, reaching northern parts of North Dakota and Minnesota where temperatures tumbled throughout the day. By Sunday, Jan. 20, parts of this region stayed below zero all day, including Grand Forks, N.D., and International Falls, Minn. The bitter cold spread south and east on Monday, Jan. 21. Minneapolis-St. Paul logged a high of -2 that day, the first subzero high there since Jan. 2009; the streak of four years and six days without a subzero high was the longest on record there, and nearly a year longer than the previous record streak. Des Moines, Iowa, also ended a record streak that day, reaching a low of 2 below zero and ending a 710-day streak without a subzero low, the longest such streak for Iowa's capital. On Tuesday, Jan. 22, both Chicago and Rockford, Ill., fell below zero for the first time in 711 days. For Rockford, this was the longest at-or-above-zero spell on record, and it was the fourth-longest in the Windy City. Other lowlights Tuesday included lows of -30 in International Falls; -33 in Ely and Bigfork, Minn.; and -37 in Big Black River, Maine. The big chill hit the East Coast on Tuesday as well; New York, Philadelphia, and Washington, D.C., all recorded their highs for the day between midnight and 1 a.m. On the morning of Wednesday, Jan. 23, Reagan National Airport in the nation's capital fell to 15 degrees, its coldest reading since March 3, 2009. Philadelphia (12 degrees) and New York (11 degrees) noted their lowest temperatures since Jan. 24, 2011. Other frigid readings Wednesday morning included -36 at Estcourt Station, Maine, and -24 at Saranac Lake, N.Y. While winds were light over the lower elevations, the mountains of New England experienced high winds and brutal wind chills. The feels-like temperature bottomed out at 86 below zero on top of Mount Washington in New Hampshire, and 63 below zero on Mount Mansfield in Vermont. ||||| Brrr! 'Dead Of Winter' Sets In; Coldest Air In Nearly Two Years It felt like -51 degrees Fahrenheit in Langdon, N.D., on Monday and brutal wind chills like that are going to continue across northern states as winter really sets in. Even in the "warmer" places, it's not going to feel like it's much above zero for the next few days.
And "lake effect" snows are expected to pile up around the Great Lakes. Weather Underground's Jeff Masters says this big freeze is bringing some of the coldest temperatures the nation has seen in nearly two years. And the low, low readings will continue through the week from the upper Plains to New England and down into the mid-Atlantic. The Weather Channel puts on its Mom hat to tell folks that "scarves, stocking caps, long underwear — all of it will come in handy for millions of you this week." National Weather Service meteorologist Jon Hitchcock tells The Buffalo News that "we're finally back to the dead of winter, and it looks like it's going to last for a while. This probably will be the coldest we've seen in a couple of years." At 6:15 a.m. ET today in Cleveland it was 11 degrees and with the wind it felt like -14, The Plain Dealer reports. Schools across Northeast Ohio (and in many other places in the grip of this cold snap) are closed today because of the weather. Feel free to answer that classic question — "How cold was it?" — in the comments thread. By the way, it's pretty cold in Europe this week too. Update at 11:55 a.m. ET. Most Liked Quip So Far: Commenter "Fred Flintstone" offered an old classic — "It was so cold that I saw a lawyer with his hands in his own pockets!" — that's so far been "voted up" 45 times. Can anyone beat that?
– If it finally feels like winter has arrived, well, that's because it has. We are, in fact, in the "dead of winter" now, NPR reports. But at least you're not in Langdon, North Dakota, where it felt like minus-51 degrees Fahrenheit yesterday thanks to wind chill. The rest of the northern states are seeing similar wind chills, and even in places that are usually fairly warm, the next few days will feel pretty close to zero. According to Weather Underground, an Arctic air blast yesterday brought some of the lowest temperatures seen in almost two years to the Midwest. And whether you live in the upper Plains, New England, or the mid-Atlantic, you can expect to see the deep freeze continue for the rest of the week, NPR notes. Pretty much everyone is in agreement: The Weather Channel says millions of us will need our warm clothes this week to ward off the season's coldest air, and a National Weather Service meteorologist tells the Buffalo News that conditions will "last for a while."
SEOUL When South Korean President Lee Myung-bak left on a state visit for Japan last week, North Korean leader Kim Jong-il had been dead for about four hours, indicating that neither Seoul nor Tokyo -- or Washington -- had any inkling of his death. North Korean state media announced Kim's death two days later, on Monday, apparently catching governments around the world by surprise and plunging the region into uncertainty over the stability of the unpredictable state that is trying to build a nuclear arsenal. Lee held talks in Tokyo with Prime Minister Yoshihiko Noda and returned home on Sunday afternoon, apparently still unaware of the cover-up by the North, with which South Korea is still technically at war. If Washington had known, it appears likely it would have tipped off South Korea and Japan, its closest allies in Asia. "It seems everyone learned about Kim Jong-il's death after (the announcement)," said Kim Jin-pyo, head of the intelligence committee for South Korea's parliament after discussions with officials from the National Intelligence Service. "The U.S., Japan and Russia knew after North Korea's announcement," he told reporters. South Korea put its troops on emergency alert after the announcement of Kim's death; Japan said it had to be prepared for "unexpected" developments. South Korea's main spy agency and the defense ministry were completely in the dark until they saw the announcement on television, Yonhap news agency said. Other countries were similarly caught off-guard. "We knew he had an increased risk of a coronary event for some time now, but clearly no one can know exactly when something like this will happen," a U.S. official said on condition of anonymity. "No one was on death watch," said Ralph Cossa, president of the U.S. think tank Pacific Forum CSIS. "If anyone thought they had a good handle on North Korea, the sources were not that good." Western spy agencies use satellites, electronic eavesdropping and intelligence from Asian allies with greater access to try and deduce what is going on inside the hermetic country. There is some human intelligence, but obviously it was not up to the task. "There is all sorts of human intelligence at all sorts of levels in North Korea," said Cossa. "But certainly, the inner circle has not been breached. "North Korea is very good at keeping secrets, it probably had procedures in place which it was implementing." When founder Kim Il-sung died in 1994, the state kept it a secret for more than a day. Some reports say North Korea's solitary ally, China, may have been tipped off about Kim's death, and it did not share that information. "One would assume China would be the first to be notified," Cossa said. "On the other hand, with North Korea, there is no such thing as a safe assumption." (Editing by Alex Richardson) ||||| In many countries, that would involve intercepting phone calls between government officials or peering down from spy satellites. And indeed, American spy planes and satellites scan the country. Highly sensitive antennas along the border between South and North Korea pick up electronic signals. South Korean intelligence officials interview thousands of North Koreans who defect to the South each year. And yet remarkably little is known about the inner workings of the North Korean government. Pyongyang, officials said, keeps sensitive information limited to a small circle of officials, who do not talk. “This is a society that thrives on its opaqueness,” said Christopher R. 
Hill, a former special envoy who negotiated with the North over its nuclear program. “It is very complex. To understand the leadership structure requires going way back into Korean culture to understand Confucian principles.” On Monday, the Obama administration held urgent consultations with allies but said little publicly about Mr. Kim’s death. Senior officials acknowledged they were largely bystanders, watching the drama unfold in the North and hoping that it does not lead to acts of aggression against South Korea. None of the situations envisioned by American officials for North Korea are comforting. Some current and former officials assume that Kim Jong-un is too young and untested to step confidently into his father’s shoes. Some speculate that the younger Mr. Kim might serve in a kind of regency, in which the real power would be wielded by military officials like Jang Song-taek, Kim Jong-il’s brother-in-law and confidant, who is 65. Such an arrangement would do little to relieve the suffering of the North Korean people or defuse the tension over its nuclear ambitions. But it would be preferable to an open struggle for power in the country. “A bad scenario is that they go through a smooth transition, and the people keep starving and they continue to develop nuclear weapons,” said Jeffrey A. Bader, a former Asia adviser to President Obama. “The unstable transition, in which no one is in charge, and in which control of their nuclear program becomes even more opaque, is even worse.” As failures go, the Central Intelligence Agency’s inability to pick up hints of Mr. Kim’s death was comparatively minor. But as one former agency official, speaking on condition of anonymity about classified matters, pointed out: “What’s worst about our intel is our failure to penetrate deep into the existing leadership. We get defectors, but their information is often old. We get midlevel people, but they often don’t know what’s happening in the inner circle.” The worst intelligence failure, by far, came in the middle of the Iraq war. North Korea was building a nuclear reactor in Syria, based on the design of its own reactor at Yongbyon. North Korean officials traveled regularly to the site. Yet the United States was ignorant about it until Meir Dagan, then the head of the Mossad, Israel’s intelligence service, visited President George W. Bush’s national security adviser and dropped photographs of the reactor on his coffee table. It was destroyed by Israel in an airstrike in 2007 after the United States turned down Israeli requests to carry out the strike. While the C.I.A. long suspected that North Korea was working on a second pathway to a bomb — uranium enrichment — it never found the facilities. Then, last year, a Stanford University scientist was given a tour of a plant, in the middle of the Yongbyon complex, which American satellites monitor constantly.
It is not clear why satellite surveillance failed to detect construction on a large scale at the complex. The failure to pick up signs of turmoil is especially disconcerting for people in South Korea. The South’s capital, Seoul, is only 35 miles from the North Korean border, and the military is on constant alert for a surprise attack. Yet in the 51 hours from the apparent time of Mr. Kim’s death until the official announcement of it, South Korean officials appeared to detect nothing unusual. During that time, President Lee Myung-bak traveled to Tokyo, met with the Japanese prime minister, Yoshihiko Noda, returned home and was honored at a party for his 70th birthday. At 10 a.m. local time on Monday, even as North Korean media reported that there would be a “special announcement” at noon, South Korean officials shrugged when asked whether something was afoot. The last time Pyongyang gave advance warning of a special announcement was in 1994, when they reported the death of Mr. Kim’s father, Kim Il-sung, who also died of heart failure. (South Korea was caught completely off guard by the elder Mr. Kim’s death, which was not disclosed for 22 hours.) “ ‘Oh, my God!’ was the first word that came to my mind when I saw the North Korean anchorwoman’s black dress and mournful look,” said a government official who monitored the North Korean announcement. “This shows a big loophole in our intelligence-gathering network on North Korea,” Kwon Seon-taek, an opposition South Korean lawmaker, told reporters. Kwon Young-se, a ruling party legislator and head of the intelligence committee at the National Assembly, said the National Intelligence Service, the main government spy agency, appeared to have been caught off guard by the North Korean announcement. “We will hold them responsible,” he said.
– Kim Jong Il reportedly died on a train at 8:30am on Saturday. Guess when American and South Korean officials learned about it? Some 51 hours later, from North Korea's own media reports. Though we have spy planes and satellites trained on the country, we intercepted no phone calls and observed no hubbub around his train, making the two-day secret yet another North Korea-related intelligence failure, reports the New York Times. "It seems everyone learned about Kim Jong Il's death after" the press reports, a South Korean lawmaker who leads parliament's intelligence committee told Reuters. "The US, Japan, and Russia knew after North Korea's announcement." The list of similar failures with the North is long—notably, a uranium enrichment facility existed there for a year and a half before being discovered, and then only because the North showed it off to an American scientist. "What’s worst about our intel is our failure to penetrate deep into the existing leadership," says one former CIA official. "We get defectors, but their information is often old. We get midlevel people, but they often don’t know what’s happening in the inner circle."
SECTION 1. SHORT TITLE. This Act may be cited as the ``Prescription Drug Competition Act of 2001''. SEC. 2. FINDINGS. Congress finds that-- (1) prescription drug costs are increasing at an alarming rate and are a major concern of senior citizens and American families; (2) there is a potential for drug companies owning patents on brand-name drugs to enter into private financial deals with generic drug companies in a manner that could tend to restrain trade and greatly reduce competition and increase prescription drug costs for American citizens; and (3) enhancing competition between generic drug manufacturers and brand name manufacturers can significantly reduce prescription drug costs to American families. SEC. 3. PURPOSE. The purposes of this Act are-- (1) to provide timely notice to the Food and Drug Administration and the Federal Trade Commission regarding agreements between companies owning patents on branded drugs and companies who could manufacture generic or bioequivalent versions of such branded drugs; and (2) by providing timely notice, to-- (A) ensure the prompt availability of safe and effective generic drugs; (B) enhance the effectiveness and efficiency of the enforcement of the antitrust laws of the United States; and (C) deter pharmaceutical companies from engaging in anticompetitive actions or actions that tend to unfairly restrain trade. SEC. 4. DEFINITIONS. In this Act: (1) Agreement.--The term ``agreement'' means an agreement under section 1 of the Sherman Act (15 U.S.C. 1) or section 5 of the Federal Trade Commission Act (15 U.S.C. 45). (2) Antitrust laws.--The term ``antitrust laws'' has the same meaning as in section 1 of the Clayton Act (15 U.S.C. 12), except that such term includes section 5 of the Federal Trade Commission Act (15 U.S.C. 45) to the extent that such section applies to unfair methods of competition. (3) ANDA.--The term ``ANDA'' means an Abbreviated New Drug Application, as defined under section 505(j) of the Federal Food, Drug and Cosmetic Act. (4) Brand name drug company.--The term ``brand name drug company'' means a person engaged in the manufacture or marketing of a drug approved under section 505(b) of the Federal Food, Drug and Cosmetic Act. (5) Commission.--The term ``Commission'' means the Federal Trade Commission. (6) FDA.--The term ``FDA'' means the United States Food and Drug Administration. (7) Generic drug.--The term ``generic drug'' means a product that is the subject of an ANDA. (8) Generic drug applicant.--The term ``generic drug applicant'' means a person who has filed or received approval for an ANDA under section 505(j) of the Federal Food, Drug and Cosmetic Act. (9) Secretary.--The term ``Secretary'' means the Secretary of Health and Human Services. SEC. 5. NOTIFICATION OF AGREEMENTS AFFECTING THE SALE OR MARKETING OF GENERIC DRUGS.
A brand name drug company and a generic drug applicant that enter into an agreement regarding the sale or manufacture of a generic drug that the Secretary has determined is the therapeutic equivalent of a brand name drug that is manufactured or marketed by that brand name drug company, or for which the generic drug applicant seeks such a determination of therapeutic equivalence, and which agreement could have the effect of limiting the research, development, manufacture, marketing, or selling of a generic drug that has been or could be approved for sale by the FDA pursuant to an ANDA, shall file with the Commission and the Secretary the text of the agreement, an explanation of the purpose and scope of the agreement, and an explanation of whether the agreement could delay, restrain, limit, or in any way interfere with the production, manufacture, or sale of the generic version of the drug in question. SEC. 6. FILING DEADLINES. Any notice, agreement, or other material required to be filed under section 5 shall be filed with the Commission and the Secretary not later than 10 business days after the date the agreement is executed. SEC. 7. ENFORCEMENT. (a) Civil Fine.--Any person, or any officer, director, or partner thereof, who fails to comply with any provision of this Act shall be liable for a civil penalty of not more than $20,000 for each day during which such person is in violation of this Act. Such penalty may be recovered in a civil action brought by the United States, or brought by the Commission in accordance with the procedures established in section 16(a)(1) of the Federal Trade Commission Act (15 U.S.C. 56(a)). (b) Compliance and Equitable Relief.--If any person, or any officer, director, partner, agent, or employee thereof, fails to comply with the notification requirement under section 5 of this Act, the United States district court may order compliance, and may grant such other equitable relief as the court in its discretion determines necessary or appropriate, upon application of the Commission or the Assistant Attorney General. SEC. 8. RULEMAKING. The Commission, in consultation with the Secretary, and with the concurrence of the Assistant Attorney General and by rule in accordance with section 553 of title 5, United States Code, consistent with the purposes of this Act-- (1) may require that the notice described in section 5 of this Act be in such form and contain such documentary material and information relevant to the agreement as is necessary and appropriate to enable the Commission and the Assistant Attorney General to determine whether such agreement may violate the antitrust laws; (2) may define the terms used in this Act; (3) may exempt classes of persons or agreements from the requirements of this Act; and (4) may prescribe such other rules as may be necessary and appropriate to carry out the purposes of this Act. SEC. 9. EFFECTIVE DATES. This Act shall take effect 90 days after the date of enactment of this Act.
Prescription Drug Competition Act of 2001 - Requires brand name drug companies and generic drug applicants to file with the Federal Trade Commission and the Secretary of Health and Human Services specified information regarding any agreement regarding the sale or manufacture of a generic drug which the Secretary has determined is the therapeutic equivalent of the brand name drug or for which the applicant seeks a determination of therapeutic equivalence, if such agreement could have the effect of limiting the research, development, manufacture, marketing, or selling of a generic drug product.
extended structures resembling young dwarf galaxies are found in tidal debris from galaxy interactions ( mirabel et al . 1992 ; duc & mirabel 1994 ; hunsberger , charlton , & zaritsky 1996 ) . star clusters form in abundance in the central regions of interacting galaxy pairs ( schweizer et al . 1996 ; miller et al . 1997 ; whitmore et al . 1999 ; zepf et al . 1999 ) . what physical conditions drive the formation of stars and determine the nature of structure that forms in different environments ? can star clusters also form in tidal debris , how widespread is star formation in the debris , and how similar is it between different tidal environments ? the image in figure 1 is a 1000 second exposure in the f555w filter obtained on 1999 march 24 . a f814w image was also obtained . point sources with @xmath0 are indicated , those in the tail with white circles and those out of the tail with white squares . although this field is crowded with foreground stars due to its low galactic latitude , it is apparent that the numerous point sources are preferentially in the regions containing tidal debris . the colors , magnitudes , and sizes of these sources allow us to distinguish foreground and background contamination from star clusters in the debris . figure 2 illustrates that the brighter sources are relatively young star clusters ( i.e. hundreds of millions of years old ) , but some of the fainter ones could be individual stars . the reddest sources are most certainly foreground stars with @xmath1 apparent mostly from wf4 . there is an enhancement of point sources with @xmath2 and @xmath3 . the relatively large spread of the @xmath4 colors indicates either a range of ages , or non uniform extinction by dust . the central region of ngc 3256 also has a large number of young clusters that contribute 20% of the total b band luminosity of the galaxy ( zepf et al . 1999 ) . in figure 3 , for the western tail , the @xmath4 color is plotted vs. the concentration index , defined as the difference between @xmath5 magnitudes measured in a 0.5 and in a 3 aperture . the solid circles represent sources in the tail regions while the open circles represent sources in regions outside the tail . clearly , the sources in the tail are on average larger , indicating that they are not point sources , and bluer . therefore , we confirm that there are many star clusters in this tail . in the eastern tail of ngc 3256 we also find a significant number of star clusters , but they are not as abundant as in the western tail . we have also obtained hst / wfpc2 v and i band images of the tidal debris of three other mergers : ngc 4038/9 , `` the antennae '' , ngc 7252 , `` atoms for peace '' , and ngc 3921 . we detect several cluster candidates in ngc 7252 and ngc 4038/9 , and several super star clusters in the debris of ngc 7252 and ngc 3921 . however , the debris of the remnant ngc 3256 by far contains the largest number of massive star clusters , both in the eastern and in the western tails . apparently , the conditions in this remnant are more conducive to formation of these clusters . clearly , not all tidal debris is equally conducive to the formation of these clusters . ngc 3256 is not distinguished from the other pairs by age or by total mass . however , its two tidal tails are the only ones of the eight we have studied that _ do not _ contain tidal dwarf galaxies .
ngc 7252 , for example , contains in its western tail a bright dwarf with prominent patches of star formation , and in its eastern tail an extended low surface brightness dwarf . in both dwarfs there are point sources with @xmath6 , indicating that star formation continues well after the merger @xmath7 million years ago ( hibbard & mihos 1995 ) . perhaps the formation of small stellar structures ( star clusters ) and large stellar structures ( tidal dwarfs ) are mutually exclusive . detailed comparisons of cluster positions to high resolution 21 cm maps of h i content may also suggest factors that influence the formation and/or the packaging of stars . a detailed report of results on the tidal debris in all four pairs , ngc 3256 , ngc 7252 , ngc 4038/9 , and ngc 3921 , has been submitted to the astronomical journal ( knierman et al . 2000 ) . this work was supported by nasa / stsci ( grant go-07466.01 - 96a ) .
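As a reading aid, here is a minimal sketch of the concentration index used above, i.e. the magnitude difference between a small and a large photometric aperture, so only the ratio of aperture fluxes matters and the photometric zero point cancels; the flux fractions in the example are illustrative numbers, not measurements from the paper.

import math

def concentration_index(flux_small, flux_large):
    # Magnitude in the small aperture minus magnitude in the large aperture:
    # m_small - m_large = -2.5 * log10(flux_small / flux_large).
    # Extended sources (e.g. star clusters) leak more light out of the small
    # aperture than true point sources do, so they show a larger index.
    return -2.5 * math.log10(flux_small / flux_large)

# Toy comparison: a point source keeping 90% of its flux inside the small
# aperture versus a slightly resolved cluster keeping only 70%.
print(concentration_index(0.90, 1.00))  # ~0.11 mag
print(concentration_index(0.70, 1.00))  # ~0.39 mag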
star clusters can be found in galaxy mergers , not only in central regions , but also in the tidal debris . in both the eastern and western tidal tails of ngc 3256 there are dozens of young star clusters , confirmed by their blue colors and larger concentration index as compared to sources off of the tail . tidal tails of other galaxy pairs do not have such widespread cluster formation , indicating environmental influences on the process of star formation or the packaging of the stars .
SECTION 1. SHORT TITLE. This Act may be cited as the ``Ethanol Modernization and Deficit Reduction Act''. SEC. 2. TERMINATION OF ETHANOL TAX CREDITS. (a) Excise Tax Credit and Direct Payments.--Sections 6426(b)(6) and 6427(e)(6)(A) of the Internal Revenue Code of 1986 are each amended by striking ``December 31, 2011'' and inserting ``June 30, 2011''. (b) Income Tax Credit.--Paragraph (1) of section 40(e) of such Code is amended-- (1) by striking ``December 31, 2011'' in subparagraph (A) and inserting ``June 30, 2011'', and (2) by striking ``January 1, 2012'' in subparagraph (B) and inserting ``July 1, 2011''. (c) Effective Date.--The amendments made by this section shall apply to any sale, use, or removal for any period after June 30, 2011. SEC. 3. EXTENSION AND MODIFICATION OF ALTERNATIVE FUEL VEHICLE REFUELING PROPERTY CREDIT. (a) Extension.--Subsection (g) of section 30C of the Internal Revenue Code of 1986 is amended by striking ``placed in service--'' and all that follows and inserting ``placed in service after the earlier of December 31, 2016, or the date on which the Secretary certifies that at least 53,000 qualified alternative fuel refueling properties (other than properties described in subsection (c)(2)(C)) have been placed in service.''. (b) Only Certain Ethanol Blends Eligible for Credit.--Subparagraph (A) of section 30C(c)(2) of the Internal Revenue Code of 1986 is amended to read as follows: ``(A) Any fuel-- ``(i) at least 85 percent of the volume of which consists of one or more of the following: natural gas, compressed natural gas, liquified natural gas, liquefied petroleum gas, or hydrogen, or ``(ii) at least 85 percent of the volume of which consists of-- ``(I) ethanol, or ``(II) ethanol and gasoline or one or more of the fuels described in clause (i), but only if at least 15 percent and not more than 85 percent of the volume of such fuel consists of ethanol.''. (c) Credit for Dual-Use Refueling Property.--Subsection (e) of section 30C of the Internal Revenue Code of 1986 is amended by adding at the end the following new paragraph: ``(6) Dual-use refueling property.-- ``(A) In general.--In the case of any dual-use refueling property, 100 percent of the cost of such property shall be treated as qualified alternative fuel refueling property if the taxpayer certifies, in such time and manner as the Secretary shall prescribe, that such property will be used in more than a de minimis capacity for the purposes described in section 179A(d)(3)(A) (applied as specified in subsection (c)(2)). ``(B) Recapture.--If at any time within 5 years after the date of the certification under subparagraph (A) the dual-use refueling property ceases to be used as required under such subparagraph, 100 percent of the cost of such property shall be subject to recapture under paragraph (5). ``(C) Dual-use refueling property.--For purposes of this paragraph, the term `dual-use refueling property' means property that is both qualified alternative fuel vehicle refueling property and property used-- ``(i) to store or dispense fuels not described in subsection (c)(2), or ``(ii) to store fuels described in subsection (c)(2) for any purpose other than delivery of such fuel into the fuel tank of a motor vehicle.''. (d) Effective Date.--The amendments made by this section shall apply to property placed in service after June 30, 2011. SEC. 4. EXTENSION OF CELLULOSIC BIOFUEL PRODUCER CREDIT THROUGH 2014. 
Subparagraph (H) of section 40(b)(6) of the Internal Revenue Code of 1986 is amended by striking ``January 1, 2013'' and inserting ``January 1, 2015''. SEC. 5. EXTENSION OF SPECIAL DEPRECIATION ALLOWANCE FOR CELLULOSIC BIOFUEL PLANT PROPERTY. Subparagraph (D) of section 168(l)(2) of the Internal Revenue Code of 1986 is amended by striking ``January 1, 2013'' and inserting ``January 1, 2015''. SEC. 6. ALGAE TREATED AS A QUALIFIED FEEDSTOCK FOR PURPOSES OF THE CELLULOSIC BIOFUEL PRODUCER CREDIT, ETC. (a) In General.--Subclause (I) of section 40(b)(6)(E)(i) of the Internal Revenue Code of 1986 is amended to read as follows: ``(I) is derived solely by, or from, qualified feedstocks, and''. (b) Qualified Feedstock; Special Rules for Algae.--Paragraph (6) of section 40(b) of the Internal Revenue Code of 1986, as amended by this Act, is amended by redesignating subparagraphs (F) and (G) as subparagraphs (H) and (I), respectively, and by inserting after subparagraph (E) the following new subparagraphs: ``(F) Qualified feedstock.--For purposes of this paragraph, the term `qualified feedstock' means-- ``(i) any lignocellulosic or hemicellulosic matter that is available on a renewable or recurring basis, and ``(ii) any cultivated algae, cyanobacteria, or lemna. ``(G) Special rules for algae.--In the case of fuel which is derived by, or from, feedstock described in subparagraph (F)(ii) and which is sold by the taxpayer to another person for refining by such other person into a fuel which meets the requirements of subparagraph (E)(i)(II)-- ``(i) such sale shall be treated as described in subparagraph (C)(i), ``(ii) such fuel shall be treated as meeting the requirements of subparagraph (E)(i)(II) in the hands of such taxpayer, and ``(iii) except as provided in this subparagraph, such fuel (and any fuel derived from such fuel) shall not be taken into account under subparagraph (C) with respect to the taxpayer or any other person.''. (c) Algae Treated as a Qualified Feedstock for Purposes of Bonus Depreciation for Biofuel Plant Property.-- (1) In general.--Subparagraph (A) of section 168(l)(2) of the Internal Revenue Code of 1986 is amended by striking ``solely to produce cellulosic biofuel'' and inserting ``solely to produce second generation biofuel (as defined in section 40(b)(6)(E))''. (2) Conforming amendments.--Subsection (l) of section 168 of such Code, as amended by this Act, is amended-- (A) by striking ``cellulosic biofuel'' each place it appears in the text thereof and inserting ``second generation biofuel'', (B) by striking paragraph (3) and redesignating paragraphs (4) through (8) as paragraphs (3) through (7), respectively, (C) by striking ``Cellulosic'' in the heading of such subsection and inserting ``Second Generation'', and (D) by striking ``cellulosic'' in the heading of paragraph (2) and inserting ``second generation''. (d) Conforming Amendments.-- (1) Section 40 of the Internal Revenue Code of 1986, as amended by this Act, is amended-- (A) by striking ``cellulosic biofuel'' each place it appears in the text thereof and inserting ``second generation biofuel'', (B) by striking ``Cellulosic'' in the headings of subsections (b)(6), (b)(6)(E), and (d)(3)(D) and inserting ``Second generation'', and (C) by striking ``cellulosic'' in the headings of subsections (b)(6)(C), (b)(6)(D), (b)(6)(H), (d)(6), and (e)(3) and inserting ``second generation''. 
(2) Clause (ii) of section 40(b)(6)(E) of such Code is amended by striking ``Such term shall not'' and inserting ``The term `second generation biofuel' shall not''. (3) Paragraph (1) of section 4101(a) of such Code is amended by striking ``cellulosic biofuel'' and inserting ``second generation biofuel''. (e) Effective Date.-- (1) In general.--Except as provided in paragraph (2), the amendments made by this section shall apply to fuels sold or used after the date of the enactment of this Act. (2) Application to bonus depreciation.--The amendments made by subsection (c) shall apply to property placed in service after the date of the enactment of this Act. SEC. 7. BUDGETARY EFFECTS. (a) PAYGO Scorecard.--The budgetary effects of this Act (and the amendments made by this Act) shall not be entered on either PAYGO scorecard maintained pursuant to section 4(d) of the Statutory Pay-As-You-Go Act of 2010. (b) Senate PAYGO Scorecard.--The budgetary effects of this Act (and the amendments made by this Act) shall not be recorded on any PAYGO scorecard maintained for purposes of section 201 of S. Con. Res. 21 (110th Congress).
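As a reading aid only, not part of the bill, here is a small sketch of the blend-eligibility test that section 3(b) above writes into section 30C(c)(2)(A). The 85 and 15 percent thresholds are the statutory ones; the function name and fuel labels are illustrative, and the "but only if" proviso is read here as applying to the ethanol blends of subclause (II).

def qualifies_as_alternative_fuel(fractions):
    # fractions: mapping of fuel name -> fraction of total fuel volume.
    # Clause (i): at least 85% natural gas, CNG, LNG, LPG, or hydrogen combined.
    clause_i_fuels = {"natural_gas", "cng", "lng", "lpg", "hydrogen"}
    clause_i_share = sum(v for k, v in fractions.items() if k in clause_i_fuels)
    if clause_i_share >= 0.85:
        return True
    # Clause (ii)(I): at least 85% ethanol on its own.
    ethanol = fractions.get("ethanol", 0.0)
    if ethanol >= 0.85:
        return True
    # Clause (ii)(II): at least 85% ethanol plus gasoline and/or clause (i)
    # fuels, but only if ethanol itself is 15% to 85% of the volume.
    blend_share = ethanol + fractions.get("gasoline", 0.0) + clause_i_share
    return blend_share >= 0.85 and 0.15 <= ethanol <= 0.85

# Example: an E50-style blend (50% ethanol, 50% gasoline) qualifies under (ii)(II).
print(qualifies_as_alternative_fuel({"ethanol": 0.50, "gasoline": 0.50}))  # True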
Ethanol Modernization and Deficit Reduction Act - Amends the Internal Revenue Code to: (1) advance the termination date of the income and excise tax credits for ethanol from December 31, 2011, to June 30, 2011; (2) extend the tax credit for alternative fuel vehicle refueling property expenditures and the cellulosic biofuel producer tax credit; (3) extend the bonus depreciation allowance for cellulosic biofuel plant property; and (4) revise the definition of cellulosic biofuel for purposes of the cellulosic biofuel producer tax credit. Exempts the budgetary effects of this Act from PAYGO scorecard requirements under the Statutory Pay-As-You-Go Act of 2010.
The Nineties have come calling again, as the Spice Girls are reportedly reuniting for a UK tour in 2019. According to The Sun, the pop group will return with a 13-date tour of the UK culminating in a three-day stint at Wembley stadium. The reunion is being masterminded by music mogul and Pop Idol creator Simon Fuller. The group, however, will be one spice short, as Posh Spice – Victoria Beckham – has reportedly refused to join the reunion tour. Beckham hasn't ventured into singing territory since the 2012 London Olympics closing ceremony, when the group reunited to perform a medley of hits. Instead, she now focuses on her career as a designer. Beckham's eponymous fashion label will celebrate its tenth anniversary with a show at London Fashion Week in September. However, the rest of the group have jumped at the opportunity for a reunion tour, which will reportedly earn the women £12 million each if the tour sells out. ||||| Spice Girl Mel B says she's entering a rehab facility to seek treatment for post-traumatic stress disorder (PTSD). The singer says the past six months have been "incredibly difficult" and claims she's been drinking "to numb my pain". The 43-year-old has been finalising her divorce from Stephen Belafonte, a relationship she says was abusive. The film producer was recently cleared of physically and verbally abusing Mel during their marriage. Mel B retweeted a text message that her mum Andrea had shared about the singer seeking help. In a statement Mel said that her divorce and the death of her father were the main reasons she was seeking treatment. "Sometimes it is too hard to cope with all the emotions I feel. But the problem has never been about sex or alcohol - it is underneath all that," she said. "No-one knows myself better than I do. But I am dealing with it. I love my three girls more than life itself." The Leeds-born star said she was recently diagnosed with PTSD and was speaking about it because it's a "huge issue" for lots of people. "If I can shine a light on the issue of pain, PTSD and the things men and women do to mask it, I will do," she said. Earlier this month a US judge dismissed claims the singer had made against her ex-husband Stephen Belafonte. In court papers Mel B claimed that the film producer was abusive and that he'd threatened to destroy her career by releasing a sex tape. Stephen Belafonte's legal team described the claims as "outrageous and unfounded". After being cleared of the abuse allegations, a judge refused to grant restraining orders that the pair had both filed against each other. Mel B was ordered to pay her former husband £270,000 in legal fees and more than £3,000 a month in child support payments. A recent video the star posted on Instagram showed her reading a book called Heal Your PTSD. Despite currently working as a judge on America's Got Talent alongside Simon Cowell, Mel says she plans to enter a UK-based treatment centre. What is PTSD? Post traumatic stress disorder, or PTSD, is an anxiety disorder caused by very stressful, frightening or distressing events. It's often associated with soldiers who have experienced intense combat. The NHS says that someone with PTSD may often relive the traumatic event through nightmares and flashbacks.
Sufferers may experience feelings of isolation, irritability and guilt, have problems sleeping and find it hard to concentrate. All these symptoms can have a significant impact on the person's day-to-day life. Help and information on PTSD is available through the BBC advice pages.
– Mel B has a hurdle to clear before embarking on a Spice Girls tour next year. Admitting to "drinking to numb my pain" from an allegedly abusive marriage and the death of her father last year, the 43-year-old singer and America's Got Talent judge tells the Sun she plans to enter a UK center to address sex and alcohol addictions "in the next few weeks," per the BBC. "Sometimes it is too hard to cope with all the emotions I feel." "But the problem has never been about sex or alcohol—it is underneath all that," she says, citing a recent PTSD diagnosis. "If I can shine a light on the issue of pain, PTSD and the things men and women do to mask it, I will do," she said. The mother of three will then reportedly embark on a 13-date UK 2019 tour with the Spice Girls minus Victoria Beckham, who's busy with her fashion line, the Telegraph reported Tuesday.
OSLO, Norway — A record 259 nominations have been received for this year's Nobel Peace Prize, with candidates including a Pakistani girl shot by the Taliban and a U.S. soldier accused of leaking classified material to WikiLeaks. Fifty of the nominations were for organizations. The secretive committee that awards the prize doesn't identify the nominees, but those with nomination rights sometimes announce their picks. Names put forward this year include Bradley Manning, the U.S. Army private who has admitted sending hundreds of thousands of classified documents to the secrecy-busting website WikiLeaks, and 15-year-old Malala Yousafzai, an education activist who was shot in the head by Taliban militants while on her way home from school in Pakistan. "This year's nominations come from all over the world ... well-known names, well-known presidents and prime ministers and also lesser well-known names working in humanitarian projects, human rights activists," said the Norwegian Nobel Committee's non-voting secretary Geir Lundestad, who announced the nomination numbers Monday. "In recent years, some of the Nobel Peace Prizes may have been controversial but they have added to the interest of the prize." Last year, the prize went to the European Union for promoting peace and human rights in Europe following the devastation of World War II, but not everyone approved the decision to give it to the bloc, which is dealing with a financial crisis that has led to hardship and suffering for many on the continent. Three peace prize laureates – South African Archbishop Desmond Tutu, Mairead Maguire of Northern Ireland and Adolfo Perez Esquivel from Argentina – insisted the prize money of $1.2 million should not have been paid out in 2012, arguing that the EU contradicts the prize's values because it relies on military force to ensure security. The nomination period for 2013 ended on Feb. 1. The previous record of 241 nominations was in 2011. Kristian Berg Harpviken, the director of the Peace Research Institute Oslo, and a prominent voice in the Nobel guessing game, listed Yousafzai as his favorite for this year's award, followed by the Congolese physician and gynecologist Denis Mukwege – a leading figure in the fight against sexual violence worldwide – and three Russian female human rights activists: Lyudmila Alexeyeva, Svetlana Gannushkina and Lilya Shibanova. None of Harpviken's favorites have won the prize since he started guessing in 2009. The Nobel Prizes also include awards in medicine, physics, chemistry and literature. A sixth award, the Nobel Memorial Prize in Economics, was created by the Swedish central bank in 1968 in memory of prize founder Alfred Nobel. The winners are usually announced in October and the awards are always presented on Dec. 10, the anniversary of Alfred Nobel's death in 1896. The peace prize is awarded in Oslo, while the other Nobel Prizes are presented at ceremonies in the Swedish capital, Stockholm. Last year, the Nobel Foundation decided to reduce the prize money of each of the six awards by 20 percent to 8 million kronor ($1.2 million) to help safeguard its long-term capital prospects. ||||| Dennis Rodman to Sports Illustrated: I should be considered for Nobel Peace Prize Sports Illustrated’s annual “Where Are They Now?” issue often feels like an unpredictable real-life version of Mad Libs: [Former athlete] goes to [unexpected place] and [contributes to society in surprising manner].
For 14 years now, those blanks have been filled in, simultaneously prompting trips down memory lane and offering a glimpse at life after sports. This year’s cover story is arguably the maddest of the Mad Libs: Dennis Rodman goes to North Korea in hopes of normalizing relations between the country and the United States and capturing a Nobel Peace Prize. The eccentric, uninhibited basketball Hall of Famer drew untold headlines this year when he accompanied HBO’s Vice and members of the Harlem Globetrotters on a trip to North Korea, one of the least-accessible countries to Americans and a place with one of the world’s worst records on human rights. Franz Lidz caught up with Rodman for the story behind the story, revealing the former Bulls star’s surreal experiences with North Korea’s new leadership, his plans for international diplomacy and peace, and, yes, his belief that he should be considered for a Nobel, an honor bestowed on the likes of Barack Obama, Jimmy Carter, Nelson Mandela, Elie Wiesel, Mother Teresa and Martin Luther King Jr. Rodman’s very public powwow with Kim Jong-un, the rogue state’s missile-rattling dictator, was the most existential odd coupling since Nobel Prize-winning writer Samuel Beckett met a hulking boy named André Roussimoff in the French countryside and drove the future André the Giant to school in his pickup. More astounding, Rodman got serious face and partying time with the reclusive despot, whose brother, Kim Jong-chul, used to parade around their Swiss prep school in the Worm’s Bulls jersey. At a private dinner reception, Rodman even serenaded the Supreme Leader with Sinatra’s “My Way.” Afterward, the Worm said of Kim, “guy’s really awesome” and a “friend for life.” A headline in Britain’s Daily Mirror pronounced them THE BASKETBALL ACE AND THE BASKETCASE. … Early on this warm, blustery afternoon outside the Jet Blue baggage claim at JFK, the Worm is holding forth — to his limo driver, to anyone who will listen, to the wind — on his foray into geopolitics. “Before I landed in Pyongyang, I didn’t know Kim Jong-un from Lil’ Kim,” he says. “I didn’t know what country he ruled or what went on in the country he ruled.” … “Fact is, he hasn’t bombed anywhere he’s threatened to yet. Not South Korea, not Hawaii, not … whatever. People say he’s the worst guy in the world. All I know is Kim told me he doesn’t want to go to war with America. His whole deal is to talk basketball with Obama. Unfortunately, Obama doesn’t want to have anything to do with him. I ask, Mr. President, what’s the harm in a simple phone call? This is a new age, man. Come on, Obama, reach out to Kim and be his friend.” Rodman plans to return to North Korea in August. “I’m just gonna chill, play some basketball and maybe go on vacation with Kim and his family,” Rodman says. “I’ve called on the Supreme Leader to do me a solid by releasing Kenneth Bae.” The Korean-American missionary was recently sentenced to 15 years of hard labor on charges that he tried to topple the North Korean regime. He’d organized tours into the isolated state. “My mission is to break the ice between hostile countries,” Rodman says. “Why it’s been left to me to smooth things over, I don’t know. Dennis Rodman, of all people. Keeping us safe is really not my job; it’s the black guy’s [Obama's] job.
But I’ll tell you this: If I don’t finish in the top three for the next Nobel Peace Prize, something’s seriously wrong.” Vice has posted must-see footage of Rodman’s trip to North Korea, which was highlighted by a basketball exhibition in which he sat side-by-side with Kim. The packed house’s thunderous standing ovation for its Supreme Leader — which saw a number of grown men openly weep — is jaw-dropping and eye-popping, a reminder of just how far outside the realm of normal Rodman ventured in making this trip. Where is Rodman now? Out of this world, just like always, but in ways we never could have imagined. MORE COVERAGE: SI’s 14th annual “Where Are They Now?“: Maurice Clarett Note: This week’s cover recalls a 1995 cover that featured Rodman during his second and final season with the Spurs. The following season, the two-time All-Star and two-time Defensive Player of the Year joined Michael Jordan’s Bulls, reeling off three straight titles from 1996 to 1998. ||||| A nomination for the Nobel Peace Prize may be submitted by any person who meets the nomination criteria. A letter of invitation to submit is not required. The names of the nominees and other information about the nominations cannot be revealed until 50 years later. Process of nomination and selection The Norwegian Nobel Committee is responsible for selecting the Nobel Peace Prize Laureates. A nomination for the Nobel Peace Prize may be submitted by any persons who are qualified to nominate. Qualified nominators Revised September 2016 According to the statutes of the Nobel Foundation, a nomination is considered valid if it is submitted by a person who falls within one of the following categories: • Members of national assemblies and national governments (cabinet members/ministers) of sovereign states as well as current heads of states • Members of The International Court of Justice in The Hague and The Permanent Court of Arbitration in The Hague • Members of Institut de Droit International • University professors, professors emeriti and associate professors of history, social sciences, law, philosophy, theology, and religion; university rectors and university directors (or their equivalents); directors of peace research institutes and foreign policy institutes • Persons who have been awarded the Nobel Peace Prize • Members of the main board of directors or its equivalent for organizations that have been awarded the Nobel Peace Prize • Current and former members of the Norwegian Nobel Committee (proposals by current members of the Committee to be submitted no later than at the first meeting of the Committee after 1 February) • Former advisers to the Norwegian Nobel Committee Unless otherwise stated the term members shall be understood as current (sitting) members. Candidacy criteria The candidates eligible for the Nobel Peace Prize are those persons or organizations nominated by qualified individuals, see above. A nomination for yourself will not be taken into consideration. Selection of Nobel Laureates The Norwegian Nobel Committee is responsible for the selection of eligible candidates and the choice of the Nobel Peace Prize Laureates. The Committee is composed of five members appointed by the Storting (Norwegian parliament). The Nobel Peace Prize is awarded in Oslo, Norway, not in Stockholm, Sweden, where the Nobel Prizes in Physics, Chemistry, Physiology or Medicine, Literature and the Economics Prize are awarded. How are the Nobel Laureates selected? 
Below is a brief description of the process involved in selecting the Nobel Peace Prize Laureates. September – The Norwegian Nobel Committee prepares to receive nominations. These nominations will be submitted by members of national assemblies, governments, and international courts of law; university chancellors, professors of social science, history, philosophy, law and theology; leaders of peace research institutes and institutes of foreign affairs; previous Nobel Peace Prize Laureates; board members of organizations that have received the Nobel Peace Prize; present and past members of the Norwegian Nobel Committee; and former advisers of the Norwegian Nobel Institute. February – Deadline for submission. In order to be considered for the award of the year, nominations for the Nobel Peace Prize shall be sent in to the Norwegian Nobel Committee in Oslo before the 1st day of February the same year. Nominations postmarked and received after this date are included in the following year's discussions. In recent years, the Committee has received close to 200 different nominations for the Nobel Peace Prize. The number of nominating letters is much higher, as many are for the same candidates. February-March – Short list. The Committee assesses the candidates' work and prepares a short list. March-August – Adviser review. October – Nobel Laureates are chosen. At the beginning of October, the Nobel Committee chooses the Nobel Peace Prize Laureates through a majority vote. The decision is final and without appeal. The names of the Nobel Peace Prize Laureates are then announced. December – Nobel Laureates receive their prize. The Nobel Peace Prize Award Ceremony takes place on 10 December in Oslo, Norway, where the Nobel Laureates receive their Nobel Prize, which consists of a Nobel Medal and Diploma, and a document confirming the prize amount. Are the nominations made public? The statutes of the Nobel Foundation restrict disclosure of information about the nominations, whether publicly or privately, for 50 years. The restriction concerns the nominees and nominators, as well as investigations and opinions related to the award of a prize. Submission of nominations The Norwegian Nobel Committee has launched an on-line nomination form that you can use if you are a qualified nominator (see the list 'Qualified nominators' above). Deadline for nominations The nomination deadline is 31 January at 12 midnight CET. Nominations which do not meet the deadline are normally included in the following year's assessment. Members of the Nobel Committee are entitled to submit their own nominations as late as at the first meeting of the Committee after the expiry of the deadline. Submission confirmation A letter or e-mail confirming the receipt and validity of the submitted nomination is normally sent out within a couple of months of the submission deadline. Selection process At the first meeting of the Nobel Committee after the February 1 deadline for nominations, the Committee's Permanent Secretary presents the list of the year's candidates. The Committee may on that occasion add further names to the list, after which the nomination process is closed, and discussion of the particular candidates begins. In the light of this first review, the Committee draws up the so-called short list – i.e. the list of candidates selected for more thorough consideration. The short list typically contains from twenty to thirty candidates.
The candidates on the short list are then considered by the Nobel Institute's permanent advisers. In addition to the Institute's Director and Research Director, the body of advisers generally consists of a small group of Norwegian university professors with broad expertise in subject areas with a bearing on the Peace Prize. The advisers usually have a couple of months in which to draw up their reports. Reports are also occasionally requested from other Norwegian and foreign experts. When the advisers' reports have been presented, the Nobel Committee embarks on a thorough-going discussion of the most likely candidates. In the process, the need often arises to obtain additional information and updates about candidates from additional experts, often foreign. As a rule, the Committee reaches a decision only at its very last meeting before the announcement of the Prize at the beginning of October. The Committee seeks to achieve unanimity in its selection of the Peace Prize Laureate. On the rare occasions when this proves impossible, the selection is decided by a simple majority vote. 50-year secrecy rule The Committee does not itself announce the names of nominees, either to the media or to the candidates themselves. Insofar as certain names crop up in the advance speculations as to who will be awarded any given year's Prize, this is either sheer guesswork or information put out by the person or persons behind the nomination. Information in the Nobel Committee's nomination database is not made public until after fifty years.
– Dennis Rodman is pretty pleased with himself and the important work he did over in North Korea. So pleased, in fact, that he offers up this amazing quote to Sports Illustrated: "My mission is to break the ice between hostile countries. Why it’s been left to me to smooth things over, I don’t know. Dennis Rodman, of all people. Keeping us safe is really not my job; it’s the black guy’s [Obama's] job. But I’ll tell you this: If I don’t finish in the top three for the next Nobel Peace Prize, something’s seriously wrong." Er, there are just a few teensy problems with that plan, like the fact that the Nobel nomination period ended on February 1, and that you can't nominate yourself. But rather than dwell on that, let's consider the rest of Rodman's interview: On his knowledge of North Korea: "Before I landed in Pyongyang, I didn’t know Kim Jong Un from Lil’ Kim. I didn’t know what country he ruled or what went on in the country he ruled." On why Kim is not so bad: "Fact is, he hasn’t bombed anywhere he’s threatened to yet. Not South Korea, not Hawaii, not … whatever. People say he’s the worst guy in the world. All I know is Kim told me he doesn’t want to go to war with America. His whole deal is to talk basketball with Obama." On his plan to return to North Korea next month: "I’m just gonna chill, play some basketball, and maybe go on vacation with Kim and his family. I’ve called on the Supreme Leader to do me a solid by releasing Kenneth Bae."
broad absorption line ( bal ) qsos show very broad blueshifted absorption troughs indicating outflows up to @xmath5 , with covering fractions @xmath6 @xcite , and soft x - ray hydrogen column densities often exceeding @xmath7 @xcite . these outflows carry a significant fraction of the accretion power and there is evidence that they are radiatively driven @xcite , suggesting that bal qso black holes accrete close to the eddington limit . many bal qsos , especially those with low ionization absorption troughs , are reddened @xcite , suggesting that bal qsos may be young qsos ejecting a birth - cocoon of dusty gas to reveal the unobscured optical qso @xcite . with the discovery of many bal qsos at @xmath8 , we are able to explore this idea by comparing eddington accretion ratio ( @xmath2 ) directly with fueling indicators in bal and non - bal qsos . we can estimate @xmath9 and hence @xmath2 for @xmath8 qsos using observed continuum luminosity and fwhm h@xmath0 in the following way : if qso broad line region ( blr ) gas motion is dominated by gravity , the black hole mass ( @xmath9 ) can be calculated from the typical orbital radius of blr clouds and their typical velocity : @xmath10 . blr radius @xmath11 can be represented by blr size ( @xmath12 ) derived from h@xmath0 reverberation mapping : @xmath12 can be estimated from continuum luminosity at rest wavelength 5100 : @xmath13@xmath14 @xcite ; @xmath15 can be represented by h@xmath0 full width at half maximum ( fwhm ) . the @xmath9 so derived agree with the @xmath9 derived independently from the @xmath9 - bulge luminosity and @xmath16 relationships for nearby qsos , where @xmath17 is the velocity dispersion of the bulge @xcite . the fueling could be indicated by correlated emission line properties . one of the strongest sets of relationships among qso properties describes the increasing strength of optical with decreasing [ ] @xmath185007 , increasing steepness of the soft x - ray continuum , and decreasing width and stronger blue wing of the broad h@xmath0 emission line ( * ? ? ? * ; * ? ? ? * ; * ? ? ? * hereinafter bg92 ) . here we call this set of relationships boroson & green eigenvector 1 ( bgev1 ) . it has been suggested that bgev1 is driven by @xmath2 for the following reasons . * for a given luminosity , narrower h@xmath0 corresponds to smaller @xmath9 and higher @xmath2 . * by analogy with galactic black hole binary systems , it has been suggested that steeper x - ray spectra indicate higher @xmath2 @xcite . near - eddington accretion results in geometrically thick accretion disks which produce excess soft x - ray photons ( e.g. * ? ? ? * ) . * strong optical emission and weak [ ] emission indicate high optical depth , high density nuclear gas that may provide an abundant fuel supply for near - eddington accretion . optically thick gas reduces ionizing photons that reach the [ ] narrow line region ( e.g. * ? ? ? strong suggests that this gas may have been metal - enriched by star formation . the few known bal qsos at low @xmath3 tend to have narrow h@xmath0 , weak [ ] and strong emission @xcite suggesting high @xmath2 and an abundant fuel supply . most bals are discovered in qsos with @xmath19 , where the broad troughs are shifted into the optical window . for those redshifts the h@xmath0 region is shifted to the near infrared . we therefore performed near infrared spectroscopy , taking advantage of new qso surveys that do not use uv selection and yield many more bal qsos than previously recognized @xcite . 
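The virial black-hole-mass recipe just described (BLR radius from the continuum luminosity, velocity from FWHM Hbeta, then the Eddington ratio from the bolometric luminosity) can be sketched in a few lines of Python. The radius-luminosity normalisation, slope and bolometric correction below are commonly quoted Kaspi-style stand-ins inserted for illustration only; the paper's own coefficients appear above as tokens, so the numbers returned are indicative rather than the authors' values.

import numpy as np

G = 6.674e-8             # gravitational constant, cm^3 g^-1 s^-2
C = 2.998e10             # speed of light, cm s^-1
M_SUN = 1.989e33         # solar mass, g
L_EDD_PER_MSUN = 1.26e38 # Eddington luminosity per solar mass, erg s^-1

def virial_estimates(lam_L5100, fwhm_hbeta_kms,
                     r_norm_ltdays=32.9, l_norm=1e44, slope=0.7, bol_corr=9.0):
    """Virial black-hole mass M = R_BLR * v^2 / G and Eddington ratio, with
    R_BLR = r_norm * (lam_L5100 / l_norm)**slope in light-days and v = FWHM(Hbeta).
    The normalisation, slope and bolometric correction are illustrative
    assumptions, not the coefficients used in the paper."""
    r_blr_cm = r_norm_ltdays * (lam_L5100 / l_norm) ** slope * C * 86400.0
    v_cms = fwhm_hbeta_kms * 1.0e5
    m_bh_msun = r_blr_cm * v_cms ** 2 / G / M_SUN
    l_bol = bol_corr * lam_L5100
    edd_ratio = l_bol / (L_EDD_PER_MSUN * m_bh_msun)
    return m_bh_msun, edd_ratio

m, f = virial_estimates(lam_L5100=1.0e46, fwhm_hbeta_kms=4000.0)
print(f"M_BH ~ {m:.1e} M_sun, L_bol/L_Edd ~ {f:.2f}")

With these assumed coefficients, a luminous quasar with lambda*L_5100 of about 1e46 erg/s and FWHM of about 4000 km/s comes out at a few billion solar masses radiating at an appreciable fraction of Eddington, in line with the near-Eddington accretion discussed in the text.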
in this letter , we investigate whether bal qsos have extreme bgev1 properties , using fwhm h@xmath0 and continuum luminosity to calculate @xmath2 , and then compare @xmath2 with and [ ] strengths . detailed discussion of the observations , reductions and analysis , with results from a larger data set , will be published later . our high redshift ( @xmath8 ) sample consists of 37 qsos . we observed 11 bal qsos from the large bright qso survey , the first bright qso survey and @xcite . qsos of similar redshift and luminosity were observed by ( * ? ? ? * hereinafter m99a ) . we included their observations having acceptable signal - to - noise ratio , to make a total sample of 17 bal qsos , 13 radio - loud ( rl ) qsos and 7 non - bal radio - quiet ( rq ) qsos . we and m99a chose the brightest optically - selected qsos at @xmath8 . our new spectra were obtained with the cgs4 near infrared spectrograph on the united kingdom infra - red telescope ( ukirt ) . the detector was the 256x256 insb nicmos array . with the 40 l mm@xmath20 low resolution grating , a slit width of 1.2 yielded a resolution of 3 pixels as measured from ar and xe comparison lamp spectra ( @xmath21 km s@xmath20 in j band and @xmath22 km s@xmath20 in h band ) . the two - dimensional spectral images were reduced using ukirt s orac data reduction pipeline . we used standard iraf tasks to optimally extract the spectra and calibrate their wavelength scales . atmospheric features were removed and spectral shape calibration was done by division by the observed spectra of f - g stars using a temperature implied by the spectral type . the f - g stars and the wavelength comparison lamps were observed within 0.1 airmasses of the qsos . our reduced spectra have signal - to - noise ratios of from 10 to 20 per 3-pixel bin near h@xmath0 . measuring line and continuum properties in the h@xmath0 region is complicated by the blending of many broad optical lines . we used specfit @xcite within iraf to deblend spectral components for both our ukirt and the m99a spectra . we decomposed each spectrum into a powerlaw continuum , broad and narrow h@xmath0 components , a broad h@xmath23 component , a narrow - line [ ] doublet and emission blends . the fixed parameters were the power - law continuum parameters ( slope and normalization estimated by eye ) , the [ ] @xmath244959,5007 doublet ratio of 2.94 , the h@xmath23/h@xmath0 ratio of 0.36 , and the narrow h@xmath0 to [ ] @xmath185007 ratio of 0.1 @xcite . except for some m99a spectra with known broad [ ] lines , the narrow h@xmath0 and [ ] lines were represented by single gaussians with width equal to the instrumental resolution ( bg92 ) . the emission blends were represented by the bg92 i zw 1 template . all broad lines including were assumed to have the same gaussian profile . rest wavelength ratios were constrained except for some mcintosh et al . objects with shifted [ ] lines @xcite . free parameters were the intensities of [ ] , broad h@xmath0 , the blends , and broad line widths . in many cases , since a single gaussian profile was inadequate to represent the broad h@xmath0 line , we adopted the following procedure after running specfit . we first subtracted all fitted components except broad h@xmath0 and smoothed the remaining h@xmath0 by fitting multiple gaussian profiles , each at least as wide as the instrumental resolution , then measured the integrated flux and fwhm of the smoothed profile . 
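As a rough illustration of the deblending strategy just described, the sketch below fits a power-law continuum plus Gaussian emission components with the quoted fixed ratios ([OIII] 5007/4959 = 2.94 and narrow Hbeta = 0.1 x [OIII] 5007) to a synthetic spectrum. It is only a minimal stand-in for the specfit decomposition: the FeII template, the second broad component blended with Hbeta and the multi-Gaussian smoothing of the broad profile are omitted, and all starting values are arbitrary.

import numpy as np
from scipy.optimize import curve_fit

def gauss(wave, flux, center, sigma):
    # Gaussian emission line parametrised by its integrated flux
    return flux / (sigma * np.sqrt(2.0 * np.pi)) * np.exp(-0.5 * ((wave - center) / sigma) ** 2)

def hbeta_model(wave, pl_norm, pl_slope, f_broad, sig_broad, f_oiii5007, sig_narrow):
    # power-law continuum + broad Hbeta + narrow Hbeta + [OIII] doublet, with the
    # fixed ratios quoted in the text; no FeII template in this toy model
    cont = pl_norm * (wave / 5100.0) ** pl_slope
    return (cont
            + gauss(wave, f_broad, 4861.3, sig_broad)
            + gauss(wave, 0.1 * f_oiii5007, 4861.3, sig_narrow)
            + gauss(wave, f_oiii5007 / 2.94, 4958.9, sig_narrow)
            + gauss(wave, f_oiii5007, 5006.8, sig_narrow))

# synthetic rest-frame spectrum with noise, then a fit recovering the inputs
rng = np.random.default_rng(0)
wave = np.linspace(4600.0, 5200.0, 600)
truth = (1.0, -1.5, 60.0, 35.0, 6.0, 3.0)
flux = hbeta_model(wave, *truth) + rng.normal(0.0, 0.02, wave.size)

popt, pcov = curve_fit(hbeta_model, wave, flux, p0=(1.0, -1.0, 40.0, 30.0, 4.0, 2.5))
fwhm_broad_kms = 2.3548 * popt[3] / 4861.3 * 2.998e5
print(f"recovered FWHM(broad Hbeta) ~ {fwhm_broad_kms:.0f} km/s")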
we tested our measurement method on bg92 spectra , finding no systematic difference from their results . for objects without measurable [ ] or lines , we estimated @xmath25 upper limits based on the signal - to - noise ratio and spectral resolution of each spectrum . the typical rms uncertainties of our measurements for the high redshift qsos are 10% for [ ] @xmath185007 equivalent width ( ew ) for objects with detected [ ] , 10% for fwhm h@xmath0 , and about 20% for /h@xmath0 ratios . figure [ spectra ] shows the fits for three typical spectra . the bal qso spectrum is from our new ukirt data . the rl and rq qso spectra are from our refitting of the m99a data . to calculate @xmath9 and @xmath2 , we need @xmath26 at rest frame wavelengths of 5100 and 3000 to estimate the blr size , and the bolometric luminosity : @xmath27@xmath28 @xcite . bal qsos tend to have reddened continua compared with optically selected non - bal qsos , so the @xmath26 were calculated using h band magnitudes from the the 2mass survey , assuming @xmath29 . using the most recent bal qso colors derived from the sloan digital sky survey @xcite , we estimate a generous upper limit for extinction at the observed h band correponding to @xmath300.8 mag for @xmath8 , corresponding to under - estimates of black hole mass and eddington ratio by factors of @xmath31 and @xmath32 respectively . such small corrections would not change our results significantly . we assumed a cosmological model with @xmath33 , @xmath34 and @xmath35 @xcite . table [ datatable ] lists the line measurements and calculated @xmath36 for our high @xmath3 qsos . the complete table is available electronically . cccccc 0046 + 0104 & bal & 11.3 & 3.0 & 3.3 & 0.78 + 0052 + 0101 & rq & 6.4 & 5.6 & 5.1 & 0.9 + 0052 + 0140 & rq & 5.7 & 2.9 & 0.7 & 3.2 + 0126 + 2559 & rl & 20.9 & 1.7 & 0.4 & 6.76 + 0157 + 7442 & rl & 19.7 & 1.3 & 19.4 & 0.42 + 0228@xmath370337 & rl & 23.9 & 1.2 & 1.9 & 1.07 + 0228@xmath371011 & bal & 7.0 & 3.3 & 9.5 & 0.26 + 0424 + 0204 & rl & 50.6 & 1.3 & 15.5 & 0.30 + 0427@xmath371302 & rl & 29.9 & 1.0 & 8.6 & 0.47 + 0555 + 3948 & rl & 14.9 & @xmath381.8 & 1.9 & 1.71 + 0724 + 4159 & bal & @xmath385.6 & 7.4 & 1.6 & 1.91 + 0841 + 7053 & rl & 4.8 & 5.1 & 4.0 & 1.86 + 0845 + 3420 & bal & 5.6 & 4.2 & 16.4 & 0.18 + 0913 + 3944 & bal & @xmath383.0 & 8.7 & 0.1 & 9.48 + 0934 + 3153 & bal & @xmath381.8 & 3.1 & 5.1 & 0.80 + 1013 + 0851 & bal & @xmath381.3 & 4.6 & 14.3 & 0.39 + 1054 + 2536 & bal & @xmath382.7 & 4.2 & 2.5 & 1.07 + 1106@xmath371821 & rq & 12.9 & 2.3 & 4.4 & 1.0 + 1225 + 2235 & rq & 8.9 & 1.9 & 35.8 & 0.3 + 1228 + 3128 & rl & @xmath380.9 & 5.1 & 40.8 & 0.39 + 1231 + 0725 & rl & 8.3 & @xmath380.7 & 3.5 & 1.04 + 1233 + 1304 & bal & @xmath386.2 & 3.1 & 8.3 & 0.41 + 1234 + 1308 & bal & @xmath383.0 & 2.4 & 2.8 & 1.04 + 1249@xmath370559 & bal & @xmath382.3 & 5.0 & 22.9 & 0.51 + 1250 + 2631 & rq & 10.6 & 1.5 & 11.4 & 1.2 + 1311@xmath370552 & bal & @xmath381.3 & 6.0 & 4.8 & 0.94 + 1333 + 1649 & rl & 18.7 & 1.7 & 17.5 & 0.58 + 1348@xmath370353 & rq & 4.3 & 3.3 & 3.9 & 1.3 + 1418 + 0852 & rq & 8.2 & 2.3 & 4.6 & 1.0 + 1420 + 2534 & bal & @xmath384.9 & 11.1 & 3.0 & 1.07 + 1436 + 6336 & rl & 11.6 & 1.1 & 16.4 & 0.31 + 1445 + 0129 & bal & @xmath383.7 & 2.6 & 1.1 & 1.13 + 1445@xmath370023 & bal & @xmath384.2 & 9.5 & 8.0 & 0.17 + 1451@xmath372329 & rl & 22.4 & 1.4 & 4.8 & 1.20 + 1516 + 0029 & bal & @xmath382.1 & 1.4 & 6.7 & 0.48 + 2215@xmath371744 & bal & 20.3 & 1.7 & 15.4 & 0.25 + 2312 + 3847 & rl & 28.4 & 5.3 & 5.4 & 0.63 + figure [ lbolhbfwhm ] plots fwhm 
h@xmath0 against bolometric luminosity . lines of constant @xmath9 and @xmath2 are indicated . high redshift qsos ( large symbols ) are compared with the 87 low redshift bg92 qsos from which the original bgev1 relationships were derived ( small symbols ) . different sub - samples of qsos are distinguished . the @xmath2 for our flat - spectrum rl qsos might be overestimated if the continuum is beamed @xcite , and fwhm h@xmath0 may underestimate the true virial speed as a result of viewing disk motions pole - on ( e.g. * ? ? ? the brightest qsos at @xmath39 are the most luminous , introducing a bias toward the highest @xmath2 and @xmath9 . the high @xmath3 bal qsos , while having high @xmath2 , are not clearly different from the high @xmath3 non - bal qsos , even after applying the orientation corrections . figure [ o3fe ] shows that high @xmath3 qsos , in particular the bal qsos , extend the inverse relationship between /h@xmath0 and ew [ ] to stronger and weaker [ ] . thirteen out of seventeen high @xmath3 bal qsos do not even have detectable [ ] emission lines . taking into account the [ ] upper limits , the generalized kendall s tau correlation coefficient indicates that the 1-tailed probability for the versus [ ] correlation to arise by chance is @xmath40 for either the entire sample or the high @xmath3 qsos alone . the trend for low @xmath3 rl and bal qsos to lie at opposite extremes has been noted before ( bg92 ) . the same appears to be true among the high @xmath3 qsos . the 1-tailed probability of high @xmath3 bal qsos and low @xmath3 rq qsos having the same line strength distribution is @xmath41 for /h@xmath0 and @xmath41 for ew [ ] . . figures [ fem ] and [ femdot ] show @xmath9 and @xmath2 versus /h@xmath0 . in fig . [ fem ] , the @xmath8 qsos lie at significantly larger @xmath9 ; in fig . [ femdot ] , these high redshift qsos extend the low @xmath3 relationship to higher @xmath2 . for both high and low @xmath3 qsos , the 2-tailed probability of the @xmath2 versus /h@xmath0 correlation arising by chance is @xmath40 . the @xmath9 versus /h@xmath0 correlation for low @xmath3 qsos may simply reflect the inverse dependence of @xmath9 on @xmath2 , given a limited luminosity range . there is also an inverse correlation between @xmath2 and ew [ ] for both high and low @xmath3 qsos ( not shown ) , with most bal qsos lying at the weak [ ] extreme ( see fig . [ o3fe ] ) . the 2-tailed probability of this correlation arising from unrelated variables is @xmath40 . the above correlations demonstrate that bgev1 is indeed related to @xmath2 rather than @xmath9 . _ if bgev1 properties are @xmath2 indicators , why do high @xmath3 bal and non - bal qsos have different bgev1 properties despite their similar @xmath2 ? _ one possibility is that all our high @xmath3 qsos belong to the same parent population but bal qsos are viewed from special angles . this would imply that bgev1 emission line properties are affected by orientation . more likely , the bgev1 emission line relationships may be only indirectly related to @xmath2 . they may depend more directly on the availability of fuel around the black hole . stronger and weaker [ ] correspond to an abundance of cold gas , which could fuel high eddington ratio accretion . for low @xmath2 objects , which are found mostly in the low @xmath3 sample , increasing the fuel supply increases the accretion rate . 
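The rank-correlation tests quoted above can be reproduced in outline with scipy. Note, however, that scipy's kendalltau treats every value as a detection, whereas the generalized Kendall's tau used in the paper also incorporates the [OIII] upper limits; handling that censoring properly would require a survival-analysis implementation not shown here. The arrays below are arbitrary illustrative numbers, not the tabulated measurements.

import numpy as np
from scipy.stats import kendalltau

# arbitrary illustrative values (NOT the paper's measurements) for an
# anti-correlated pair such as FeII/Hbeta versus EW [OIII]
fe2_hb = np.array([0.4, 0.7, 1.9, 3.0, 3.3, 4.2, 5.0, 8.7])
ew_oiii = np.array([40.8, 35.8, 16.4, 11.3, 7.0, 5.6, 2.3, 3.0])

# plain Kendall's tau, ignoring upper limits
tau, p_two_tailed = kendalltau(fe2_hb, ew_oiii)
print(f"tau = {tau:.2f}, one-tailed p = {p_two_tailed / 2.0:.3g}")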
at higher luminosities ( high @xmath3 ) bal qsos may be the youngest qsos with the most abundant fuel supplies , as indicated by very strong and very weak [ ] emission , but there is not a corresponding increase in accretion ratio because they are unable to radiate at @xmath42 we thank p. hirst for observing support , d. wills for comments on a draft , d. h. mcintosh for generously providing h band spectra for high redshift qsos , t. a. boroson for spectra of the bg92 sample , and roc cutri for providing unpublished 2mass magnitudes . the united kingdom infra - red telescope is operated by the joint astronomy centre on behalf of the u.k . particle physics and astronomy research council . arav , n. 1997 , in asp conf . 128 , mass ejection from active galactic nuclei , ed . n. aravm , i. shlosman , & r. j. weymann , 264 becker , r. h. , white , r. l. , gregg , m. d. , brotherton , m. s. , laurent - muehleisen , s. a. & arav , n. 2000 , , 538 , 72 boroson , t. a. 2002 , , 565 , 78 boroson , t. a. , & green , r. f. 1992 , , 80 , 109 ( bg92 ) brandt , w. n. , laor , a. & wills , b. j. 2000 , , 528 , 637 corbin , m. 1997 , , 113 , 245 done , c. , pounds , k. a. , nandra , k. & fabian , a. c. 1995 , , 275 , 417 fabian , a. c. 1999 , , 308 , l39 freedman , w. 2002 , int . j. mod . . , a17s1 , 58 gallagher , s. c. , brandt , w. n. , chartas , g. & garmire , g. p. 2002 , , 567 , 37 gebhardt , k. , kormendy , j. , ho , l. c. & bender , r. 2000 , , 543 , 5 grupe , d. , beuermann , k. , mannheim , k. and thomas , h .- c . 1999 , , 350 , 805 haehnelt , m. g. , natarajan , p. & rees , m. j. 1998 , , 300 , 817 hall , p. b. et al . 2002 , , 141 , 267 hewett , p. c. , & foltz , c. b. 2003 , , 125 , 1784 kaspi , s. , smith , p. s. , netzer , h. , maoz , d. jannuzi , b. t. & giveon , u. 2000 , , 533 , 631 kriss , g. 1994 , in asp conf . 61 , astronomical data analysis software and systems iii , ed . d. r. crabtree , r.j . hanisch , & j. barnes , 437 krolik , j. h. 2001 , , 551 , 72 lacy , m. , laurent - muehleisen , s. a. , ridgway , s. e. , becker , r. h. & white , r. l. , 505 , l83 laor , a. , fiore , f. , elvis , m. , wilkes , b. j. & mcdowell , j. c. 1997 , , 477 , l93 laor , a. 1998 , , 505 , l83 laor , a. 2000 , , 543 , l111 low , f. j. , cutri , r. m. , huchra , j. p. & kleinmann , s. g. 1988 , , 327 , l41 mcintosh , d. h. , rieke , m. j. , rix , h .- w . , foltz , c. b. , weymann , r. j. 1999 , , 514 , 40 ( m99a ) mcintosh , d. h. , rix , h .- w . , rieke , m. j. , foltz , c. b. 1999 , , 517 , l73 ( m99b ) mclure , r. j. & dunlop , j. s. 2002 , , 331 , 795 padovani , p. & urry , c. m. 1992 , , 387 , 449 peterson , b. m. & wandel , a. 1999 , , 521 , l95 pounds , k. a. , done , c. & osborne , j. p. 1995 , , 277 , l5 reichard , t. a. et al . 2003 , , 125 , 1711 shang , z. , wills , b. j. , edward , l. r. , wills , d. , laor , a. , xie , b. & yuan , j. 2003 , , 586 , 52 storey , p. j. & zeippen , c. j. 2000 , , 312 , 813 tanaka , y. & shibazaki , n. 1996 , 34 , 607 tolea , a. , krolik , j. h. & tsvetanov , z. 2002 , , 578 , l31 turnshek , d. a. , monier , e. m. , sirola , c. j. , & espey , b. r. 1997 , , 476 , 40 veilleux , s. & osterbrock , d. e. 1987 , , 63 , 295 vestergaard , m. 2002 , , 571 , 733 weymann , r.j . , morris , s.l . , foltz , c.b . , & hewett , p.c . 1991 , , 373 , 23 woo , j .- h . & urry , c. m. 2002 , , 579 , 530
broad absorption line ( bal ) qsos have been suggested to be youthful super - accretors based on their powerful radiatively driven absorbing outflows and often reddened continua . to test this hypothesis , we observed near ir spectra of the h@xmath0 region for 11 bright bal qsos at redshift @xmath12 . we measured these and literature spectra for 6 bal qsos , 13 radio - loud and 7 radio - quiet non - bal qsos . using the luminosity and h@xmath0 broad line width to derive black hole mass and accretion rate , we find that both bal and non - bal qsos at @xmath12 tend to have higher @xmath2 than those at low @xmath3 probably a result of selecting the brightest qsos . however , we find that the high @xmath3 qsos , in particular the bal qsos , have extremely strong and very weak [ ] , extending the inverse relationship found for low @xmath3 qsos . this suggests that , even while radiating near @xmath4 , the bal qsos have a more plentiful fuel supply than non - bal qsos . comparison with low @xmath3 qsos shows for the first time that the inverse [ ] relationship is indeed related to @xmath2 , rather than black hole mass .
the aim of this study is to investigate the algebraic topological properties of heavy tail distributions , relying on a ubiquitous tool in topological data analysis ( tda ) . topological data analysis is a growing research area that broadly refers to the analysis of high - dimensional and incomplete datasets , using concepts from algebraic topology , while borrowing ideas and techniques from other fields in mathematics @xcite . the most typical approach to tda is probably _ persistent homology _ , which originated in computational topology and appears in a wide range of applications , including sensor networks @xcite , bioinformatics @xcite , computational chemistry @xcite , manifold learning @xcite , and linguistics @xcite . a standard approach in tda usually starts with a point cloud @xmath1 of points in @xmath0 , from which more complex sets are constructed . two such examples are the union of balls @xmath2 , where @xmath3 is a closed ball of radius @xmath4 about the point @xmath5 , and the _ ech complex _ , @xmath6 . [ cech : defn ] let @xmath7 be a collection of points in @xmath0 and @xmath4 be a positive number . then , the ech complex @xmath6 is defined as follows . 1 . the @xmath8-simplices are the points in @xmath7 . a @xmath9-simplex @xmath10 $ ] belongs to @xmath6 whenever a family of closed balls @xmath11 has a nonempty intersection . . since three balls with radius @xmath12 centered at @xmath13 have a common intersection , the @xmath14-simplex @xmath15 $ ] belongs to @xmath16 . there also exists a @xmath17-simplex @xmath18 $ ] , which adds a tetrahedron on the right figure . , width=340 ] in addition to the ech complex , there are many other _ simplicial complexes _ , such as the vietoris - rips and alpha complexes ( see , e.g. , @xcite ) . however , throughout the current paper , we concentrate on the ech complex . one reason for doing so is its topological equivalence to the union of balls . indeed , according to the nerve theorem @xcite , the ech complex and the union of balls are homotopy equivalent , and thus , they represent the same topological object . furthermore , ech complexes are regarded as higher - dimensional analogues of _ geometric graphs _ , and therefore , many of the techniques developed thus far in random geometric graph theory ( see , e.g. , @xcite ) are also applicable to random ech complexes . a standard topological argument classifies objects such as ech complexes , usually in terms of homological concepts , etc . given a topological space @xmath19 , the _ @xmath8-th homology group _ @xmath20 consists of elements that represent connected components in @xmath19 , while for @xmath21 , the _ @xmath22-th homology group _ @xmath23 is generated by elements representing @xmath22-dimensional holes " or cycles " in @xmath19 . then , for @xmath24 , the _ @xmath22-th betti number _ @xmath25 is defined as the rank of @xmath23 and is the quantifier of topology that is central to the entire study in this paper . more intuitively , @xmath26 counts the number of connected components in @xmath19 , while @xmath25 , @xmath21 , measures the number of @xmath22-dimensional holes or cycles in @xmath19 . for example , a one - dimensional sphere , i.e. , a circle , has @xmath27 , @xmath28 , and @xmath29 for all @xmath30 . a two - dimensional sphere has @xmath27 , @xmath31 , and @xmath32 , and all others zero . in the case of a two - dimensional torus , the non - zero betti numbers are @xmath27 , @xmath33 , and @xmath32 . 
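The definition of the Čech complex above translates directly into code. The sketch below builds the complex up to dimension 2 for Euclidean points, using the fact that closed balls of radius r about a set of centres have a common point exactly when the minimum enclosing ball of those centres has radius at most r; the sample points and radius at the end are arbitrary.

import numpy as np
from itertools import combinations

def meb_radius_3(pts):
    """Radius of the minimum enclosing ball of three points in Euclidean space."""
    pts = [np.asarray(p, float) for p in pts]
    # if some edge spans a diameter (obtuse or right triangle), that ball suffices
    for i, j in combinations(range(3), 2):
        centre = 0.5 * (pts[i] + pts[j])
        rad = 0.5 * np.linalg.norm(pts[i] - pts[j])
        k = 3 - i - j
        if np.linalg.norm(pts[k] - centre) <= rad + 1e-12:
            return rad
    # otherwise (acute triangle) the minimum enclosing ball is the circumscribed ball
    a = np.linalg.norm(pts[1] - pts[2])
    b = np.linalg.norm(pts[0] - pts[2])
    c = np.linalg.norm(pts[0] - pts[1])
    s = 0.5 * (a + b + c)
    area = np.sqrt(max(s * (s - a) * (s - b) * (s - c), 0.0))
    return a * b * c / (4.0 * area)

def cech_simplices(points, r):
    """Simplices of the Cech complex up to dimension 2: a simplex is included when
    the closed balls of radius r about its vertices have a common point, i.e. when
    the minimum enclosing ball of the vertices has radius <= r."""
    points = np.asarray(points, float)
    n = len(points)
    simplices = [(i,) for i in range(n)]                      # 0-simplices: the points
    for i, j in combinations(range(n), 2):                    # 1-simplices: edges
        if np.linalg.norm(points[i] - points[j]) <= 2.0 * r:
            simplices.append((i, j))
    for i, j, k in combinations(range(n), 3):                 # 2-simplices: triangles
        if meb_radius_3([points[i], points[j], points[k]]) <= r:
            simplices.append((i, j, k))
    return simplices

# three mutually close points form a filled triangle; the fourth point stays isolated
pts = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.8), (5.0, 5.0)]
print(cech_simplices(pts, r=0.7))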
at a more formal level , we need a rigorous coverage of homology theory ( see , e.g. , @xcite or @xcite ) ; however , the essence of this paper can be captured without knowledge of homology theory . in the sequel , simply viewing @xmath25 as the number of @xmath22-dimensional holes will suffice . of a two - dimensional sphere is zero ; even if one winds a closed loop around the sphere , the loop ultimately vanishes as it moves upward ( or downward ) along the sphere until the pole . the betti number @xmath34 of a two - dimensional torus is @xmath14 because of two independent closed loops ( one is red and the other is blue ) . , width=566 ] persistent homology keeps track of how topological features dynamically evolve in a filtered topological space . we do not give a formal description of persistent homology , but , alternatively , we present an illustrative example , which helps capture its essence . readers interested in a more rigorous description of persistent homology may refer to @xcite , @xcite and @xcite , while @xcite and @xcite provide an elegant review of the topics in an accessible way for non - topologists . let @xmath35 be a set of random points on @xmath0 , drawn from an unknown manifold @xmath36 . first , we construct a union of balls @xmath37 which defines a random filtration generated by balls with increasing radii @xmath38 , that is , @xmath39 holds for all @xmath40 . by virtue of the nerve theorem , this filtration conveys the same homological information as a collection of ech complexes @xmath41 . utilizing @xmath42 or @xmath41 , we wish to recover the homology of @xmath43 . we expect that , provided that @xmath4 is suitably chosen , the union of balls @xmath44 is homotopy equivalent to @xmath43 and hence its homology is the same as @xmath43 . in general , however , selecting such an appropriate @xmath4 is not easy at all . to make this more transparent , we consider an example for which @xmath43 represents an annulus ( figure [ f : persistence ] ) . in this case , if @xmath4 is chosen to be too small , @xmath44 is homotopy equivalent to many distinct points , implying that we fail to recover the homology of an annulus . on the other hand , if @xmath4 is extremely large , then @xmath44 becomes contractible ( i.e. , can deform into a single point continuously ) and , once again , @xmath44 does not recover the homology of an annulus . of the balls about these random points . , width=434 ] represented by one - dimensional holes . in figure [ f : persistence ] , there exist two small holes @xmath45 and @xmath46 when @xmath47 . the lifetimes of these holes are so short that they are represented by the points @xmath48 and @xmath49 near the diagonal line . on the other hand , @xmath50 is a robust hole , and thus , the corresponding point @xmath51 is placed far from the diagonal line . ( b ) persistence barcode plot for @xmath52 . the vertical line at level @xmath53 intersects horizontal bars three times , meaning that there are three holes when @xmath47 . although two of these quickly vanish , the remaining one has the largest persistence and generates the longest bar . , width=453 ] persistent homology can extract the robust homological information of @xmath43 by treating a possible range of @xmath4 simultaneously . typically , persistent homology can be visualized by two equivalent graphical descriptors known as the _ persistence diagram _ and _ persistence barcode plot_. 
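The annulus example sketched above is easy to reproduce numerically. The snippet below samples points from an annulus and computes the H0/H1 persistence diagrams with the ripser package; ripser builds a Vietoris-Rips filtration, which is used here as a standard computational stand-in for the Čech/union-of-balls filtration discussed in the text (the two filtrations are interleaved, so the qualitative picture is the same).

import numpy as np
from ripser import ripser   # pip install ripser

rng = np.random.default_rng(0)

# sample points from an annulus (inner radius 1, outer radius 1.5),
# mimicking the annulus example described in the text
n = 300
theta = rng.uniform(0.0, 2.0 * np.pi, n)
radius = rng.uniform(1.0, 1.5, n)
points = np.column_stack([radius * np.cos(theta), radius * np.sin(theta)])

# H0/H1 persistence diagrams from a Vietoris-Rips filtration
dgms = ripser(points, maxdim=1)['dgms']
h1 = dgms[1]                              # (birth, death) pairs for 1-dimensional holes
lifetimes = h1[:, 1] - h1[:, 0]

# the single robust hole of the annulus is the point farthest from the diagonal;
# the remaining short-lived intervals are the "topological noise"
print("number of H1 intervals:", len(h1))
print("most persistent H1 interval:", h1[np.argmax(lifetimes)])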
the persistence diagram consists of a multiset of points in the plane @xmath54 , where each pair @xmath55 describes the birth time and death time of each hole ( or connected component ) . alternatively , if we represent the pair @xmath55 as an interval @xmath56 $ ] , we obtain a set of horizontal bars , called the persistence barcode plot . for the annulus example in figure [ f : persistence ] , as we increase the radius @xmath4 , many small one - dimensional holes appear and quickly disappear ( e.g. , the holes @xmath45 and @xmath46 ) . since the birth time and death time of these non - robust holes are close to each other , they are expressed in the persistence diagram as the points near the diagonal line ( see the points @xmath48 and @xmath49 in figure [ f : diagram - plot ] ( a ) ) . the points near the diagonal line are usually viewed as topological noise . " in contrast , a robust hole for the annulus denoted by @xmath50 in figure [ f : persistence ] has a much longer lifetime than any other small hole , and therefore , it can be represented by the point @xmath51 placed far above the diagonal line . from the viewpoint of the persistence barcode plot in figure [ f : diagram - plot ] ( b ) , the hole @xmath50 generates the longest bar , whereas other small holes generate only much shorter bars . given a set of intervals @xmath56 $ ] , @xmath57 , in the persistence barcode plot for the @xmath22-th homology group @xmath58 ( for short , we call it @xmath22-th persistence barcode plot ) , the quantity we explore in the present paper is the _ lifetime sum _ up to parameter @xmath4 defined by @xmath59 where @xmath60 . a set of horizontal bars represents the birth and death of @xmath22-dimensional holes . in this case , the lifetime sum up to parameter @xmath4 is given by @xmath61 . , width=302 ] utilizing these topological tools developed in tda , we investigate the topological dynamics of _ extreme sample clouds _ lying far away from the origin , which are generated by heavy tail distributions on @xmath0 . the study of the geometric and topological properties of extreme sample clouds in a high - dimensional space belongs to extreme value theory ( evt ) . indeed , over the last decade or so , many studies have provided geometric descriptions of multivariate extremes in view of point process theory , among them @xcite , @xcite , and @xcite . in particular , poisson limits of point processes with a u - statistic structure were discussed in @xcite and @xcite , the latter also including a number of stochastic geometry examples . furthermore , in @xcite a recent extensive study of the general point process convergence of extreme sample clouds , leading to limit theorems for betti numbers of extremes , is reported . the main contribution in @xcite is a probabilistic investigation into a layered structure consisting of a collection of rings " around the origin , with each ring containing extreme random points that exhibit different topological behaviors in terms of the betti numbers . more formally , this ring - like structure is referred to as _ topological crackle _ , which was originally reported in @xcite . we remark also that there has been increasing interest in the limiting behaviors of random simplicial complexes , which are not necessarily related to extremes ; see @xcite , @xcite , @xcite , @xcite , and @xcite . 
these papers derive various limit theorems for the betti numbers of the random ech complexes @xmath62 , with @xmath63 a random point set in @xmath0 and @xmath64 a threshold radius decreasing to @xmath8 . the organization of this paper is as follows . first we provide a formal setup of our extreme sample clouds and express the lifetime sum of extremes as a simple functional of the corresponding betti numbers . we observe that the nature of limit theorems for the lifetime sum of extremes depends crucially on the distance of the region of interest from the origin . the asymptotics of the lifetime sum exhibits completely different topological features according to the region examined . the persistent homology originated in algebraic topology , and thus , there are only a limited number of probabilistic and statistical studies that have treated it . the present paper contains some of the earliest and most comprehensive results obtained by examining persistent homology from a pure probabilistic viewpoint , two other papers being @xcite and @xcite . the interdisciplinary studies between statistics and persistent homology include , for example , @xcite , @xcite , and @xcite . before commencing the main body of the paper , we remark that all the random points in this paper are assumed to be generated by an inhomogeneous poisson point process on @xmath0 with intensity @xmath65 . in our opinion , all the limit theorems derived in this paper can be carried over to a usual iid random sample setup by a standard de - poissonization " argument ; see section 2.5 in @xcite . this is , however , a little more technical and challenging , and therefore , we decided to concentrate on the simpler setup of an inhomogeneous poisson point process . furthermore , we consider only spherically symmetric distributions . although the spherical symmetry assumption is far from being crucial , we adopt it to avoid unnecessary technicalities . let @xmath66 be an iid sequence of @xmath0-valued random variables with common spherically symmetric density @xmath67 of a regularly varying tail . let @xmath68 be the @xmath69-dimensional unit sphere in @xmath0 . assume that for any @xmath70 ( equivalently for some @xmath70 ) and for some @xmath71 , @xmath72 denoting by @xmath73 a family of regularly varying functions ( at infinity ) with exponent @xmath74 , this can be written as @xmath75 . let @xmath76 be a poisson random variable with mean @xmath77 , independent of @xmath78 , and @xmath79 denote an inhomogeneous poisson point process on @xmath0 with intensity @xmath65 . given a sequence @xmath80 growing to infinity and a non - negative number @xmath81 , we denote by @xmath82 a ech complex built over random points in @xmath83 lying outside a growing ball @xmath84 . then , a family of ech complexes @xmath85 constitutes a random filtration " parametrized by @xmath86 . that is , we have for all @xmath40 , @xmath87 choosing a positive integer @xmath21 , which remains fixed hereafter , we denote the @xmath22-th betti number of the ech complex by @xmath88 where the second equality is justified by homotopy equivalence between the ech complex and the union of balls . , @xmath89 . the betti number @xmath90 counts one - dimensional holes outside @xmath91 , while ignoring holes inside the ball ( e.g. , ( a ) , ( b ) , and ( c ) ) . , width=340 ] here , we provide a key relation between the @xmath22-th betti number and the lifetime sum of the @xmath22-th persistent homology associated with the filtration . 
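The random input assumed throughout, a Poisson number of i.i.d. points with a spherically symmetric, regularly varying density, is straightforward to simulate. The sketch below uses a Pareto radial law as one concrete regularly varying choice, so that the spatial density decays like a power of the distance from the origin; the exponent, the sample size and the ball radius used for the "outside" count are illustrative assumptions rather than quantities taken from the paper.

import numpy as np

rng = np.random.default_rng(1)

def heavy_tailed_poisson_cloud(n, alpha, d=2, r_min=1.0):
    """Poisson(n) many i.i.d. points with a spherically symmetric density whose
    spatial decay is a power law ~ |x|^(-alpha) outside r_min (requires alpha > d).
    This is one concrete regularly varying choice; the paper only assumes a
    regularly varying tail."""
    n_pts = rng.poisson(n)
    g = rng.standard_normal((n_pts, d))
    directions = g / np.linalg.norm(g, axis=1, keepdims=True)   # uniform on the sphere
    u = 1.0 - rng.random(n_pts)                                 # uniform in (0, 1]
    radii = r_min * u ** (-1.0 / (alpha - d))                   # Pareto radial law
    return directions * radii[:, None]

n, alpha = 1000, 4.0
cloud = heavy_tailed_poisson_cloud(n, alpha)
# rough order of the weak-core radius for this power-law tail (constants ignored)
r_n = n ** (1.0 / alpha)
outside = np.sum(np.linalg.norm(cloud, axis=1) > r_n)
print(f"{len(cloud)} points in total, {outside} of them outside radius {r_n:.2f}")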
denote by @xmath92 the lifetime sum in the @xmath22-th persistence barcode plot up to parameter @xmath4 , as constructed in . then , it holds that @xmath93 the proof of is elementary . in the persistence barcode plot , the betti number @xmath94 represents the number of times the vertical line at level @xmath95 intersects the horizontal bars ( figure [ f : barcode2 ] ) . therefore , the integration of @xmath94 from @xmath8 to @xmath4 equals the sum of the bar lengths @xmath92 . for more formal proof , one may refer to proposition 2.2 in @xcite . clearly , may be viewed as generating a stochastic process in the parameter @xmath96 with continuous sample paths , and its limiting properties are central to this paper , for which we derive various limit theorems in the sequel . -th persistence barcode plot . the lifetime sum up to parameter @xmath4 is @xmath97 . the vertical line at level @xmath95 intersects the horizontal bars three times , implying that @xmath98 . the integration of @xmath94 from @xmath8 to @xmath4 coincides with @xmath99 . , width=302 ] the behavior of splits into three different regimes , each of which is characterized by the growth rate of @xmath100 : @xmath101 with @xmath102 . since @xmath103 in case @xmath104 grows fastest , the occurrence of @xmath22-dimensional holes outside @xmath91 is the least likely of the three regimes . in contrast , the @xmath100 determined by @xmath105 grows most slowly , which implies that the occurrence of @xmath22-dimensional holes outside @xmath91 is the most likely of the three regimes . in the following , we establish the limit theorems for @xmath92 in all three regimes . before proceeding to specific subsections , we need to introduce one important notion . [ def.weak.core ] let @xmath67 be a spherically symmetric density on @xmath0 . a _ weak core _ is a centered ball @xmath106 such that @xmath107 as @xmath108 . weak cores are balls , centered at the origin with growing radii as @xmath77 increases , in which random points are placed so densely that the balls with fixed ( e.g , unit ) radius about these random points become highly connected with one another and form a giant component of a geometric graph . for example , if @xmath67 has a power - law tail @xmath109 for some @xmath71 and normalizing constant @xmath110 ( @xmath111 denotes a euclidean norm ) , then the radius of a weak core is given by @xmath112 . the properties of a weak core , together with those of the related notion of a _ core _ , were carefully explored in @xcite for a wide class of distributions . see also @xcite and @xcite . note that the @xmath100 determined in @xmath105 coincides with the radius of a weak core ( up to multiplicative factors ) . since there are essentially no holes inside the weak core , the case in which @xmath103 satisfies @xmath113 , @xmath114 is expected to lead to the same asymptotic result as that in regime @xmath105 . therefore , all non - trivial results regarding asymptotics of @xmath92 can be completely covered by regimes @xmath115 . first , we assume that @xmath103 satisfies condition @xmath104 , i.e. , @xmath116 it is then elementary to check that @xmath103 is a regularly varying sequence ( at infinity ) with exponent @xmath117 since this exponent depends on @xmath22 , we write @xmath118 whenever it becomes an asymptotic solution to . then , the resulting ech complex lying outside @xmath91 is so sparse that there appear at most finitely many @xmath22-dimensional holes outside @xmath91 . 
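The identity between the lifetime sum and the integrated Betti number stated above can be checked numerically for any finite barcode: beta_k(s) counts the intervals containing s, and integrating that count from 0 to r reproduces the truncated lifetime sum. A minimal check with a made-up set of (birth, death) intervals:

import numpy as np

def lifetime_sum(intervals, r):
    # L_k(r) = sum_i ( min(d_i, r) - b_i ), intervals born after r contribute nothing
    b, d = intervals[:, 0], intervals[:, 1]
    return float(np.sum(np.clip(np.minimum(d, r) - b, 0.0, None)))

def betti_at(intervals, s):
    # beta_k(s) = number of barcode intervals containing s
    return int(np.sum((intervals[:, 0] <= s) & (s < intervals[:, 1])))

# made-up (birth, death) pairs standing in for a k-th persistence barcode,
# e.g. the dgms[1] array produced by the earlier ripser snippet
intervals = np.array([[0.05, 0.09], [0.10, 0.85], [0.20, 0.30]])

r = 0.5
grid = np.linspace(0.0, r, 4001)
ds = grid[1] - grid[0]
integral = ds * sum(betti_at(intervals, s) for s in grid[:-1])
print(lifetime_sum(intervals, r), integral)   # both ~0.54, agreeing up to the grid step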
hence , the occurrence of @xmath22-dimensional holes outside @xmath91 is seen to be rare , " and , consequently , the limiting process for @xmath92 is expressed as a natural functional of a certain poisson random measure . to define the limiting process more rigorously , we need some preparation . let @xmath119 this indicator function can be expressed as the difference between two other indicators : @xmath120 this decomposition comes from the fact that @xmath121 if and only if @xmath122 forms an _ empty @xmath123-simplex _ with respect to @xmath4 , i.e. , for each @xmath124 , the intersection @xmath125 is non - empty , while @xmath126 is empty . note that @xmath127 and @xmath128 are non - decreasing functions in @xmath4 : @xmath129 for all @xmath130 and @xmath131 . hereafter , we denote @xmath132 and @xmath133 . next , we give a poissonian structure to the limiting process . let @xmath134 where @xmath135 is a surface area of the @xmath69-dimensional unit sphere in @xmath0 . writing @xmath136 for the lebesgue measure on @xmath137 , the _ poisson random measure _ @xmath138 with intensity measure @xmath139 is defined by the finite - dimensional distributions @xmath140 for all measurable @xmath141 with @xmath142 . furthermore , if @xmath143 are disjoint subsets in @xmath137 , then @xmath144 are independent . we now state the main result of this subsection , the proof of which is , however , deferred to the appendix . in the following , @xmath145 denotes weak convergence . all weak convergences hereafter are basically either in the space @xmath146 of right - continuous functions with left limits or in the space @xmath147 of continuous functions . [ t : sparse.poisson ] suppose that @xmath118 satisfies . then , @xmath148 furthermore , @xmath149 recalling the definition of @xmath150 , one may state that the @xmath22-dimensional holes contributing to the limit are always formed by connected components on @xmath151 vertices , while other components on more than @xmath151 vertices never appear in the limit . since there need to be at least @xmath151 vertices to form a single @xmath22-dimensional hole , all the @xmath22-dimensional holes remaining in the limit are necessarily formed by components of the smallest size . because of the decomposition , we can denote @xmath152 as @xmath153 the following proposition shows that @xmath154 and @xmath155 can be represented as a time - changed poisson process . [ p : diff.poisson ] the process @xmath156 is represented in law as @xmath157 where @xmath158 is a poisson process with intensity @xmath159 . it is straightforward to calculate the moment generating function of @xmath160 for @xmath161 . for @xmath162 , we have @xmath163 exploiting this result , one can easily see that @xmath156 has independent increments , while for @xmath131 , @xmath164 has a poisson law with mean @xmath165 . now , the claim follows . by the moment generating function , it is easy to see that for each @xmath96 , @xmath166 has a poisson distribution with mean @xmath167 . nevertheless , the process @xmath168 can not be represented as a ( time - changed ) poisson process , since the sample paths of @xmath168 allow for both upward and downward jumps . in this subsection , we turn to the second regime , which is characterized by @xmath169 for which @xmath103 exhibits a slower divergence rate than that in the previous regime . 
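Before developing the second regime, the time-changed Poisson representation in the proposition above can be illustrated generically: a unit-rate Poisson process evaluated along a deterministic, non-decreasing time change has independent Poisson increments. The particular time change and intensity appearing in the proposition are tokenised here, so an arbitrary increasing function is used purely for illustration.

import numpy as np

rng = np.random.default_rng(2)

def time_changed_poisson(t_grid, psi, rate=1.0):
    """Path of N(rate * psi(t)) on a grid, where N is a unit-rate Poisson process
    and psi is a non-decreasing time change with psi(0) = 0.  The specific psi and
    rate of the proposition are not reproduced; this only shows the generic
    construction via independent Poisson increments."""
    means = rate * np.asarray([psi(t) for t in t_grid])
    increments = rng.poisson(np.diff(means, prepend=0.0))
    return np.cumsum(increments)

t_grid = np.linspace(0.0, 2.0, 9)
print(time_changed_poisson(t_grid, psi=lambda t: t ** 3, rate=5.0))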
thus , we expect that , in an asymptotic sense , there appear infinitely many @xmath22-dimensional holes outside @xmath84 , and accordingly , instead of a poissonian limit theorem , some sort of functional central limit theorem ( fclt ) governs the behavior of @xmath92 . to formulate the limiting process for @xmath92 , we need some preliminary work . as before , let @xmath136 denote the lebesgue measure on @xmath137 and @xmath170 a positive constant given in . denote by @xmath171 a _ gaussian @xmath172-noise _ , such that @xmath173 for measurable sets @xmath141 with @xmath174 , and if @xmath175 , then @xmath176 and @xmath177 are independent . we define a gaussian process @xmath178 by @xmath179 where @xmath150 is given in . this process involves the same indicator function as @xmath168 , which implies that , similarly to the last regime , the @xmath22-dimensional holes affecting @xmath180 must be always formed by connected components on @xmath151 vertices ( i.e. , components of the smallest size ) . we now state the main limit theorem for @xmath92 . the proof is presented in the appendix . [ t : sparse.fclt ] suppose that @xmath103 satisfies . then , @xmath181 [ r : betti.conv.second ] this theorem does not mention anything about a direct result on the fclt for @xmath182 . as can be seen in the proof of the theorem , however , a slight modification of the argument proves the clt for @xmath182 in a finite - dimensional sense . namely , under the assumptions of theorem [ t : sparse.fclt ] , @xmath183 where @xmath184 denotes a finite - dimensional weak convergence . we believe that this holds even in the space @xmath146 of right - continuous functions with left limits , but we are unable to prove the required tightness . in order to further clarify the structure of @xmath180 , we express the process as @xmath185 we claim that @xmath186 and @xmath187 are represented as a time - changed brownian motion . note , however , that , although @xmath180 is a gaussian process , it can not be denoted as a ( time - changed ) brownian motion . the process @xmath188 can be represented in law as @xmath189 where @xmath190 denotes the standard brownian motion , and @xmath191 . it suffices to prove that the covariance functions on both sides coincide . it follows from that for @xmath192 , @xmath193 finally , we turn to the third regime in which @xmath103 is determined by @xmath194 for some @xmath195 . in this case , the formation of @xmath22-dimensional holes drastically varies as compared to the previous regimes . if @xmath103 satisfies , then , by definition , @xmath91 coincides with the weak core ( up to multiplicative factors ) . therefore , many random points become highly connected to one another in the area sufficiently close to the weak core . as a result , connected components on @xmath196 vertices for @xmath197 can all contribute to the limit in the fclt . this phenomenon was never observed in the previous regimes . in order to make the notations for defining the limiting process significantly lighter , we introduce several shorthand notations . first , for @xmath198 , @xmath57 , and @xmath199 , @xmath200 for @xmath201 , @xmath202 , and @xmath96 , we define an indicator @xmath203 by @xmath204 clearly , @xmath205 coincides with the @xmath150 defined in . in particular , we write @xmath206 . + furthermore , for @xmath207 , @xmath208 , and @xmath209 , define an indicator @xmath210 by @xmath211 and , we set , for @xmath207 , @xmath212 , @xmath213 in the special case @xmath214 , we denote @xmath215 . 
now , we define stochastic processes @xmath216 for @xmath201 and @xmath217 , which function as the building blocks for the limiting process in the fclt . first , define , for @xmath207 , @xmath208 , @xmath218 , and @xmath219 , @xmath220 and @xmath221 } \biggr ] d{{\bf y}}d\rho , \notag\end{aligned}\ ] ] where @xmath222 for @xmath223 , and @xmath224 with @xmath225 etc . these functions are used to formulate the covariance functions of @xmath226 s . more specifically , for @xmath201 and @xmath227 , we define @xmath228 as a zero - mean gaussian process with the covariance function given by @xmath229 for every @xmath201 , there exists @xmath230 , which depends on @xmath196 , such that for all @xmath231 and @xmath96 , @xmath232 is identically zero , in which case , allows us to take @xmath228 as a zero process , i.e. , @xmath233 for all @xmath96 . for example , @xmath234 is a zero process for all @xmath235 . in addition , we assume that the processes @xmath236 are dependent on each other in such a way that for @xmath207 , @xmath208 , @xmath237 where @xmath238 is the kronecker delta . we now define a zero - mean gaussian process by @xmath239 which appears in the limiting process in the fclt . it is shown in the proof of theorem [ t : giant.fclt ] below that the right hand side of almost surely converges for each @xmath96 . we can rewrite @xmath240 as @xmath241 since the covariance function of @xmath228 involves the indicator function @xmath242 , we can consider the process @xmath228 as representing the connected components that are on @xmath196 vertices and possess @xmath243 holes . in particular , the process @xmath244 represents the connected components on @xmath151 vertices with a single @xmath22-dimensional hole . this implies that @xmath244 may share the same property as @xmath180 in the last regime in the sense that both processes represent connected components only of the smallest size . in the present regime , however , we can not ignore the effect of larger components emerging near the weak core , and therefore , many other gaussian processes , except for @xmath244 , will contribute to the limit in the fclt . before presenting the main limit theorem , we add a technical assumption that a constant @xmath245 in is less than @xmath246 , where @xmath247 is the volume of a unit ball in @xmath0 . it seems that the fclt below still holds without any upper bound condition for @xmath245 , but this is needed for technical reasons during the proof . similarly , the domain of functions in the space @xmath110 must be restricted to the unit interval @xmath248 $ ] . the proof of the theorem is deferred to the appendix . [ t : giant.fclt ] suppose that @xmath103 satisfies @xmath249 then , @xmath250.\ ] ] as in remark [ r : betti.conv.second ] , we can also obtain finite - dimensional convergence of @xmath182 . that is , under the conditions of theorem [ t : giant.fclt ] , @xmath251 in this appendix , we provide the proofs of theorems [ t : sparse.poisson ] , [ t : sparse.fclt ] , and [ t : giant.fclt ] . we first introduce the results known as the palm theory " in order to compute the expectations related to poisson point processes . indeed , the palm theory applies many times hereafter in the appendix . in section [ s : proof.first.regime ] , we prove theorem [ t : sparse.poisson ] , and , subsequently , in section [ s : proof.third.regime ] we verify theorem [ t : giant.fclt ] . 
we give the proof of theorem [ t : sparse.fclt ] in section [ s : proof.second.regime ] , while exploiting many of the results established in the former section [ s : proof.third.regime ] . before proceeding to specific subsections , we introduce some useful shorthand notations to save space . for @xmath252 , @xmath253 , and @xmath254 , @xmath255 denote also by @xmath256 a generic positive constant , which can vary between lines and is independent of @xmath77 . ( palm theory for poisson point processes , @xcite , see also section 1.7 in @xcite ) [ l : palm ] let @xmath78 be iid @xmath0-valued random variables with common density @xmath67 . let @xmath83 be a poisson point process on @xmath0 with intensity @xmath65 . let @xmath257 and @xmath258 be measurable bounded functions defined for @xmath259 , @xmath260 , and a finite subset @xmath261 of @xmath262-dimensional real vectors . then , @xmath263 where @xmath264 is a set of @xmath265 iid points in @xmath0 with density @xmath67 , independent of @xmath83 . furthermore , @xmath266 where @xmath267 is a set of @xmath265 iid points in @xmath0 and @xmath268 is a set of @xmath269 iid points in @xmath0 , such that @xmath270 is independent of @xmath83 , and @xmath271 , that is , there are no common points between @xmath267 and @xmath268 . moreover , let @xmath272 , @xmath273 be measurable bounded functions defined for @xmath274 . then , for every @xmath275 , @xmath276 where @xmath267 and @xmath268 are sets of @xmath9 iid points in @xmath0 with @xmath277 . since immediately follows from by the continuous mapping theorem , we may prove only . the proof of is divided into two parts . in the first , we show that @xmath278 where @xmath279 , @xmath198 , and , in the second , we prove that the difference between @xmath280 and @xmath182 vanishes in probability in the space @xmath146 . let @xmath285 denote a poisson random measure on @xmath286 with finite mean measure @xmath287 ( @xmath288 " represents the usual dirac measure ) . it is then elementary to verify that @xmath289 writing @xmath290 for the space of point measures on @xmath286 , will be complete , provided that we can show the point process convergence @xmath291 indeed , since the functional @xmath292 defined by @xmath293 is continuous on a set of _ finite _ point measures , implies by the continuous mapping theorem . according to @xcite ( or use theorem 2.1 in @xcite ) , in order to establish , it suffices to prove the following results : as @xmath294 , @xmath295 and @xmath296 for the proof of , it follows from the palm theory in lemma [ l : palm ] that @xmath297 changing the variables @xmath298 , @xmath299 , @xmath300 , together with the location invariance of @xmath301 s , @xmath302 the polar coordinate transform @xmath303 , followed by an additional change of variable @xmath304 , yields @xmath305 where @xmath68 is the @xmath69-dimensional unit sphere in @xmath0 and @xmath306 is the usual jacobian , that is , @xmath307 by the regular variation assumption of @xmath67 , we have that for every @xmath308 , @xmath70 , and @xmath309 , @xmath310 therefore , supposing the dominated convergence theorem is applicable , we can obtain @xmath311 to establish an integrable upper bound , we use the so - called potter s bound ( e.g. , proposition 2.6 @xmath312 in @xcite ) ; for every @xmath313 , we have @xmath314 @xmath315 for sufficiently large @xmath77 . since @xmath316 , the dominated convergence theorem applies as required . 
next , we show the tightness of @xmath318 in the space @xmath146 equipped with the skorohod @xmath319-topology . by theorem 13.4 in @xcite , it suffices to show that for every @xmath320 , there exists @xmath321 such that @xmath322 for all @xmath323 , @xmath324 , and @xmath219 . for typographical ease , define for @xmath324 and @xmath40 , @xmath325 by markov s inequality , we only have to show that @xmath326 for all @xmath323 and @xmath324 . the left hand side above is clearly equal to @xmath327 for @xmath328 , the palm theory yields @xmath329 where @xmath267 and @xmath268 are sets of @xmath330 iid points in @xmath0 sharing @xmath265 common points , that is , @xmath331 . by the same change of variables as in and , together with and potter s bound , we eventually have @xmath332 applying lemma [ l : tightness.lemma ] below , the rightmost term is bounded by @xmath333 , as required . to complete the proof , one needs to show that @xmath337 to this end , we use obvious inequalities @xmath338 where @xmath339 with @xmath340 , @xmath198 . + we have , for every @xmath341 , @xmath342 \biggr\ } \leq \e \bigl\ { \ , \sup_{0 \leq t \leq t } l_{k , n}(t ) \bigr\ } \\ & \leq \frac{n^{k+3}}{(k+3)!}\ , \p \bigl\ { \check{c}(x_1,\dots , x_{k+3 } ; t ) \text { is connected } , \ \| x_i \| \geq r_{k , n } , \ , i = 1,\dots , k+3\ , \bigr\}\end{aligned}\ ] ] the same change of variables as in and , together with potter s bound , concludes that the rightmost term above turns out to be @xmath343 thus , follows . first , we define for @xmath201 , @xmath227 , @xmath96 , and @xmath350 , @xmath351 where @xmath352 is given in , and @xmath353 , @xmath354 , @xmath284 . next , define for @xmath201 , @xmath202 , @xmath86 , @xmath355 , and a finite subset of @xmath262-dimensional real vectors @xmath356 @xmath357 and @xmath358 throughout the proof , we rely on a useful representation for the @xmath22-th betti number adopted in @xcite @xmath359 let ann@xmath360 be an annulus of inner radius @xmath361 and outer radius @xmath362 . for @xmath363 , @xmath284 , define max@xmath364 as the function selecting an element with largest distance from the origin . that is , max@xmath365 if @xmath366 . if multiple @xmath367 s achieve the maximum , we choose an element with the smallest subscript . the following quantity is associated with the @xmath22-th betti number and plays an important role in our proof . for @xmath368 , @xmath369 clearly , @xmath370 . furthermore , we sometimes need a truncated betti number @xmath371 analogously , we can also define @xmath372 by the truncation . [ l : giant.cov ] for every @xmath373 and @xmath374 , we have , as @xmath375 , @xmath376 @xmath377 with @xmath378 @xmath379 } \biggr ] d{{\bf y}}d\rho . \notag\end{aligned}\ ] ] in terms of notations and , we have @xmath380 and @xmath381 . to prove lemma [ l : giant.cov ] , we require the results for lemmas [ l : conv.upper.lemma ] and [ l : geo.lemma ] below , for which we refine the ideas and techniques used in @xcite and @xcite . without any loss of generality , we may prove only the case @xmath382 . by the monotone convergence theorem , together with the palm theory in lemma [ l : palm ] , we have @xmath383 where @xmath264 is a set of iid points in @xmath0 with density @xmath67 , independent of @xmath83 . 
+ it follows from lemma [ l : conv.upper.lemma ] @xmath104 that @xmath384 we need to justify the application of the dominated convergence theorem , for which we apply lemma [ l : conv.upper.lemma ] @xmath312 , stating that there exists a positive integer @xmath385 so that for all @xmath201 , @xmath227 , and @xmath86 , @xmath386 where @xmath387 is a positive constant satisfying @xmath388 . + appealing to lemma [ l : geo.lemma ] @xmath104 , together with stirling s formula @xmath389 for sufficiently large @xmath196 , we have @xmath390 thus , we can apply the dominated convergence theorem . as for @xmath394 , note first that if @xmath395 and @xmath264 share at least one point , @xmath396 therefore , it must be that @xmath397 ( i.e. , no common points exist between @xmath395 and @xmath264 ) whenever @xmath398 . it then follows from the palm theory that @xmath399 where @xmath267 and @xmath268 are sets of iid points in @xmath0 with density @xmath67 , such that @xmath271 , and @xmath270 is independent of @xmath83 . let @xmath400 be an independent copy of @xmath83 , which itself is independent of @xmath401 . then , one more application of the palm theory yields @xmath402 combining this with , @xmath403 by virtue of lemma [ l : conv.upper.lemma ] @xmath105 , while supposing temporarily that the dominated convergence theorem is applicable , the expression on the right hand side converges to @xmath404 and thus , @xmath405 , @xmath294 follows , as required . to establish a summable upper bound , we use lemma [ l : conv.upper.lemma ] @xmath406 and lemma [ l : geo.lemma ] @xmath312 . we have that @xmath407 at the last inequality , we used stirling s formula , i.e. , @xmath389 for sufficiently large @xmath196 . @xmath104 for @xmath201 , @xmath208 , and @xmath218 , @xmath408 @xmath312 there exists a positive integer @xmath385 such that for all @xmath201 , @xmath208 , and @xmath409 , @xmath410 where @xmath411 satisfies @xmath412 . moreover , throughout @xmath105 and @xmath406 below , @xmath267 and @xmath268 denote sets of iid points in @xmath0 with density @xmath67 such that @xmath271 and @xmath270 is independent of @xmath83 . let @xmath400 be an independent copy of @xmath83 , which is independent of @xmath401 . @xmath105 for @xmath207 , @xmath413 , and @xmath409 , @xmath414 @xmath406 there exists a positive integer @xmath385 such that for all @xmath207 , @xmath208 , and @xmath218 , @xmath415 where @xmath387 is the same positive constant as in @xmath312 . conditioning on @xmath264 , we have that @xmath416 let @xmath417 denote the last integral . changing the variables in the same way as in and yields @xmath418 where @xmath68 is the @xmath69-dimensional unit sphere in @xmath0 and @xmath306 is the jacobian . + by the regular variation assumption of @xmath67 , we have that for every @xmath308 , @xmath70 , and @xmath419 , @xmath420 appealing to potter s bound as in and , for every @xmath308 , @xmath70 , and @xmath421 , @xmath422 for an application of the dominated convergence theorem , we employ potter s bound once again . first , we choose @xmath387 , as in the statement of the lemma , so that @xmath412 , and then , fix @xmath423 . then , there exists a positive integer @xmath424 , which is independent of @xmath196 , such that @xmath425 and @xmath426 for all @xmath427 . the integrand in is now bounded above by @xmath428 , and , @xmath429 therefore , the dominated convergence theorem concludes that @xmath430 , @xmath294 , as required . 
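Both of the limits just established rest on the regular-variation assumption on the density: ratios of the form f(t x) / f(t) converge to a power of x, and Potter's bound supplies the integrable envelope that licenses dominated convergence. A two-line numerical illustration with a stand-in power-law tail; the paper's actual density and tail index live in the masked notation, so the alpha below is only an assumed example.

```python
import numpy as np

alpha = 3.5                                 # assumed tail index, for illustration only
f = lambda s: (1.0 + s) ** (-alpha)         # a stand-in regularly varying tail

x = np.array([1.0, 2.0, 5.0, 10.0])
for t in (1e2, 1e4, 1e6):
    print(t, f(t * x) / f(t), x ** (-alpha))   # the ratio approaches x**(-alpha) as t grows
```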
+ _ proof of @xmath312 _ : note first that there exists a positive integer @xmath431 so that @xmath432 because of and , we have , for all @xmath433 , @xmath434 _ proof of @xmath105 _ : first , we write @xmath435 observing that @xmath436 one can rewrite @xmath437 as @xmath438 next , we split @xmath439 into two parts . @xmath440 where @xmath441 by the spacial independence of the poisson point process , the first term on the right hand side of equals zero . rearranging the terms in @xmath442 and @xmath443 , we obtain @xmath444 where @xmath445 conditioning on @xmath401 , we have @xmath446 proceeding as in the proof of @xmath104 , while suitably applying potter s bound , we can obtain , as @xmath294 , @xmath447 similarly , we have @xmath448 } d{{\bf y}}d\rho,\end{aligned}\ ] ] and , therefore , @xmath449 _ proof of @xmath406 _ : note first that @xmath450 changing the variables in the same manner as in @xmath104 , the last expression above equals @xmath451 using the upper bound and @xmath452 and applying , we can complete the proof . since every connected component built on a set of @xmath196 points can contribute to the @xmath22-th betti number at most @xmath456 times , we have that @xmath457 it is well known that there exist @xmath458 spanning trees on a set of @xmath196 points , and thus , @xmath459 now , the claim is proved . + _ proof of @xmath312 _ : @xmath460 if @xmath461 is connected , there exist @xmath458 spanning trees constructed from @xmath462 . similarly , there are @xmath463 spanning trees built on the points @xmath464 whenever @xmath465 is connected . in addition , if @xmath466 is connected , two sets of points @xmath467 and @xmath468 must be at a distance of at most @xmath14 , implying that @xmath469 for some @xmath470 and @xmath471 ( take @xmath472 ) . therefore , @xmath473 subsequently , we establish the fclt for the truncated betti number , for which , as its limit , we need to define a truncated " limiting gaussian process . for @xmath474 , we define @xmath475 it is worthwhile noting that there is no need to restrict the range of @xmath245 as in . further , we do not need to restrict the domain of functions in the space @xmath110 . our proof is closely related to that in theorem 3.9 in @xcite . to prove finite - dimensional weak convergence , we apply the cramr - wold device , for which we need to establish the central limit theorem for @xmath479 for every @xmath480 , @xmath161 , and @xmath481 . + we first decompose this term into two parts in the following manner . for @xmath482 , we write @xmath483 define @xmath484 where @xmath485 is a truncated version of @xmath486 given by @xmath487 moreover , @xmath488 . it then follows from lemma [ l : giant.cov ] that @xmath489 for the required finite - dimensional weak convergence , we need to show that for every @xmath474 , @xmath490 by the standard approximation argument given on p. @xmath491 64 in @xcite , it suffices to show that for every @xmath482 , @xmath492 equivalently , as @xmath294 , @xmath493 let @xmath494 be unit cubes covering @xmath0 . let @xmath495 then , we see that @xmath496 . + subsequently , we partition @xmath497 as follows . @xmath498 we define a relation @xmath499 on a vertex set @xmath500 by @xmath501 if and only if the distance between @xmath502 and @xmath503 is less than @xmath504 . in this case , @xmath505 constitutes a _ dependency graph _ , that is , for any two vertex sets @xmath506 with no edges connecting them , @xmath507 and @xmath508 are independent . 
by virtue of stein s method for normal approximation ( see theorem 2.4 in @xcite ) , the proof will be complete , provided that for @xmath509 , @xmath510 for @xmath511 , we denote by @xmath512 the number of points in @xmath83 lying in @xmath513 clearly , @xmath512 possesses a poisson law with mean @xmath514 . using potter s bound , we see that @xmath512 is stochastically dominated by another poisson random variable with a constant mean @xmath256 . + observe that @xmath515 and , accordingly , we have @xmath516 therefore , for @xmath509 , @xmath517 which completes the proof of the finite - dimensional weak convergence . next , we turn to verifying the tightness of @xmath518 in the space @xmath147 . according to theorem 12.3 in @xcite , we only have to show that , for any @xmath320 , there exists @xmath321 such that @xmath519 for all @xmath520 and @xmath350 . we see that @xmath521 ( @xmath264 and @xmath400 are defined in the statement of lemma [ l : conv.upper.lemma ] ) . + combining lemma [ l : conv.upper.lemma ] @xmath312 , @xmath406 and lemma [ l : geo.lemma ] @xmath104 , @xmath312 , the integrands in the last expression can be bounded above by a positive and finite constant , which does not depend on @xmath95 , @xmath4 , and @xmath77 . we now conclude that @xmath522 and , thus , the tightness follows . by lemma [ l : giant.truncated.clt ] and theorem 3.2 in @xcite , it suffices to verify that for every @xmath523 , @xmath524 and @xmath525 by chebyshev s inequality , immediately follows , provided that @xmath526 by cauchy - schwarz inequality , we only have to show that @xmath527 one can decompose the integrand as follows . @xmath528 ( @xmath264 and @xmath400 are defined in the statement of lemma [ l : conv.upper.lemma ] ) . + combining lemma [ l : conv.upper.lemma ] @xmath312 , @xmath406 and lemma [ l : geo.lemma ] @xmath104 , @xmath312 proves that this is bounded by @xmath529 since @xmath530 , the claim has been proved . since the proof of is almost the same as that of , we omit it . the proof of theorem [ t : sparse.fclt ] somewhat parallels that of theorem [ t : giant.fclt ] , for which we need to recall the notations of several indicator functions and variants of the betti numbers defined at the beginning of section [ s : proof.third.regime ] . as in lemma [ l : giant.cov ] , we begin with computing the asymptotic mean and covariance of the scaled @xmath22-th betti numbers . in the following , let @xmath531 . recall that , in the last subsection , lemmas [ l : conv.upper.lemma ] and [ l : geo.lemma ] play a crucial role in proving lemma [ l : giant.cov ] . in the present subsection , however , one needs to replace lemma [ l : conv.upper.lemma ] with lemma [ l : sparse.conv.upper.lemma ] below in order to show lemma [ l : sparse.cov ] . since the proof of lemma [ l : sparse.conv.upper.lemma ] is analogous to that of lemma [ l : conv.upper.lemma ] , we omit the proof . moreover , @xmath267 and @xmath268 denote sets of iid points in @xmath0 with density @xmath67 such that @xmath271 and @xmath270 is independent of @xmath83 . let @xmath400 be an independent copy of @xmath83 , which is independent of @xmath401 . as in the proof of lemma [ l : giant.cov ] , we may prove only the case @xmath382 . moreover , we compute only the limit of scaled covariance by @xmath539 . 
proceeding as in the proof of lemma [ l : giant.cov ] , one can write @xmath540 by lemma [ l : sparse.conv.upper.lemma ] @xmath115 , it now suffices to show that , as @xmath375 , @xmath541 and @xmath542 it follows from lemma [ l : geo.lemma ] @xmath104 that @xmath543 where the last convergence is obtained by @xmath544 , @xmath294 . + similarly , by lemma [ l : geo.lemma ] @xmath312 , @xmath545 the next lemma claims the fclt for the integral process associated with the truncated @xmath22-th betti number . the proof is almost the same as that of lemma [ l : giant.truncated.clt ] , and therefore , we do not state it here . it is then straightforward to complete the proof of theorem [ t : sparse.fclt ] by combining lemma [ l : sparse.truncated.clt ] and theorem 3.2 in @xcite , as in the last subsection .
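As a rough numerical companion to the quantities driving these proofs, namely counts of connected components of a given size built over points far from the origin, one can sample a heavy-tailed point cloud and tabulate the component sizes of its r-neighborhood graph. The radial Pareto-type density below is an assumed stand-in, and the Čech-complex and annulus details of the paper are deliberately omitted; the sketch only shows the component bookkeeping.

```python
import numpy as np

rng = np.random.default_rng(1)

def heavy_tailed_sample(n, alpha=3.5, d=2):
    """Rotation-invariant sample whose radius has a Pareto-type tail (an assumed stand-in density)."""
    radii = 1.0 + rng.pareto(alpha, size=n)
    dirs = rng.standard_normal((n, d))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    return radii[:, None] * dirs

def component_sizes(points, r):
    """Sizes of the connected components of the r-neighborhood graph, via a small union-find."""
    n = len(points)
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    for i in range(n):
        for j in range(i + 1, n):
            if dist[i, j] <= r:
                parent[find(i)] = find(j)
    labels = np.unique([find(i) for i in range(n)], return_inverse=True)[1]
    return np.bincount(labels)

pts = heavy_tailed_sample(400)
sizes = component_sizes(pts, r=0.3)
print(len(sizes), "components; size distribution:", np.bincount(sizes)[1:])
```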
topological data analysis ( tda ) refers to an approach that uses concepts from algebraic topology to study the shapes " of datasets . the main focus of this paper is persistent homology , a ubiquitous tool in tda . basing our study on this , we investigate the topological dynamics of extreme sample clouds generated by a heavy tail distribution on @xmath0 . in particular , we establish various limit theorems for the sum of bar lengths in the persistence barcode plot , a graphical descriptor of persistent homology . it then turns out that the growth rate of the sum of the bar lengths and the properties of the limiting processes all depend on the distance of the region of interest in @xmath0 from the weak core , that is , the area in which random points are placed sufficiently densely to connect with one another . if the region of interest becomes sufficiently close to the weak core , the limiting process involves a new class of gaussian processes .
the recent discovery of discrete , multiple stellar populations within several among the most massive globular clusters ( gc ) in the milky way ( e.g. , bedin et al . 2004 : piotto et al . 2007 ) has brought new interest and excitement on gc research . further excitement was added by the realization that some of these stellar populations are selectively enriched in helium to very high values ( @xmath2 ) , without such enrichment being accompanied by a corresponding increase in the heavy element abundance of the expected size , if at all ( e.g. norris 2004 ; piotto et al . 2005 , 2007 ) . thus , two main closely interlaced questions arise : how did such multiple populations form ? and , which kind of stars have selectively produced the fresh helium , without contributing much heavy elements ? therefore , understanding the origin of multiple populations in gcs will need to make major steps towards a better knowledge of a variety of processes such as gc formation , star formation , as well as of some long standing issues in stellar evolution , e.g. , the asymptotic giant branch ( agb ) phase or the effect of rotation in massive stars . moreover , it is quite possible that some of the massive gcs with multiple populations are the compact remnants of nucleated dwarf galaxies ( bekki & norris 2006 ) , an aspect that widens even further the interest for this subject . all this together makes multiple populations in gcs an attractive , interdisciplinary subject of astrophysical investigation . in this paper the main observational evidences are used to constrain the proposed possible scenarios for the origin of the multiple populations in gcs , and their associated chemical enrichment processes . in section 2 the main relevant observational facts are briefly reviewed , and in section 3 agb stars and fast rotating massive stars are discussed as possible helium producers , along with population iii stars and the possible role of deep mixing during the red giant branch ( rgb ) phase of low mass stars . in section 4 various proposed scenarios for the origin of the multiple populations are confronted with the relevant observational facts , arguing that only massive agb stars appear to remain viable as helium producers . a general discussion and the main conclusions are presented in section 5 . the main observational facts concerning the evidence for multiple stellar populations in galactic gcs are briefly presented in this section , separately for photometric and spectroscopic observations . further collective information and references can be found in a recent review on these topics by piotto ( 2008 ) . it was recognized in the early seventies that stars in the gc @xmath0 cen span a wide range of metallicities ( cannon & stobie 1973 ; freeman & rodgers 1975 ) and since then it has been considered as a unique , exceptional cluster . over the last several years there has been a surge of interest on this cluster , starting with the discovery that its _ broad _ rgb actually resolves into several distinct rgbs ( lee et al . 1999 ; pancino et al . then came the discovery that also its main sequence ( ms ) splits into two parallel sequences , with the bluer one being more metal rich by nearly a factor of two , hence indicating a higher helium abundance ( bedin et al . 2004 ; norris 2004 ; piotto et al . from the color - magnitude diagram ( cmd ) of the subgiant branch ( sgb ) region we now know that this cluster includes at least 4 , possibly 5 distinct stellar populations ( sollima et al 2005 ; lee et al . 
2005 ; villanova et al . 2007 ) . the majority of the cluster stars ( @xmath3 ) populate the _ red _ ms , have [ fe / h ] = 1.7 and are assumed to have primordial helium abundance , @xmath4 ; @xmath5 belong to the _ blue _ , helium enriched ms with [ fe / h ] = 1.4 and @xmath6 , which implies a huge helium enrichment ratio @xmath7 ( piotto et al . 2005 ) . the residual @xmath8 of the stars belong to a metal rich component for which discrepant spectroscopic estimates exist , ranging from [ fe / h ] = 1.1 @xmath9 ( villanova et al . 2007 ) to [ fe / h ] = 0.6 ( pancino et al . 2002 ) . part of the discrepancy may be due to the small number of stars in this group that have been observed at high resolution . for these stars we have no direct hint on the helium abundance , but values as high as @xmath10 have been proposed ( sollima et al . 2005 ; lee et al . 2005 ) . how these three ms components map into the many sgb , rgb and horizontal branch ( hb ) components of this cluster remains partly conjectural . however , on the rgb there is well established evidence for sodium and aluminium being anticorrelated with oxygen and magnesium ( norris & da costa 1995 ; smith et al . 2000 ) , indicative that in a fraction of the stars material is present that was processed through hydrogen burning at high temperatures . note that the @xmath11 overall fraction of the blue ms population takes into account the observed radial gradient in this fraction ( bellini et al . in preparation ) , the helium rich population being more centrally concentrated . with a mass of @xmath12 , @xmath0 cen is the most massive gc in the galaxy ( pryor & meylan 1993 ) . on this basis , it is worth estimating the amount of fresh helium and iron that was produced and is now incorporated in the minority populations . given its mass ( @xmath13 ) and helium abundance , the intermediate metallicity , helium rich population includes @xmath14 of fresh helium ( i.e. , helium of stellar origin ) and @xmath15 of fresh iron , having adopted @xmath16 . similarly , the most metal rich sub - population includes @xmath17 of fresh iron but we can not presently estimate its helium enrichment , if any . based on the multimodal hb morphology of this cluster , dantona & caloi ( 2004 ) speculated that ngc 2808 harbours three main populations , each with a distinct helium abundance . this has been nicely confirmed by the discovery of a triple ms in this cluster ( piotto et al . 2007 ) , with @xmath18 of the stars being assigned the primordial helium abundance ( @xmath4 ) , @xmath19 having helium enhanced to @xmath20 , and @xmath21 being up to @xmath22 . the residual @xmath23 of the stars are likely to be binaries . given the narrow rgb sequence , no iron abundance differences appear to be associated with these helium differences . however , there appears to be a multimodal distribution of [ o / fe ] ratios among the rgb counterparts of the three ms populations ( carretta et al . 2006 ) , suggesting that most oxygen has been turned to nitrogen in the helium - enriched populations.thus , this cluster represents an even more extreme case as far as the helium enrichment parameter @xmath24 is concerned , as there is no detectable increase of the overall metallicity in the helium enriched populations . the mass of this cluster is @xmath25 ( pryor & meylan 1993 ) , and therefore the intermediate helium - rich , and the very helium - rich populations contain @xmath26 and @xmath27 of fresh helium , respectively , with no appreciable extra iron . 
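The fresh-helium masses quoted in this section follow from simple bookkeeping: the mass locked in a helium-rich sub-population times its helium excess over the primordial value. The specific masses and abundances are masked in this text, so the figures below are assumed placeholders chosen only to show the arithmetic.

```python
# All numbers are assumed, illustrative placeholders; the actual values are masked above.
M_cluster    = 2.5e6    # assumed total cluster mass [solar masses]
f_he_rich    = 0.30     # assumed mass fraction in the helium-rich sub-population
Y_primordial = 0.25     # assumed primordial helium mass fraction
Y_he_rich    = 0.38     # assumed helium mass fraction of the helium-rich sub-population

M_pop = f_he_rich * M_cluster
M_fresh_he = M_pop * (Y_he_rich - Y_primordial)   # helium of stellar origin in that sub-population
print(f"sub-population mass: {M_pop:.2e} Msun, fresh helium: {M_fresh_he:.2e} Msun")
```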
no multiple main sequences have yet been detected in the cmd of this cluster , which clearly shows a double sgb , with the two components being nearly equally populated ( @xmath28 and @xmath29 ) , and separated by @xmath30 mag ( milone et al . ) . if interpreted in terms of age , this luminosity difference would imply an age difference of @xmath31 gyr . alternatively , it has been proposed that the double sgb may be due to one of the two populations being enhanced in cno elements by a factor of @xmath32 relative to the other ( cassisi et al . 2008 ) . the cmd of this cluster and of the superimposed core of the tidally disrupted sagittarius dwarf is extremely complex , with evidence of multiple ms turnoffs ( siegel et al . 2007 ) . however , there is no evidence of multiple populations within the cluster itself . much observational and theoretical effort has been dedicated to these two metal - rich bulge clusters , since the discovery that their hbs exhibit a remarkable blue extension ( rich et al . ) . unique among all gcs , they contain a number of rr lyraes with an average period as long as 0.75 days , making them a third oosterhoff type ( pritzl et al . ) . the similarity of their blue hbs with those of @xmath0 cen and ngc 2808 has been emphasized by busso et al . ( 2007 ) , who further explore helium enrichment as the cause of their unusual hb ( as originally suggested by sweigart & catelan 1999 ) . based on cmds that include hst ultraviolet bands ( f225w and f336w ) , busso et al . advocate very high helium enhancements ( up to @xmath33 in ngc 6388 and @xmath34 in ngc 6441 ) for @xmath19 of the stars in each cluster . a more moderate helium enrichment ( @xmath35 ) is suggested by yoon et al . ( 2007 ) , but their hb data do not include the very hot extension revealed by the uv observations . both clusters have a mass of @xmath25 ( pryor & meylan 1993 ) , implying that the fresh helium content of the two clusters is @xmath36 and @xmath37 , respectively , for ngc 6388 and ngc 6441 . given their low galactic latitude , these gcs are affected by fairly high reddening ( hence differential reddening ) , which has so far prevented checking for the presence of multiple mss and sgbs , as would be expected if the interpretation of their hb morphology in terms of helium enhancement is correct . however , this situation may improve soon , and there is already an indication of a split in the sgb of ngc 6388 ( piotto 2008 ) . departures from chemical homogeneity among stars within individual gcs have been known for a long time ( see e.g. , the reviews by kraft 1979 , 1999 and gratton , sneden & carretta 2004 ) . generally referred to as _ abundance anomalies _ , such departures include a variety of elements and molecules , such as cn and ch bimodality and the na - o , al - o , and al - mg anticorrelations , all indicative of contamination by materials having been exposed to hydrogen burning at high temperatures ( @xmath38 k ) . some or all such anomalies are also exhibited by the massive clusters discussed in the previous section , but to some extent they affect virtually all well studied gcs , irrespective of their mass . for example , from the strength of the nh bands among rgb stars in ngc 6752 , yong et al . ( 2008 ) infer that the nitrogen abundance among the sample stars spans almost two orders of magnitude , with no obvious bimodality . moreover , yong et al .
argue that a similar spread must exist in many other clusters , given that their whole rgb shows a spread in the nh - sensitive strömgren @xmath39 index of similar size to that of ngc 6752 . unfortunately , no similar @xmath39 data are presented for the clusters with multiple stellar populations discussed in the previous section . among them , ngc 2808 exhibits the canonical na - o anticorrelation , with three distinct peaks in oxygen abundance ( carretta et al . 2006 ) , that are likely to be associated with the three distinct populations revealed by the main sequence photometry ( piotto et al . 2007 ) . among the clusters mentioned in section 2.1 , ngc 6441 ( gratton et al . 2007 ) and ngc 6388 ( carretta et al . 2007 ) exhibit the na - o anticorrelation , and so does ngc 1851 , which also exhibits the al - o anticorrelation and variations of the s - process elements ( yong & grundahl 2008 ) . in @xmath0 cen star - to - star variations of cno elements exist as well , but their overall [ c+n+o / fe ] abundance ratio appears to be constant within a factor @xmath32 , as typical of all gcs ( e.g. , pilachowski et al . 1988 ; norris & da costa 1995 ; smith et al . 2000 , 2005 ) . in particular , this holds for each of the various metallicity groups in @xmath0 cen , which also exhibit large dispersions in the al abundance ( johnson et al . 2008 ) . the main observational constraints that will be used in the following to narrow down the options on the origin of the multiple stellar populations in gcs can be summarized as follows :
• all gcs with confirmed ( or highly probable ) multiple stellar populations ( @xmath0 cen , ngc 2808 , ngc 1851 , ngc 6388 , and ngc 6441 ) belong to the sample of the 10 most massive gcs in the galaxy ( with @xmath40 ) .
• massive gcs with @xmath40 exist that do not show evidence for multiple mss and/or sgbs , nor multimodal hbs ( e.g. , 47 tuc , sirianni et al . 2005 ) .
• some sub - populations can only be understood in terms of high helium content , up to @xmath41 or more .
• multiple stellar populations within each gc are characterized by discrete values of the helium and iron abundances , i.e. , there appears to be no composition spread within individual sub - populations , as the width of the sequences on the cmds is consistent with being due only to known photometric errors .
• clusters with helium - enriched multiple populations also tend to exhibit evidence for na and al being anticorrelated with o and mg , indicative of materials that have been exposed to hydrogen burning in a hot environment . however , such variations appear to be virtually universal among gcs , no matter whether they exhibit multiple main sequences or not .
• helium enrichment does not appear to be associated with an increase of [ c+n+o / fe ] .
• a massive cluster ( m54 ) sits at the center of the core of the sagittarius dsph galaxy , and is embedded in its multiple stellar populations . this offers a concrete example of a massive gc being the nucleus of a dwarf galaxy .
critical for understanding the origin of the multiple populations in gcs is the identification of the kind of stars responsible for the production of the excess helium now incorporated in some of these populations . three kinds of stars have been discussed in the literature , namely : agb and super - agb stars , massive rotating stars , and population iii stars .
agb stars have long been considered for being responsible for at least some of the composition anomalies of gc stars ( dantona , gratton & chieffi 1983 ; renzini 1983 ; iben & renzini 1984 ) . indeed , agb stars present two attractive characteristics , namely : 1 ) they eject large amounts of mass at low velocity ( @xmath42 km s@xmath43 ) which can then be retained within the potential well of gcs , and 2 ) the ejected materials can be highly processed through hydrogen burning at high temperature , hence being enriched in he and n , and presenting the na - o and al - o anticorrelations ( e.g. , renzini & voli 1981 ; dantona & ventura 2007 ; karakas & lattanzio 2007 ) . however , three difficulties with the agb scenario have been pointed out : the helium abundance , the mass of the secondary populations relative to the primary one , and the constancy of [ c+n+o / fe ] ( e.g. , karakas et al . 2006 ; karakas & lattanzio 2007 ; choi & yi 2008 ) . these difficulties are here addressed again . among agb stars , especially interesting are those in the mass range @xmath44 , because they experience the hot bottom burning ( hbb ) process that even in the presence of carbon third dredge - up ( 3du ) prevents the formation of carbon stars ( renzini & voli 1981 ) , and will produce the na - o and al - o anticorrelations . instead , in lower mass agb stars the c / o ratio largely exceeds unity , and especially so at low metallicities . if stars were to form from these agb ejecta they would be carbon stars , whereas such stars are absent in the clusters with multiple populations . moreover , @xmath44 stars experience the so - called second dredge - up ( 2du ) shortly before reaching the agb ( becker & iben 1979 ) , leading to a sizable helium enrichment in the whole stellar envelope . besides the 2du , also the the 3du and the hbb may contribute to increase the helium abundance in the envelope of massive agb stars , see e.g. , fig . 11 in renzini & voli ( 1981 ) . however , renzini & voli models assumed the validity of a universal core mass - luminosity relation for agb stars , whereas blcker & schnberner ( 1991 ) showed that this is no longer valid once the hbb process operates . instead , in the presence of hbb the luminosity of agb stars increases dramatically , driving stars to very high ( superwind ) mass loss rates and leading to an earlier termination of the agb phase . although this scenario is generally accepted , the precise duration of the agb phase , hence the extent to which the 3du and the hbb process operate in massive agb stars , all remain highly model dependent . thus , a reasonable _ lower limit _ to the amount of helium enrichment is given by the 2du contribution alone , which is fairly well established . this assumption is consistent with the blcker & schnberner ( 1991 ) result , which suggests a very prompt ejection of most of the envelope shortly after the onset of the hbb process . in practice , there would be time for the hot cno processing of these elements originally present in the star , converting most of c and o into n ( renzini & voli 1981 ) , as well as for establishing the anticorrelations of na and al with o and mg . these are indeed fairly rapid nuclear processes once the temperature at the base of the convective envelope is high enough . moreover , the prompt ejection of the envelope drastically reduces the time spent on the thermally pulsing agb ( e.g. 
, over the early estimates of renzini & voli ) , suppressing along with it most of the 3du events , hence preventing an appreciable increase of the overall cno abundance in the envelope . thus , it is quite plausible for agb stars with hbb to eject material in which helium is highly enriched , cno nuclei are globally not significantly enhanced , but have approached their nuclear equilibrium partition , and anticorrelations among other nuclei have been established by proton captures at high temperatures . thus , in this scenario there is no significant increase of c+n+o due to the 3du . in summary , it is assumed here that agb stars experiencing the hbb process ( @xmath45 ) 1 ) eject the whole envelope shortly after the onset of hbb , 2 ) the ejecta are enriched in helium solely by the 2du , 3 ) too few 3du events have time to take place , and no appreciable increase of the overall cno abundance occurs in the envelope , and 4 ) hbb is sufficiently effective to promptly establish the na - o and al - o anticorrelations . some observational evidences support these assumptions . the mass distribution of white dwarfs terminates at @xmath46 ( bergeron , saffer & liebert 1992 ; bragaglia , renzini & bergeron 1995 ; koester et al . 2001 ) , indicating that the core mass of agb stars does not grow beyond this limit . this also implies that there must be very little , if any , increase of the core mass during the evolution of the most massive agb stars , since the core mass of a @xmath47 star just after completion of the 2du is already @xmath48 ( becker & iben 1979 ) . in addition , an agb calibration based on globular clusters in the magellanic clouds indicates that in clusters younger than @xmath49 myr there is negligible contribution of agb stars to the bolometric luminosity of the clusters ( maraston 2005 ) . this implies a very short agb phase and a very small , if any , increase of the core mass during the agb phase of stars more massive than @xmath50 . assuming that the first stellar generation formed in a short , virtually instantaneous burst , @xmath51 to @xmath52 stars were shedding their envelope between @xmath53 and @xmath49 myr after the burst . it is during this time interval that helium - enriched agb ejecta may have accumulated inside the cluster potential well , and new stars may have formed out of them . the amount of fresh helium released by stars of initial mass @xmath54 assuming only the 2du contribution can be easily estimated from becker & iben ( 1979 ) , see also fig . 1 in renzini & voli ( 1981 ) , where the mass of dredged - up helium is @xmath55 for @xmath56 and increases linearly to @xmath57 for @xmath58 . note that only 3/4 of the dredged - up helium is `` fresh '' , i.e. , has been synthesized within the star itself , whereas 1/4 is primordial , given that @xmath4 in the first stellar generation . therefore : @xmath59 which applies to a metal - poor population with @xmath60 . here only stars with initial mass in the interval @xmath61 experience the 2du before the agb phase , then soon activate the hbb process and expel the envelope according to the scenario sketched above . thus , convolving the mass of fresh helium with the initial mass function ( imf ) one can derive the total mass of fresh helium produced and expelled by these 3 to 8 @xmath62 stars per unit stellar mass in the whole population . for a salpeter imf ( slope @xmath63 ) for @xmath64 , and a flatter imf for the lower mass stars with @xmath65 for @xmath66 , one then derives : @xmath67 i.e. 
, the mass of fresh helium released is @xmath68 of the original mass of the parent stellar population . in a similar fashion , one can estimate the helium abundance @xmath69 in the ejecta of the 3 to 8 @xmath62 stars , which ranges from the primordial value @xmath4 for @xmath56 to @xmath70 for @xmath71 . integrating over the same imf , one gets that the average helium abundance of the ejecta from 3 to 8 @xmath62 stars is @xmath72 . higher values could be obtained by restricting the integration over a narrower mass range , e.g. , 4 to 8 @xmath62 or 5 to 8 @xmath62 . for the minimum mass approaching 8 @xmath62 then @xmath73 tends to 0.35 , but the total amount of released fresh helium would vanish . stars in the range @xmath74 ignite carbon non - explosively and may leave o - ne white dwarfs , or proceed to electron - capture / core - collapse supernovae . the former outcome results if the envelope is lost in a ( super)wind during helium - shell burning ( the super - agb phase ) . conversely , a supernova explosion ends the life of the star if mass loss during the super - agb phase is insufficient to prevent the core from growing in mass until it finally collapses ( e.g. , nomoto 1984 ; ritossa , garcia - berro & iben 1996,1999 ; poelarends et al . s - agb stars have also experienced the 2du and their envelope has been enriched in helium to @xmath6 , which makes them attractive helium contributors in the context of helium - rich populations in globular clusters ( pumo , dantona & ventura 2008 ) . assuming that all 8 to 10 @xmath62 s - agb stars leave o - ne white dwarfs , one can estimate that the fresh helium mass produced by 3 to 10 @xmath62 stars would be @xmath75 higher than given by eq . ( 2 ) , i.e. , @xmath76 the accuracy of this theoretical estimate is difficult to assess . the helium mass could actually be somewhat higher if the duration of the agb phase is long enough to allow the 3du and hbb processes to increase the envelope helium beyond the value reached after the 2du , but it could be lower if the mass range of the useful s - agb stars is narrower than adopted here . ( note that in both cases the main uncertainty comes from what one is willing to assume for the mass loss during the agb / s - agb phases . ) still , one can regard this estimate as quite reasonable , given our current understanding of agb / s - agb evolution . as far as the average helium abundance is concerned , from the inclusion of s - agb stars one predicts a modest increase from @xmath72 to @xmath77 , or a little higher if considering a minimum contributing mass somewhat in excess of @xmath78 . in any event , @xmath79 can be regarded as the upper limit given the assumed agb / s - agb evolution . finally , it is worth noting that stars below @xmath1 do not experience the 2du and the hbb processes , spend a long time on the agb in its thermally - pulsing phase , and experience repeated 3du episodes which increase both helium and carbon in their envelopes . the fact that helium is accompanied by carbon enhancement makes these lower mass agb stars less attractive helium producers , because the helium rich populations in gcs do not appear to be enriched in carbon ( piotto et al . thus , the interesting mass range is @xmath51 to @xmath80 . 
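The yield estimate sketched in the last two paragraphs is a convolution of a per-star fresh-helium yield (assumed linear in initial mass across the 2du range) with the imf, normalized by the total mass of the parent population; the mean helium abundance of the ejecta then follows by dividing the fresh helium by the total ejected envelope mass. The sketch below reproduces the procedure only: the yield endpoints, imf slopes, mass limits, and remnant mass are assumed placeholders, since the actual figures are masked in this text.

```python
import numpy as np
from scipy import integrate

# Assumed placeholders (the text's actual numbers are masked): two-segment power-law IMF,
# linear dredged-up helium between 3 and 8 Msun of which 3/4 is "fresh", 0.6 Msun remnants.
SLOPE_HI, SLOPE_LO, M_BREAK = 2.35, 1.3, 0.5
M_MIN, M_MAX = 0.1, 120.0
HE_LO, HE_HI = 0.2, 1.0          # dredged-up helium (Msun) at 3 and 8 Msun
Y_P, M_REMNANT = 0.25, 0.6

def imf(m):
    """Unnormalized IMF, continuous at the break mass."""
    if m < M_BREAK:
        return M_BREAK ** (SLOPE_HI - SLOPE_LO) * m ** (-SLOPE_LO)
    return m ** (-SLOPE_HI)

def fresh_he(m):
    """Fresh helium (Msun) ejected by a star of initial mass m in the 3-8 Msun window."""
    dredged = HE_LO + (HE_HI - HE_LO) * (m - 3.0) / (8.0 - 3.0)
    return 0.75 * dredged        # 3/4 of the dredged-up helium is newly synthesized

total_mass = integrate.quad(lambda m: m * imf(m), M_MIN, M_MAX, points=[M_BREAK])[0]
he_mass    = integrate.quad(lambda m: fresh_he(m) * imf(m), 3.0, 8.0)[0]
ejecta     = integrate.quad(lambda m: (m - M_REMNANT) * imf(m), 3.0, 8.0)[0]

print("fresh helium per unit mass of the parent population:", he_mass / total_mass)
print("mean Y of the 3-8 Msun ejecta:", Y_P + he_mass / ejecta)
```

With these placeholder choices the two outputs come out near one per cent and Y near 0.3, of the same order as the figures discussed in the text, but they should be read as an illustration of the bookkeeping rather than a reproduction of it.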
in summary , it is assumed that metal poor ( @xmath81 ) agb stars more massive than @xmath1 and s - agb stars experience the 2du and the hbb process , and live long enough on the agb for the hbb process to convert most of the original carbon and oxygen into nitrogen , but not long enough to experience a sufficient number of 3du episodes to significantly increase the overall cno abundance in the envelope . in this respect , the schematic agb evolution adopted here differs from agb models existing in the literature ( e.g. , renzini & voli 1981 ; groenewegen & de jong 1993 ; herwig 2004 ; izzard et al . 2004 ; ventura & dantona 2005 , 2008 ; marigo & girardi 2007 ) . massive stars also produce sizable amounts of fresh helium , especially during their wolf - rayet phase . however , they also produce metals in large quantities , whereas the helium rich populations in gcs are very modestly enriched in metals ( @xmath0 cen ) , or not at all ( ngc 2808 ) . in an attempt to overcome this difficulty , maeder & meynet ( 2006 ) have proposed fast - rotating massive stars as potential helium producers in young gcs , a scenario further developed by decressin et al . ( 2007 ) , decressin , charbonnel & meynet ( 2007 ) and meynet , decressin & charbonnel ( 2008 ) . massive rotating stars would harbor meridional circulations bringing to the surface products of hydrogen burning ( i.e. , helium ) , while losing ( helium enriched ) mass in three distinct and physically separated modes : 1 ) a slow outflowing equatorial disk whose helium abundance increases as evolution proceeds , 2 ) a regular , fast , radiatively driven wind , also enriched in helium , in the directions unimpeded by the disk , and 3 ) a final core - collapse supernova explosion . only the slow outflowing disk is considered of interest for the production of helium - enriched stars , because both radiative winds and supernova ejecta run at thousands of km / s , and would not be retained inside the relatively shallow potential well of the proto - cluster . while physically plausible , this scenario suffers from the difficulty of predicting with any degree of confidence the efficiency with which meridional circulations mix helium into the stellar envelope , and the rate of mass loss via the outflowing disk , neither of which can be estimated from first principles . moreover , star formation would have to be confined within the outflowing disks around individual stars , before such disks are destroyed by the fast winds and supernova ejecta from nearby cluster stars , and mixed with them ( meynet et al . 2008 ) . in an early attempt to account for some gc abundance anomalies , sweigart & mengel ( 1979 ) proposed that in low mass upper rgb stars ( @xmath82 ) mixing could extend below the formal boundary of the convective envelope , and reach well into the hydrogen burning shell . thus , materials processed by hydrogen - burning reactions within the shell could be brought to the surface , changing the c : n : o proportions , and possibly establishing some of the abundance ( anti)correlations typical of gc stars . moreover , along with processed cno elements , some helium enrichment could also take place , hence affecting the subsequent hb evolution ( sweigart 1997 ; sweigart & catelan 1998 ; moehler & sweigart 2006 ) .
observations of upper rgb stars in several gcs suggests indeed that the sweigart - mengel process may be at work in these stars , given that the @xmath83c/@xmath84c ratio has reached close to its nuclear equilibrium value ( @xmath85 ) in virtually all of them ( recio - blanco & de laverny 2007 ) . however , several abundance anomalies extend to much lower luminosities and down to the main sequence ( e.g. gratton et al . 2004 ) , which means that rgb self - enrichment can not be the only process at work . in particular , it can not account for the helium - enriched ms stars , unless one is willing to consider the possibility of forming the subsequent stellar generation out of the ejecta from low - mass rgb stars . there are several difficulties with this option , such as the small mass lost by stars that may experience the sweigart - mengel process ( i.e. , those in the range @xmath86 to @xmath87 ) , relative to agb / s - agb stars , or the very long ( several gyr ) time required to accumulate any sizable amount mass before being suddenly turned into stars . this very long accumulation time would imply the secondary populations to be several gyr younger than the first generation . according to villanova et al . ( 2007 ) there is actually a hint for the helium rich population in @xmath0 cen being a few gyr younger , but other interpretations of the data exist that do not require such large age differences ( sollima et al . 2005 ; lee et al . moreover , a several gyr long accumulation time looks quite unlikely , given the possible interaction of the cluster with the galactic environment ( e.g. disk crossing , tidal stripping , etc . ) . perhaps more fundamentally , metal poor @xmath88 stars become carbon stars on the agb , the phase during which most of their mass loss takes place , and therefore they are not suited to provide raw material with the proper chemical composition for the production of secondary populations . for all these reasons rgb stars are considered less likely helium producer candidates . still , rgb self enrichment may add further variance on top of other processes . finally , in order to achieve very high @xmath24 values stochastic contamination by helium produced by population iii stars was suggested by choi & yi ( 2007 ) in a scenario in which population iii and population ii star formation temporally overlap . here helium would be produced by very massive pop . iii stars , and metals by massive pop . ii stars , and mixing their products in various proportions one could achieve high values of @xmath89 . although this process may take place in nature , it does not appear to work for explaining the properties of multiple stellar populations in globular clusters . for example , in the case of ngc 2808 we have three distinct values of the helium abundance , with the same metallicity , a pattern that can not be reproduced by mixing gas that formed the dominant cluster population with helium - rich gas from pop . iii stars . enrichment in helium would indeed be accompanied by dilution of metals , and the helium rich population would be metal poor , contrary to the case of both @xmath0 cen and ngc 2808 . a variant of this scenario was proposed by chuzhoy ( 2006 ) , with population iii stars being pre - enriched in helium thanks to gravitational settlement of this element within dark matter halos in the early universe . 
while this mechanism may produce even higher helium abundances in the ejecta of massive population iii stars , it is prone to the same difficulties of the choi & yi scenario if intended to account for the helium rich populations in gcs . a most stringent constraint on scenarios for the origin of multiple populations is their very nature , i.e. , being indeed , multiple , discrete populations each characterized by a specific helium abundance , as most clearly evident in the case of @xmath0 cen and ngc 2808 . this immediately implies that helium enrichment of the interstellar medium ( ism ) and star formation for the second ( and third ) stellar generation were not concomitant processes , but instead took place sequentially . helium - rich material was accumulated in the ism ( and mixed therein ) for a sufficiently long time until suddenly a burst turned a major fraction of the ism into stars . in fact , a continuous star formation process proceeding along with the ism helium enrichment would have resulted in a continuous distribution of helium abundances in the newly formed stars , hence in a broadening of the main sequence rather than in well separated sequences as actually observed . in this section one tests various proposed scenarios against this important constraint . any gas matter lost by the first stellar generation ( e.g. , by its agb stars ) was necessarily less massive than the parent cluster , and hence had a lower mass density compared to the stellar component if spatially distributed in roughly the same way . a simple calculation shows that under such circumstances any stellar mass size portion of the ism already included several lower main sequence stars . at first sight accretion onto these pre - existing seeds would seem more likely than starting new stars from scratch . hence , already a long time ago it was favored as a likely mean to produce chemical anomalies in gcs ( e.g. , dantona , gratton & chieffi 1983 ; renzini 1983 ; iben & renzini 1984 ) , and has been considered even recently in the context of the helium rich populations ( tsujimoto , shigeyama & suda 2007 ; newsham & terndrup 2007 ) . however , accretion depends on mass , velocity and orbit of each star inside the cluster , which would have resulted in a broad range of accreted masses from the ism whose helium abundance was secularly evolving . after rayleigh - taylor mixing of accreted matter with the underlying layers of lower molecular weight , stars would now show a range of helium abundances . thus , accretion would inevitably result in a broad , continuous distribution of stellar helium abundances , contrary to the required multiple discrete values . in spite of its attractiveness , accretion must therefore be rejected as the primary process having produced the helium - enriched populations , at least in the two clusters with clearly defined multiple main sequences . this leaves multiple star formation episodes as the only mechanism able to produce successive stellar generations , each with a uniform helium ( and metal ) abundance . this means that helium - enriched gas had to accumulate in the potential well of the cluster , without experiencing any star formation , until something suddenly triggered the burst of star formation . yet , it is unlikely that the efficiency of converting ism gas into stars was near unity , and the lower this efficiency , the more massive had to be the first stellar generation in order to account for the mass of the second generation and its helium content . 
any gas not converted into stars was soon lost from the proto - cluster , blown in a wind powered by high velocity stellar winds from massive stars and supernova explosions . it is reasonable to assume that the first cluster stellar generation arose from a burst of star formation , of duration shorter than the lifetime of the most massive stars ( i.e. , @xmath90 myr ) . then the subsequent 20 - 30 myr were dominated by massive stars , whose winds and supernovae cleared the protocluster of any residual gas . after the last core - collapse supernova , i.e. , 20 - 30 myr since the beginning , conditions were finally established favoring the accumulation of the low - velocity winds from s - agb / agb stars . at the beginning of the accumulation the helium abundance of such material was that pertaining to the ejected envelopes of @xmath91 stars , or @xmath92 according to the estimate presented in section 3.1 . later , as agb stars of lower and lower mass were ejecting their envelopes , contributing material less enriched in helium , the helium abundance in the ism kept steadily decreasing , down to @xmath93 some 300 myr after the start . subsequently , as the agb started to be populated by @xmath94 stars , addition of fresh helium from the 3du replaced that from the 2du , but along with it carbon and s - process elements also began to accumulate . clearly , the time interval between 20 - 30 myr and @xmath95 myr is the most propitious epoch for producing subsequent stellar generations with a helium abundance and overall composition close to that demanded by the observations . more demanding is , however , the requirement of producing a second or a third generation of the observed mass . for example , in the case of @xmath0 cen , according to eq . ( 3 ) its present first generation of @xmath96 produced only @xmath97 of fresh helium , even allowing for a full s - agb contribution , a factor of @xmath98 less than the observational requirement estimated in section 2.1.1 . we encounter here the main difficulty of the agb scenario , as already pointed out by several authors ( e.g. , karakas et al . 2006 ; bekki & norris 2006 ; karakas & lattanzio 2007 ) , i.e. , requiring a first generation of stars much more massive than that still harbored by the cluster . this difficulty is further exacerbated if the actual efficiency of star formation is less than unity . however , the helium abundance in agb / s - agb stars @xmath99 is not in sharp contrast with the observational requirements from the two cases with well established multiple main sequences ( @xmath0 cen and ngc 2808 ) , where in both cases @xmath100 . thus , agb / s - agb stars remain viable candidates , though many more of them are needed than the agb / s - agb share of the first generation we now see in these clusters . one may think that one way of having more agb / s - agb stars is to invoke a flat imf . however , this may work only if the imf of the subsequent generations is different from that of the first one , and in particular much steeper than it : a very contrived scenario . otherwise , if the imfs are the same , a flat imf yields more helium from the first generation , but the amount of fresh helium demanded by the second generation increases by just as much , and the discrepancy by a factor of @xmath98 remains . thus , a flat imf does not solve the problem .
it is very difficult to prove or disprove those models of fast rotating massive stars ( frms ) in which helium is mixed in the stellar envelopes by meridional circulations , and slowly lost in an outflowing disk . thus , i take the frms scenario as described in decressin et al . ( 2007 ) at face value . one problem with this scenario is that it assumes that such disks survive long enough to produce new stars , in spite of their impervious environment in which they are bombarded from all directions by fast stellar winds and supernova explosions . but even admitting that disks manage to deliver new stars , such stars will reflect the helium abundance of the disks they are born from , which varies from one massive star to another , and for any given stars varies as a function of its evolutionary stage . thus , stars born out of such disks will inevitably show a spread in helium abundances , and one would have broad gc main sequences rather than multiple ones as demanded by the observations . the existence of multiple , discrete stellar populations in @xmath0 cen and ngc 2808 rules out this frms scenario as a viable one to explain the helium rich populations . nevertheless , it is possible that meridional circulations are at work in massive stars , and bring fresh helium to the surface . if so , some meridional mixing and helium enhancement may not be confined to the very massive stars exploding as supernovae . some helium enrichment might also take place during the main sequence phase of stars less massive than @xmath91 , which will later deliver such helium during their agb / s - agb phase . therefore , if meridional mixing of helium exists in @xmath101 main sequence stars , it would alleviate the difficulty for the agb / s - agb scenario outlined in the previous subsection , in particular concerning the helium abundance , yet by an amount that is hard to guess theoretically . spectroscopic observations of @xmath101 stars appear to be the only way to assess whether helium enrichment does indeed take place , either directly from helium line strengths in hot stars , or indirectly from c : n : o ratios indicative of deep mixing . contrary to the case of ngc 2808 , in @xmath0 cen the helium - enriched population identified by the blue main sequence is also enriched in iron , and contains some @xmath102 of fresh iron that was not initially present in the first stellar population ( cf . section 2.1.1 ) . if coming from relatively prompt type ia supernova events , each contributing @xmath103 of iron ( e.g. , iwamoto et al . 1999 ) , at least @xmath104 type ia supernovae ( snia ) from the first stellar generation had to explode within the helium enriched ism , and do so within the first @xmath105 yrs . this looks quite plausible given the wide variety of distributions of snia delay times that theoretical models can generate ( e.g. greggio 2005 ) . however , if the excess iron were produced by snia s , then the secondary population would have lower @xmath106-element to iron ratios , being selectively enriched only in iron . this is at variance with the observed [ ca / fe ] , [ mg / fe ] and [ si / fe ] ratios in @xmath0 cen rgb stars , which show no dependence on the iron abundance ( norris & da costa 1995 ; smith et al . 2000 ; pancino et al . thus , this excludes snia s for being responsible for the iron enrichment , and favours core collapse supernovae ( ccsn ) , which along with iron produce also @xmath106 elements . 
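The passage that follows spells out the supernova bookkeeping behind this conclusion; as a preview of the arithmetic only, the sketch below uses assumed per-event iron yields and an assumed core-collapse rate per unit mass of stars formed. Apart from the fresh-iron mass quoted just below and the factor of roughly ten between the two yields, every figure here is an illustrative placeholder, not a value taken from this text.

```python
# Assumed, illustrative numbers for the iron budget of the helium-rich population.
M_FE_FRESH    = 25.0      # Msun of fresh iron (figure quoted in the text below)
FE_PER_SNIA   = 0.7       # assumed Fe yield per SN Ia [Msun]
FE_PER_CCSN   = 0.07      # assumed Fe yield per core-collapse SN (~10x smaller, as stated)
CCSN_PER_MSUN = 1.0 / 100 # assumed: roughly one CCSN per 100 Msun of stars formed
M_FIRST_GEN   = 2.5e7     # assumed first-generation mass, ~10x the present cluster

n_snia_needed = M_FE_FRESH / FE_PER_SNIA     # a few tens of SNe Ia, or ...
n_ccsn_needed = M_FE_FRESH / FE_PER_CCSN     # ... a few hundred core-collapse SNe
n_ccsn_made   = CCSN_PER_MSUN * M_FIRST_GEN
trapped       = n_ccsn_needed / n_ccsn_made  # fraction of the CC ejecta that must be retained

print(round(n_snia_needed), round(n_ccsn_needed), round(n_ccsn_made), f"{trapped:.1%}")
```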
on average , each ccsn produces @xmath107 of iron ( hamuy 2003 ) , roughly 10 times less than each snia . thus , a few hundred ccsne are needed to produce the 25 @xmath62 of iron in the helium - rich population of @xmath0 cen . with the adopted imf , one ccsn is produced every @xmath108 of gas turned into stars , and therefore with its @xmath109 the first stellar generation in @xmath0 cen has produced @xmath110 ccsne . if the progenitor was at least 10 times more massive than the present cluster ( see below ) , then over @xmath111 ccsne had been produced . thus , it is sufficient that @xmath112 of their ejecta were trapped inside the protocluster while mass lost by s - agb / agb stars had already started to accumulate . a very small time overlap between ccsn events and the appearance of s - agb / agb stars is necessary for this to happen , consistent with the short timescale ( @xmath90 myr ) postulated for the formation of the first stellar generation ( cf . section 4.2.1 ) . note that these supernova explosions should have avoided to trigger any major star formation , otherwise a continuous distribution of helium and iron abundances would have resulted . what triggered the major star formation burst leading to the second generation remains unidentified . no attempt is made here to speculate on the full star formation history in @xmath0 cen , which is much more complex given the identification of its 5 sub - populations ( sollima et al . 2005 ; lee et al . 2005 ; villanova et al . in the previous sections it has been argued that only the agb / s - agb scenario remains viable to account for the helium - enriched sub - populations . still , with a serious difficulty to overcome , plus some minor ones . if our current understanding of the helium enrichment in intermediate mass stars is not grossly incorrect , then the mass of the first stellar population in a cluster such as @xmath0 cen had to be several times larger than the present mass of the cluster . following bekki & norris ( 2006 ) this difficulty can be solved if clusters such as @xmath0 cen and ngc 2808 are the remnant nucleus of a nucleated dwarf galaxy that was torn apart by the tidal field of the galaxy . hence the parent first population providing the necessary raw material for the successive generation(s ) would have been much more massive than these clusters are today . some circumstantial evidence for this now widely entertained scenario is the finding that m54 , one of the most massive gcs in the galaxy , is indeed associated with the sagittarius dwarf , albeit there is no evidence for multiple populations _ within _ this cluster ( siegel et al . 2007 ) . the nucleated dwarf ngc 205 may be another example relevant to this scenario : with its tidal stream towards m31 ( mcconnachie et al . 2004 ) it may represent an early stage of the process suggested by bekki & norris . its nucleus is dominated by old stellar populations , and yet it is quite bright in the wfpc2 ultraviolet f225w and f185w passbands ( cappellari et al . if the uv light comes from an extended hb ( such as that of @xmath0 cen and ngc 2808 ) it may also harbor helium - enriched populations , and would represent a possible testbed for the nucleated dwarf scenario of bekki & norris ( 2006 ) . of course , for the nucleated dwarf scenario to work , successive stellar populations must have a much lower probability of being tidally stripped compared to the first population , otherwise the mass discrepancy would remain . 
this can only be achieved if the successive starbursts are far more centrally concentrated compared to the first one , i.e. , the agb / s - agb ejecta from the first generation should collapse to the very bottom of the potential well before leading to star formation . in this connection , it is quite reassuring that the helium - enriched main sequence stars in @xmath0 cen are indeed markedly more centrally concentrated than the others ( cf . section 2.1.1 ) . besides reproducing the required helium enhancement , a successful theory for the origin of multiple stellar populations in massive gcs should also account for the observed abundance and distribution of cno and other intermediate mass elements . several authors ( e.g. , karakas et al . 2006 ; romano et al . 2007 ; choi & yi 2008 ) have shown that yields of existing agb models fail to satisfy this constraint , as , along with helium , the stellar ejecta would also be enriched in carbon from the 3du . the question is therefore whether one should conclude that the fresh helium does not originate from agb stars , or that the adopted theoretical yields are not correct . given the current uncertainties affecting the massive agb models ( especially due to the treatment of mass loss and convective overshooting ) it seems worth keeping a pragmatic approach , hence taking existing agb models with special caution . the schematic agb evolution described in section 3.1 is quite plausible given our ignorance of mass loss in bright agb / s - agb stars with hbb , and does not conflict with existing observations . actually , it allows keeping agb / s - agb stars as viable helium producers for the multiple stellar generations in globular clusters . for the present scenario to work it is essential that the agb ejecta accumulate for a fairly long time ( @xmath105 yr ) without any significant star formation before suddenly a major fraction of the interstellar medium is turned into stars by a burst . this may not be such an _ ad hoc _ assumption , given that star formation in sporadic bursts appears to be the norm for dwarf galaxies ( gerola , sneden & schulman 1980 ) , as also demonstrated by the discrete multiple generations in dwarfs such as carina ( e.g. , monelli et al . 2003 ) . in the case of ngc 2808 there had to be a second and a third stellar generation . if the scenario presented in this paper is basically correct , then the most helium rich secondary sub - population would have been the first to form out of the most massive agb / s - agb ejecta , which are the most helium rich . then the ism was replenished again by the ejecta of the less massive agb stars from the first generation , plus perhaps a contribution by the secondary generation . if so , then the generation identified by the _ middle _ of the three main sequences in this cluster was the last to form . worth briefly mentioning are those observational studies that may help test the plausibility of the agb / s - agb scenario proposed in this paper . some of these tests concern the adopted agb / s - agb evolution and nuclear yields , in particular concerning @xmath51 to @xmath80 stars . it would be interesting to test whether the surface helium and nitrogen abundances are enhanced in @xmath113 to @xmath80 main sequence stars , possibly by the meridional circulation process advocated by maeder & meynet ( 2006 ) , albeit high rotational velocities may hamper accurate abundance determinations , and massive stars as metal poor as the stars in @xmath0 cen and ngc 2808 are not within reach . 
direct observations of bolometrically very luminous agb / s - agb stars in the magellanic clouds with very high mass loss rates should help in understanding the crucial , final evolutionary stages of intermediate mass stars . moreover , high resolution spectroscopy of very large samples ( several hundreds ) of stars in the various sub - populations in globular clusters should help identify unequivocal chemical signatures of the _ donors _ of the materials out of which these sub - populations have formed . these kinds of studies are now possible thanks to the high multiplex multiobject spectrographs at 8 - 10 m class telescopes , such as , e.g. , flames at the vlt ( sollima et al . 2005 ; carretta et al . 2006 ; villanova et al . finally , the multifrequency study of nucleated dwarfs in and around the local group may help test whether the most massive globular clusters may have originated from the tidal stripping of these objects . in conclusion , excluding scenarios that _ qualitatively _ conflict with observations , such as accretion , fast rotating massive stars or population iii stars , turns out to be much easier than proving others that qualitatively appear to work , but may have _ quantitative _ difficulties . this is the case for the agb / s - agb scenario advocated in this paper , where the predicted helium abundance in the secondary populations admittedly falls a little short of the highest values suggested in the literature ( @xmath114 ) . in this respect , one should consider that helium abundance estimates are affected by uncertainties that must be of the order of a few times 0.01 , hence no macroscopic discrepancy appears to exist . still , it would help if , besides the 2du , other processes contribute a little additional fresh helium in @xmath51 to @xmath115 stars . additional helium may come from the hbb process operating during the agb / s - agb phase , and/or from meridional circulations during the main sequence phase of the progenitors of agb / s - agb stars . i would like to thank giampaolo piotto for numerous stimulating discussions on the multiple populations in globular clusters and for a critical reading of the manuscript . useful suggestions on the metal enrichment by supernovae are acknowledged from laura greggio . finally , i would like to thank the anonymous referee for his / her many constructive comments . becker s.a . & iben i.jr . 1979 , apj , 232 , 831 bedin l.r . , piotto g. , anderson j. , cassisi s. , king i.r . , momany y. , carraro g. 2004 , apj , 605 , l125 bekki , k. & norris , j.e . 2006 , apj , 637 , l109 bergeron p. , saffer r.a . , liebert j. 1992 , apj , 394 , 228 blöcker t. , schönberner d. 1991 , a&a , 244 , l43 bragaglia a. , renzini a. , bergeron p. 1995 , apj , 443 , 735 busso g. , cassisi s. , piotto g. , castellani m. , romaniello m. , catelan m. , djorgovski s.g . , recio - blanco a. , et al . 2007 , a&a , 474 , 105 cannon , r.d . & stobie , r.s . 1973 , mnras , 162 , 207 cappellari m. , bertola f. , burstein d. , buson l.m . , greggio l. , renzini a. 1999 , apj , 515 , l17 carretta e. , bragaglia a. , gratton r.g . , leone f. , recio - blanco , a. , lucatello s. 2006 , a&a , 450 , 523 carretta e. , bragaglia , a. , gratton , r.g . , momany y. , recio - blanco a. , cassisi s. , françois p. , james g. , et al . 2007 , a&a , 464 , 967 cassisi s. , salaris m. , pietrinferni a. , piotto g. , milone a.p . , bedin l.r . , anderson j. 2008 , apj , 672 , l115 choi e. , yi s.k . 2007 , mnras , 375 , l1 choi e. , yi s.k . 2008 , arxiv:0804.1598 chuzhoy l. 
2006 , mnras , 369 , l52 dantona f. , caloi v. 2004 , apj , 611 , 871 dantona f. , gratton r. , chieffi a. 1983 , mem . s. a. it . , 54 , 173 dantona f. , ventura p. 2007 , mnras , 379 , 1431 decressin t. , charbonnel c. , meynet g. 2007 , a&a , 475 , 859 decressin t. , meynet g. , charbonnel c. , prantzos n. , ekstrm s. 2007 , a&a , 464 , 1029 freeman k.c . , rodgers , a.w . 1975 , apj , 201 , l71 gerola h. , sneden p.e . , schulman l.s . 1980 , apj , 242 , 517 gratton r. , lucatello s. , bragaglia a. , carretta e. , cassisi s. , momany y. , pancino , e. , valenti , e. , et al . 2007 , a&a , 464 , 953 gratton r. sneden c. , carretta e. 2004 , ara&a , 42 , 385 greggio l. 2005 , a&a , 441 , 1055 groenewegen m.a.t . , de jong t. 1993 , a&a , 267 , 410 hamuy m. 2003 , apj , 582 , 905 herwig f. 2004 , apjs , 155 , 651 iben , i.jr . & renzini , a. 1984 , physics reports , 105 , no . 6 iwamoto k. , brachwitz f. , nomoto , k. , kishimoto n. , umeda h. , hix w.r . , thielemann f .- k . 1999 , apjs , 125 , 439 izzard r.g . , tout c.a . , karakas a.i . , pols o.r . 2004 , mnras , 350 , 407 johnson c.i . , pilachowski c , a . , simmerer j. , schwenk d. 2008 , apj , 681 , 1505 karakas a. , fenner y. , sills a. , campbell s.w . , lattanzio j.c . 2006 , apj , 652 , 1240 karakas a. , lattanzio , j.c . 2007 , publ . astr . soc . australia , 24 , 103 koester d. , napiwotzki r. , christlieb n. , drechsel h. , hagen h .- j . , heber u. , homeier d. , karl c. , et al . 2001 , a&a . 378 , 556 kraft r.p . 1979 , ara&a , 17 , 309 kraft r.p . 1994 , pasp , 106 , 553 lee y .- w . , m . , sohn y .- j . , rey s .- c . , lee h .- c . , walker a.r . 1999 , nature , 402 , 55 lee y .- w . , joo s .- j . , han s .- i . , chung c. , ree c.h . , sohn y .- j . , kim y .- c . , yoon , s .- j . , et al . 2005 , apj , 621 , l57 maeder a. , meynet , g. 2006 , a&a , 448 , l37 maraston , c. 2005 , mnras , 362 , 799 marigo p. , girardi l.2007 , a&a , 469 , 239 mcconnachie a.w . , irwin m.j . , lewis g.f . , ibata r.a . , chapman s.c . , ferguson a.m.n . , tanvir n.r . 2004 , mnras , 351 , l94 meynet g. , decressin t. , charbonnel c. 2008 , mem.s . a. it . , 79 , 584 milone a.p . , bedin l.r . , piotto g. , anderson j. , king i.r . , sarajedini a. , dotter a. , chaboyer b. , et al . 2008 , apj , 673 , 241 moehler s. , sweigart a.v . 2006 , a&a , 455 , 943 monelli m. , pulone l. , corsi c.e . , castellani m. , bono g. , walker a.r . , brocato e. , buonanno r. , et al . 2003 , aj , 126 , 218 newsham g. , terndrup d.m . 2007 , apj , 664 , 332 nomoto k. 1984 , apj , 277 , 791 norris j.e . 2004 , apj , 612 , l25 norris j.e . , da costa g.s . 1995 , apj , 447 , 680 pancino e. , ferraro f.r . , bellazzini m. , piotto g. , zoccali m. 2000 , apj , 534 , l83 pancino e. , pasquini l. , hill v. , ferraro f.r . , bellazzini m. 2002 , apj , 568 , l101 piotto , g. 2008 , mem . s. a. it . , 79 , 334 piotto , g. , et al . 2005 , apj , 621 , 777 piotto g. , bedin l.r . , anderson j. , king i.r . , cassisi s. , milone a.p . , villanova s. , pietrinferni a. , et al . 2007 , apj , 661 , l53 poelarends a.j.t . , herwig , f. , langer , n. , heger , a.2008 , apj , 675 , 614 pritzl b. , smith h.a . , catelan m. , sweigart a.v . 2000 , apj , 530 , l41 pryor c. , meylan g. 1993 , in structure and dynamica of globular clusters , ed . djorgovski & g. meylan , asp conf . 50 , 357 pumo m.l . , dantona f. , ventura p. 2008 , apj , 672 , l25 recio - blanco a. , de laverny p. 2007 , a&a , 461 , l13 renzini a. 1983 , mem . s. a. it . , 54 , 335 renzini a. 
& voli , m. 1981 , a&a , 94 , 175 rich r.m . , sosin c. , djorgovski s.g . , piotto g. , king i.r . , renzini a. , phinney e.s . , et al 1997 , apj , 484 , l25 ritossa c. , garcia - berro e. , iben i.jr . 1996 , apj , 460 , 489 ritossa c. , garcia - berro e. , iben i.jr . 1999 , apj , 515 , 381 romano d. , matteucci f. , tosi m. pancino e. , bellazzini m. , ferraro f.r . , limongi m. , sollima a. 2007 , mnras , 376 , 405 siegel m.h . , dotter a. , majewski s.r . , sarajedini a. , chaboyer b. , nidever d.l . , anderson j. , marn - franch a. , et al . 2007 , apj , 667 , l57 sirianni m. , jee m.j . , bentez n. , blakeslee j.p . , martel a.r . , meurer g. , clampin m. , de marchi g. , et al . 2005 , pasp , 117 , 1049 smith v.v . , suntzeff n.b . , cunha k. , gallino r. , busso m. , lambert d.l . , straniero o. 2000 , aj , 119 , 1239 smith v.v . , cunha k. , ivans i.i . , lattanzio j.c . , campbell s. , hinkle k. 2005 , apj , 633 , 392 sollima a. , pancino e. , ferraro f.r . , bellazzini m. , straniero o. , pasquini l. 2005 , apj , 634 , 332 sweigart a. v. 1997 , apj 474 , l23 sweigart a. v. , catelan m. 1998 , apj , 501 , l63 sweigart a.v . , mengel j.g . 1979 , apj , 229 , 624 tsujimoto t. , shigeyama t. , suda y. 2007 , apj , 654 , l139 ventura p. , dantona f. 2005 , a&a , 431 , 279 ventura p. , dantona f. 2008 , a&a , 479 , 805 villanova s. , piotto g. , king i.r . , anderson j. , bedin l.r . , gratton r.g . , cassisi s. , momany y. , et al . 2007 , apj , 663 , 296 yoon s .- j . , podsiadlowski ph . , rosswog s. 2007 , mnras , 380 , 933 yong d. , grundahl f. 2008 , apj , 672 , l29 yong d. , grundahl f. , johnson j.a . , asplund m. 2008 , arxiv:0806.0187
the various scenarios proposed for the origin of the multiple , helium - enriched populations in massive globular clusters are critically compared to the relevant constraining observations . among accretion of helium - rich material by pre - existing stars , star formation out of ejecta from massive agb stars or from fast rotating massive stars , and pollution by population iii stars , only the agb option appears to be viable . accretion or star formation out of outflowing disks would result in a spread of helium abundances , thus failing to produce the distinct , chemically homogeneous sub - populations such as those in the clusters @xmath0 cen and ngc 2808 . pollution by population iii stars would fail to produce sub - populations selectively enriched in helium while maintaining the same abundance of heavy elements . still , it is argued that for the agb option to work two conditions should be satisfied : i ) agb stars experiencing the hot bottom burning process ( i.e. , those more massive than @xmath1 ) should rapidly eject their envelope upon arrival on the agb , thus experiencing just a few third dredge - up episodes , and ii ) clusters with multiple , helium enriched populations should be the remnants of much more massive systems , such as nucleated dwarf galaxies , as indeed widely assumed . keywords : ( galaxy : ) globular clusters : general ; ( galaxy : ) globular clusters : individual : @xmath0 cen , ngc 1851 , ngc 2808 , ngc 6388 , ngc 6441 , m54 ; stars : agb and post - agb
endoscopic screening for gastric cancer has been carried out in tottori and yonago since 2000 . local governments have performed radiographic screening and endoscopic screening for gastric cancer in both cities . all individuals aged 40 years and above can participate in the gastric cancer screening programs . individuals can choose either endoscopy or radiography for gastric cancer screening based on their preference . although the introduction of endoscopic screening has increased , the participation rate in gastric cancer screening involving both methods has remained at approximately 25%.14 physicians who carried out the endoscopic screening were approved by the local committee for gastric cancer screening based on certain requirements.14 although endoscopic screening has been performed in clinical settings , the results have been evaluated based on monitor screen review by the local committee , including experienced endoscopists in each city . the study subjects were selected from the participants of gastric cancer screening in tottori and yonago between 2007 and 2008 . there were 28 782 participants in tottori and 23 753 participants in yonago . the subjects were defined as participants aged 40 - 79 years who had no gastric cancer screening in the previous year . the following cases were excluded : ( i ) subjects who had registry duplication ; and ( ii ) subjects who had a history of gastric cancer . the selected subjects were divided into two groups , the endoscopic screening group and radiographic screening group , according to the first screening method used from 2007 to 2008 . all cancer deaths except gastric cancer death and allcauses deaths except gastric cancer death were assessed to ensure comparability between the two groups . mortality data were obtained by linkage to the residential registrations of each city and the tottori cancer registry ( tottori , japan ) . followup of gastric cancer incidence and mortality was continued from the date of the first screening to the date of gastric cancer diagnosis or up to december 31 , 2013 . differences in the proportions of both screening groups were compared using the chi - squared test and student 's t - test . a cox proportional hazards model was used to estimate the relative risk ( rr ) of incident gastric cancer , gastric cancer death , all cancer deaths except gastric cancer death , and allcauses deaths except gastric cancer death . unadjusted and adjusted rrs by sex , age group , and resident city were calculated . the cumulative hazard values of gastric cancer incidence and mortality were estimated by the nelson - aalen method and plotted on graphs . all test statistics were twotailed , and pvalues of < 0.05 were considered to indicate a statistically significant difference . analyses were carried out using stata 13.0 ( stata , college station , texas , usa ) . this study was approved by the institutional review board of the national cancer center of japan ( tokyo , japan ) . 
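the adjusted relative risks and cumulative hazard curves described above follow a standard right - censored survival workflow . a minimal sketch of that workflow is given below , assuming a hypothetical per - participant table ; the column names , and the use of the python lifelines package , are illustrative assumptions and not part of the original analysis , which was carried out in stata 13.0 .

```python
# Minimal sketch of the survival workflow described above (illustrative only).
# Assumes a hypothetical per-participant table; the original analysis used Stata 13.0.
import pandas as pd
from lifelines import CoxPHFitter, NelsonAalenFitter

# Hypothetical columns: follow-up time in months, event indicator (1 = gastric
# cancer death), screening modality (1 = endoscopy, 0 = radiography), plus the
# adjustment covariates used in the paper (sex, age group, resident city).
df = pd.read_csv("cohort.csv")  # placeholder file name
df = pd.get_dummies(df, columns=["sex", "age_group", "city"], drop_first=True)

# Cox proportional hazards model: the coefficient on `endoscopy` gives the
# adjusted hazard ratio (reported in the paper as an adjusted relative risk).
cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="gc_death")
cph.print_summary()  # hazard ratios with 95% confidence intervals

# Nelson-Aalen cumulative hazard, estimated separately for the two screening arms
# (this is how curves of the kind shown in Fig. 2 would be produced).
for arm, label in [(1, "endoscopy"), (0, "radiography")]:
    sub = df[df["endoscopy"] == arm]
    naf = NelsonAalenFitter()
    naf.fit(sub["months"], event_observed=sub["gc_death"], label=label)
    naf.plot_cumulative_hazard()
```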
the procedure used for the selection of the target population is shown in figure 1 . a total of 52 535 subjects participated in gastric cancer screening in tottori and yonago from 2007 to 2008 . of these , subjects who were more than 80 years old at the first screening were excluded from the target group , as they were not within the actual target age for cancer screening . a total of 14 394 subjects were selected as they had no gastric cancer screening history in the previous year . three patients who had duplication on the participant list for gastric cancer screening were excluded from the target group for the analysis . there were 117 subjects who were identified as having a history of gastric cancer by linkage to a local cancer registry , and they were also excluded from the target group . the remaining 14 274 subjects were finally divided into two groups according to the first screening procedure as follows : endoscopic screening group ( n = 9950 ) , and radiographic screening group ( n = 4324 ) . 
flowchart of the selection process for the study target group to compare endoscopic and radiographic screening for gastric cancer . a total of 52 535 subjects participated in gastric cancer screening in tottori and yonago ( japan ) from 2007 to 2008 , of which 5720 participants were not within the target age for the analysis ( aged more than 80 years at the time of first screening ) . a total of 14 394 subjects were selected as they had no gastric cancer screening history in the previous year . three patients who had duplication on the participant list of gastric cancer screening from 2007 to 2008 were excluded from the target group of the analysis . there were 117 subjects who were identified as having a history of gastric cancer by linkage to a local cancer registry and they were also excluded from the target group . the remaining 14 274 subjects were divided into two groups according to the first screening procedure : endoscopic screening group ( n = 9950 ) , and radiographic screening group ( n = 4324 ) . the results of the comparison of the basic characteristics of the endoscopic screening group and radiographic screening group are shown in table 1 . the proportion of female subjects was significantly higher than that of male subjects in both groups . the proportion of the 70 - 79 years age group was significantly lower in the radiographic screening group than in the endoscopic screening group ( p < 0.001 ) . during the 6year followup period , the screening frequency was 2.3 for the endoscopic screening group and 2.2 for the radiographic screening group ( p = 0.988 ) . during the followup period , very few subjects of the endoscopic screening group had also been screened by radiography . in contrast , subjects in the radiographic screening group had two screenings on average , one radiographic and one endoscopic screening . [ table : comparison of participants between endoscopic screening and radiographic screening for gastric cancer . ] during the 6year followup period , 127 gastric cancers were diagnosed in the endoscopic screening group and 41 gastric cancers in the radiographic screening group ( table 2 ) . approximately half of the subjects were aged 70 - 79 years and the proportions of the age group were nearly equal in both screening groups ( p = 0.365 ) . although the proportion of localized cancers was higher in the endoscopic screening group than in the radiographic screening group , the stage distribution was similar in both groups ( p = 0.276 ) . [ table : comparison of detected gastric cancers between endoscopic screening and radiographic screening . ] the mean followup period was 66.6 ± 0.9 months ( 95% confidence interval [ ci ] , 66.4 - 66.7 ) . the gastric cancer incidence was 233.7 per 100 000 personyears in the endoscopic screening group and 172.1 per 100 000 personyears in the radiographic screening group ( table 3 ) . although the gastric cancer incidence of the endoscopic screening group was higher than that of the radiographic screening group , it was not significantly different ( unadjusted rr = 1.168 , 95%ci , 0.804 - 1.695 ; adjusted rr = 0.988 , 95%ci , 0.679 - 1.438 ) . during the followup years , cumulative hazard values of gastric cancer incidence were nearly equal between the radiographic screening group and the endoscopic screening group ( fig . ) . [ table footnote : relative risks ( rr ) and 95% confidence intervals ( ci ) of endoscopic screening adjusted by sex , age group ( 40 - 59 years , 60 - 69 years , and 70 - 79 years ) , and resident city . ] 
[ figure : cumulative hazard values of gastric cancer incidence ( a ) and mortality ( b ) in followup years , estimated by the nelson - aalen method . ] after the 6year followup period , seven subjects from the endoscopic screening group and eight from the radiographic screening group died of gastric cancer . the gastric cancer death rate was 12.7 per 100 000 personyears in the endoscopic screening group and 33.1 per 100 000 personyears in the radiographic screening group ( table 3 ) . although the unadjusted rr was not statistically significant ( unadjusted rr = 0.384 , 95%ci , 0.139 - 1.060 ) , the subjects screened by endoscopy showed a 67% mortality reduction from gastric cancer compared with the subjects screened by radiography when the rr was adjusted by sex , age group , and resident city ( adjusted rr = 0.327 , 95%ci , 0.118 - 0.908 ) . the cumulative hazard of gastric cancer mortality remained nearly similar in both screening groups until 3 years of followup , but the difference subsequently widened ( fig . ) . after the 6year followup period , 111 subjects of the endoscopic screening group and 41 subjects of the radiographic screening group died from all cancer deaths excluding gastric cancer death . the rate of all cancer deaths excluding gastric cancer death was 201.8 per 100 000 personyears in the endoscopic screening group and 169.5 per 100 000 personyears in the radiographic screening group ( table 3 ) . a total of 264 subjects of the endoscopic screening group and 104 subjects of the radiographic screening group died from allcauses deaths excluding gastric cancer death . the rate of allcauses deaths excluding gastric cancer death was 480.0 per 100 000 personyears in the endoscopic screening group and 430.1 per 100 000 personyears in the radiographic screening group ( table 3 ) . the adjusted rr of the endoscopic screening group was 0.968 ( 95%ci , 0.675 - 1.387 ) for all cancer deaths except gastric cancer death and 0.929 ( 95%ci , 0.740 - 1.168 ) for allcauses deaths except gastric cancer death . 
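as a cross - check , the person - years implied by the reported incidence counts and rates reproduce both the gastric cancer death rates and the unadjusted relative risk quoted above . a short sketch of that arithmetic is given below ; python is used purely for illustration , and only figures reported in the text enter the calculation .

```python
# Consistency check of the person-year rates and the unadjusted relative risk,
# using only figures reported in the text.
gc_cases  = {"endoscopy": 127, "radiography": 41}
gc_rate   = {"endoscopy": 233.7, "radiography": 172.1}  # incidence per 100 000 person-years
gc_deaths = {"endoscopy": 7, "radiography": 8}

# Person-years implied by the incidence counts and rates.
person_years = {g: gc_cases[g] / gc_rate[g] * 1e5 for g in gc_cases}

death_rate = {g: gc_deaths[g] / person_years[g] * 1e5 for g in gc_cases}
rr_unadjusted = death_rate["endoscopy"] / death_rate["radiography"]

print(person_years)   # roughly 54 000 vs 24 000 person-years
print(death_rate)     # roughly 12.9 vs 33.6 per 100 000 person-years
print(rr_unadjusted)  # close to the reported unadjusted RR of 0.384
```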
the possibility of reducing mortality from gastric cancer by radiographic screening has been mainly reported in japan.6 although radiographic equipment for the upper gastrointestinal series has been improved , the sensitivity range of radiographic screening has remained from 80% to 90%.16 , 17 , 18 , 19 to evaluate mortality reduction from gastric cancer by radiographic screening , case control studies were mostly carried out until 1995 , and then cohort studies were started for followup from the early 1990s . the subjects compared in these studies were individuals who had no screening history and had been treated by the usual care as needed . in 1996 , the total number of upper gastrointestinal endoscopy procedures carried out was 73 879 in hospitals and 149 848 in outpatient clinics per month.20 however , the total number of upper gastrointestinal endoscopic examinations carried out in 2011 increased to 521 936 in hospitals and 392 773 in outpatient clinics per month,20 with endoscopic examination becoming a more common technique in medical services in japan . however , the opportunity to be examined by endoscopy has rapidly increased according to the increase in the total number of upper gastrointestinal endoscopy procedures conducted . a recent case control study particularly showed that mortality reduction could not be obtained by radiographic screening.10 the impact of radiographic screening may be decreased depending on the periods when the evaluation studies were carried out . therefore , to evaluate the effectiveness of endoscopic screening for gastric cancer , radiographic screening can be used for comparison . although the gastric cancer mortality in the endoscopic screening group was found to be lower than that in the radiographic screening group , the gastric cancer incidence and the stage distribution of diagnosed cancer were similar in both screening groups . as the proportion of the unknown stage of the endoscopic screening group was higher than that of the radiographic screening group , there might be more patients with early stage cancer included in the endoscopic screening group than in the radiographic screening group . in japanese studies , the proportion of early stage cancer , which constitutes tumor showing invasion within the gastric submucosa , based on the definition of the japanese gastric cancer association,21 was usually approximately 70% in the radiographic screening group22 and more than 80% in the endoscopic screening group.23 hosokawa et al.15 previously reported that the detection rate of early cancer was higher in the endoscopic screening group than in the radiographic screening group , and the stage distribution was different in both groups . endoscopy can diagnose more early stage cancers that can be treated by endoscopic surgical dissection . in fact , endoscopic surgical dissection has been carried out for approximately half of early stage cancers detected by endoscopic screening.23 the difference in the cumulative hazard of gastric cancer mortality widened after 3 years from the first screening . this indicates that the detection of early stage cancer was initially achieved and then the gap of cumulative hazard of gastric cancer mortality widened between endoscopic screening and radiographic screening . early stage gastric cancer takes approximately 44 months to become advanced stage gastric cancer.24 this fact has to be taken into consideration when aiming for mortality reduction from gastric cancer by endoscopic screening . 
although detecting more early stage gastric cancer is advantageous for endoscopic screening , cases of overdiagnosis might also be included . currently , there are no reports of overdiagnosis by gastric cancer screening using radiography and endoscopy . however , the numbers of cancers detected by endoscopic screening have reportedly been twice the expected numbers.25 these excess cases include overdiagnosis cases and early stage cancers that progress to advanced stage cancers . to further validate evidence of the effectiveness of endoscopic screening for gastric cancer , additional studies to evaluate mortality reduction from gastric cancer by endoscopic screening are warranted . the relative risks of all cancer mortality excluding gastric cancer death and allcause mortality excluding gastric cancer death were nearly equal between the endoscopic screening group and the radiographic screening group . however , to compare mortality reduction from gastric cancer between endoscopic screening and radiographic screening , the background differences between the endoscopic screening group and the radiographic screening group should be considered . the age of the participants in the endoscopic screening group was more advanced than that of the participants in the radiographic screening group.10 individuals aged more than 70 years could be screened by physicians using endoscopy in their own private practice . as the number of younger people with family physicians was smaller than that of older people with family physicians , there was little opportunity for the younger people to be tested in clinical practice . helicobacter pylori infection is a major cause of gastric cancer,26 and the difference in age distribution also affects the h. pylori infection rate . although the h. pylori infection rate has decreased in japan , the rate has remained higher in individuals aged 70 years and over than in individuals aged 40 - 69 years.27 as the proportion of individuals aged 70 years and over was higher in the endoscopic screening group than in the radiographic screening group , the risk for gastric cancer might be higher in endoscopic screening than in radiographic screening . however , we could not obtain the h. pylori infection rates at the first screening in both screening groups . lifestyle behaviors could also be a risk factor for gastric cancer ; in particular , high salt intake and smoking are associated with gastric cancer.28 , 29 , 30 the smoking rate is reportedly higher in tottori prefecture than the national average and the rates decrease according to age in men and women.31 salt intake in tottori prefecture is reportedly similar to the national average and the differences between age groups are small.31 although fukao et al.32 reported differences in family history and smoking between participants and nonparticipants in gastric cancer screening , the difference in the backgrounds between the endoscopic and radiographic screening groups is unclear . we could not use the results of the questionnaire survey administered at the time of screening participation because there were no questions regarding salt intake or smoking . 
first , the quality of the tottori cancer registry was not optimal ; the percentage of death - certification - only cases was 15.1% in 2007 , which was lower than the national average.33 in japan , cancer registries have not yet been prepared at the national level , and the registry method , as of 2014 , has not yet been standardized.34 , 35 as the registration of gastric cancers remains insufficient , differences in the detected cancers by each screening group might not have been fully clarified . second , there was no information as to whether or not the patients participated in opportunistic screenings . third , the subjects of the radiographic screening group had been screened by endoscopy once during the followup period . in the study areas , people could choose either endoscopy or radiography as the screening method at the individual level . therefore , the results might suggest a comparison between higher - intensity and lower - intensity endoscopic screening . finally , subgroup analysis could not be adequately carried out because of the small sample size . the incidence of gastric cancer has been decreasing and an additional decrease is anticipated because of a decrease in the h. pylori infection rate.27 , 36 , 37 however , as the participation rate in gastric cancer screening has decreased , its impact on mortality reduction has become limited . although the participation rate in radiographic screening for gastric cancer has sunk below 10%,38 there is a possibility of improving the participation rate by the introduction of endoscopic screening as an option for gastric cancer screening . notably , the participation rate is reportedly approximately 25% in municipalities that have already introduced endoscopic screening.22 , 39 however , according to the change in the incidence of gastric cancer , the possibility of a new screening system should be investigated considering the risk factors for gastric cancer . in conclusion , the present study suggests that endoscopic screening can reduce mortality from gastric cancer by 67% compared with radiographic screening . the results are consistent with the mortality reduction from gastric cancer by endoscopic screening described in previous studies . although this indicates the effectiveness of endoscopic screening for gastric cancer , several limitations , including self - selection bias , remain , and prudent interpretation of the findings is needed . thus far , endoscopic screening for gastric cancer has shown promising results . endoscopic screening therefore deserves further comprehensive evaluation to reliably confirm its effectiveness and to establish how its optimal use can be strategically promoted .
to evaluate mortality reduction from gastric cancer by endoscopic screening , we undertook a population - based cohort study in which both radiographic and endoscopic screenings for gastric cancer have been carried out . the subjects were selected from the participants of gastric cancer screening in two cities in japan , tottori and yonago , from 2007 to 2008 . the subjects were defined as participants aged 40 - 79 years who had no gastric cancer screening in the previous year . followup of mortality was continued from the date of the first screening to the date of death or up to december 31 , 2013 . a cox proportional hazards model was used to estimate the relative risk ( rr ) of gastric cancer incidence , gastric cancer death , all cancer deaths except gastric cancer death , and allcauses death except gastric cancer death . the number of subjects selected for endoscopic screening was 9950 and that for radiographic screening was 4324 . the subjects screened by endoscopy showed a 67% reduction in gastric cancer mortality compared with the subjects screened by radiography ( adjusted rr by sex , age group , and resident city = 0.327 ; 95% confidence interval [ ci ] , 0.118 - 0.908 ) . the adjusted rr of endoscopic screening was 0.968 ( 95%ci , 0.675 - 1.387 ) for all cancer deaths except gastric cancer death , and 0.929 ( 95%ci , 0.740 - 1.168 ) for allcauses death except gastric cancer death . this study indicates that endoscopic screening can reduce gastric cancer mortality by 67% compared with radiographic screening . this is consistent with previous studies showing that endoscopic screening reduces gastric cancer mortality .
electronic transport through quantum ring structures has become the subject of active research during recent years . interesting quantum interference phenomena have been predicted and measured in these mesoscopic systems in the presence of a magnetic flux , such as the aharonov - bohm oscillations in the conductance , persistent currents@xcite and fano antiresonances @xcite . recently , there has been much interest in understanding the manner in which the unique properties of nanostructures may be exploited in spintronic devices , which utilize the spin degree of freedom of the electron as the basis of their operation @xcite . a natural feature of these devices is the direct connection between their conductance and their quantum - mechanical transmission properties , which may allow their use as an all - electrical means for generating and detecting spin polarized distributions of carriers . for instance , recently son et al . @xcite described how a spin filter may be realized in an open quantum - dot system , by exploiting the fano resonances that occur in their transmission . in a quantum dot in which the spin degeneracy of carriers is lifted , they showed that the fano effect may be used as an effective means to generate spin polarization of transmitted carriers , and that electrical detection of the resulting polarization should be possible . this idea was extended to side - attached quantum rings . in ref . ( ) shelykh et al . analyze the conductance of an aharonov - bohm ( ab ) one - dimensional quantum ring touching a quantum wire . they found that the period of the ab oscillations strongly depends on the chemical potential and the rashba coupling parameter . the dependence of the conductance on the carrier 's energy reveals the fano antiresonances . on the other hand , bruder et al . @xcite introduce a spin filter based on spin - resolved fano resonances due to spin - split levels in a quantum ring side coupled to a quantum wire . spin - orbit coupling inside the quantum ring , together with external magnetic fields , induces spin splitting , and the fano resonances due to the spin - split levels result in perfect or considerable suppression of the transport of either spin direction . they found that the coulomb interaction in the quantum ring enhances the spin - filter operation by widening the separation between dips in the conductance for different spins and by allowing perfect blocking for one spin direction and perfect transmission for the other . in this paper we study two rings side - coupled to a quantum wire in the presence of a magnetic flux and rashba spin - orbit interaction , as shown schematically in fig . [ fig1 ] . in a previous paper ( ref ) we investigate the conductance and the persistent current of two mesoscopic quantum rings attached to a perfect quantum wire in the presence of a magnetic field . we show that the system develops an oscillating band with resonances ( perfect transmission ) and antiresonances ( perfect reflection ) . in addition , we found persistent current magnification due to the dicke effect in the rings when the magnetic flux difference is an integer number of the quantum of flux . the dicke effect in optics takes place in the spontaneous emission of a pair of atoms radiating a photon with a wavelength much larger than the separation between them . @xcite the luminescence spectrum is characterized by a narrow and a broad peak , associated with long and short - lived states , respectively . 
now , we show that by using the fano and dicke effects this system can be used as an efficient spin - filter even for small spin orbit interaction and small values of magnetic flux . we find that the spin - polarization dependence for this system is much more sensitive to magnetic flux and spin - orbit interaction than the case with only one ring side - coupled to the quantum wire . in the presence of the rashba spin - orbit coupling and a magnetic flux @xmath0 , the hamiltonian for an isolated one - dimensional ring reads @xcite @xmath1 , where @xmath2 , and @xmath3 , @xmath4 and @xmath5 are the pauli matrices . the parameter @xmath6 and @xmath7 is the frequency associated with the so coupling . the spin - orbit coupling constant @xmath8 depends implicitly on the strength of the surface electric field @xcite . the energy spectrum of the above hamiltonian is given by @xmath9 , where @xmath10 and @xmath11 , with @xmath12 the aharonov - bohm phase . the eigenstates are given by the following wave functions , @xmath13 and @xmath14 . the second quantization form of the quantum wire - quantum ring device with a magnetic flux and spin - orbit interaction can be written as @xmath15 , where the operator @xmath16 creates an electron at site @xmath17 of the wire with spin index @xmath18 , and @xmath19 creates an electron in the level @xmath20 of the ring @xmath21 with spin index @xmath22 . the wire site - energy is assumed equal to zero and the hopping energies for wire and rings are taken to be equal to @xmath23 , whereas @xmath24 couples both systems . within the described model the conductance can be calculated by means of a dyson equation for the green 's function , @xmath25 , where @xmath26 and @xmath27 , and @xmath28 , where @xmath29 is the green 's function of the isolated ring @xmath30 . the conductance of the system can be calculated using the landauer formula , @xmath31 , where @xmath32 is the transmission probability . in the linear response approach it can be written in terms of the green 's function of the contact as @xmath33 = \frac{1}{1+\gamma ^{2}\left [ \sum_{\beta } a_{\mu } ^{\beta } ( \omega ) \right ] ^{2}} , where @xmath34 . following ref . , we introduce the weighted spin polarization as @xmath35 . notice that this definition takes into account not only the relative fraction of one of the spins , but also the contribution of those spins to the electric current . in other words , we will require not only the first term of the right - hand side of ( [ wsp ] ) to be of order unity , but also the transmission probability @xmath36 . in what follows we present results for the conductance and spin polarization for a double ring system of radius @xmath37 , coupled to each other through a quantum wire . for this radius the energy @xmath38 . we consider only energies near the center of the band ; therefore we take the tunneling coupling to be constant . then we set the tunneling coupling @xmath39 . by using the results given in [ ] , @xmath40 can be evaluated analytically as @xmath41 , where @xmath42 is the net phase for the @xmath21-ring . 
then , we can obtain an analytical expression for the conductance ( eq . [ conduc1 ] ) , which takes the form @xmath43 ^{2 } \big/ \left\ { \left [ \left ( \cos ( 2\pi \phi _ { \mu } ^{u})-\cos ( z)\right ) \left ( \cos ( 2\pi \phi _ { \mu } ^{d})-\cos ( z)\right ) \right ] ^{2}+\beta ^{2}\left [ \cos ( 2\pi \phi _ { \mu } ^{u})+\cos ( 2\pi \phi _ { \mu } ^{d})-2\cos ( z)\right ] ^{2}\right\ } , with @xmath44 . an interesting situation appears when the energy spectrum of both rings becomes degenerate . this occurs when the magnetic fluxes threading the rings are equal ( @xmath45 ) . for this case we obtain @xmath46 . the spin - dependent conductance vanishes when @xmath47 , i.e. , when @xmath48 . the zeroes in the conductance ( fano antiresonances ) represent exactly the superposition of the spectrum of isolated rings . in fact , the conductance can be written as a superposition of symmetric fano line - shapes , @xmath49 , where @xmath50 is the detuning parameter and @xmath51 is the fano parameter , in this case @xmath52 . [ ... , red line ( @xmath53 ) ) for @xmath54 , @xmath55 . ] figure 2 displays the spin - dependent linear conductance ( upper layers ) and spin polarization ( lower layers ) versus the fermi energy for the symmetric case with @xmath56 and a spin - orbit coupling @xmath57 . the energy spectrum consists of a superposition of quasi - bound states reminiscent of the corresponding localized spectrum of the isolated rings . as expected from the analytical expression ( eq . [ conduc1 ] ) the linear conductance displays a series of resonances and fano antiresonances as a function of the fermi energy . on the other hand , for the given set of parameters the system shows zones of high polarization due to the splitting of the spin energy states . [ ... , red line @xmath53 ) for @xmath58 , @xmath59 and @xmath60 . ] now we analyze the asymmetric case , i.e. , @xmath61 . figure 3 displays the spin - dependent linear conductance ( upper layers ) and spin polarization ( lower layers ) versus the fermi energy for a spin - orbit coupling @xmath62 and parameters of magnetic flux given by @xmath63 . again , the zeroes in the conductance represent exactly the superposition of the spectrum of each isolated ring @xmath64 . in fact , now the conductance vanishes when @xmath65 or @xmath66 , i.e. , when @xmath67 . notice that due to the difference between both fluxes new resonances in the conductance appear . this also affects the structure of the polarization . we note that when there is a magnetic flux difference @xmath68 high spin polarization can be obtained even for small values of the spin - orbit coupling . in fact , for small values of the spin - orbit coupling , maxima of polarization are reached by adjusting the magnetic flux difference @xmath69 . we analyze this situation in detail . the maxima of the conductance are obtained when @xmath70 or when @xmath71 . the first condition is spin - independent and it is not interesting in this case . the second condition is spin - dependent and for a small magnetic flux difference can be written as @xmath72 - \cos ( z ) \approx 0 . this occurs for the energies given by @xmath73 , where @xmath74 , i.e. , the positions of the maxima of the conductance correspond to the spectrum of an effective ring with phase @xmath75 . therefore the condition for the maxima of polarization is met when the minima of the conductance for one spin state coincide with the maxima of the conductance of the opposite spin ( or vice versa ) , that is @xmath76 and then @xmath77 . 
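a simple numerical illustration of this spin - filtering mechanism is sketched below . the transmission used in the sketch has the fano - like form implied by the antiresonance and resonance conditions just discussed ( zeros when a ring phase matches the energy variable , maxima when the average phase does ) ; the overall conductance prefactor is omitted , the parameter values are arbitrary , the rashba splitting is modelled as a small spin - dependent phase shift , and the simple ( unweighted ) polarization is plotted instead of the weighted definition of eq . ( [ wsp ] ) . it is therefore an illustrative assumption , not the exact expressions of the paper .

```python
# Numerical sketch of a spin-resolved, Fano-like transmission of the form implied
# by the antiresonance/resonance conditions discussed above. The coupling beta,
# the flux values, and the modelling of the Rashba term as a spin-dependent
# phase shift are illustrative assumptions.
import numpy as np
import matplotlib.pyplot as plt

def transmission(z, phi_u, phi_d, beta):
    """T = D^2 / (D^2 + beta^2 N^2), with D the product of the two ring detunings
    and N their sum; T vanishes when cos(2*pi*phi) = cos(z) for either ring."""
    cu, cd, cz = np.cos(2*np.pi*phi_u), np.cos(2*np.pi*phi_d), np.cos(z)
    D = (cu - cz) * (cd - cz)
    N = cu + cd - 2*cz
    return D**2 / (D**2 + beta**2 * N**2)

z = np.linspace(-np.pi, np.pi, 4000)   # dimensionless energy variable
beta = 0.3                             # effective wire-ring coupling (assumed)
flux_u, flux_d = 0.10, 0.15            # AB phases of the two rings (assumed)
so_shift = 0.02                        # spin-dependent phase shift from Rashba SO (assumed)

T_up = transmission(z, flux_u + so_shift, flux_d + so_shift, beta)
T_dn = transmission(z, flux_u - so_shift, flux_d - so_shift, beta)

# Simple (unweighted) polarization of the transmitted current.
P = (T_up - T_dn) / (T_up + T_dn + 1e-12)

plt.plot(z, T_up, label="spin up")
plt.plot(z, T_dn, label="spin down")
plt.plot(z, P, label="polarization")
plt.xlabel("z"); plt.legend(); plt.show()
```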
then , for a given spin - orbit coupling , the maxima of the spin polarization are reached by adjusting the magnetic flux difference between the upper and lower rings . fig.[fig4 ] displays the spin dependent conductance ( upper layer ) and the spin polarization ( lower layer ) for @xmath78 , @xmath79 and @xmath80 . the conductance shows broad and sharp peaks and the spin polarization shows a series of peaks of maximum polarization . fig.[fig5 ] displays a zoom of the conductance ( right panel ) and the polarization ( left panel ) as a function of the fermi energy . clearly the sharp peaks and fano antiresonances for the two spin states are shifted , giving origin to the peaks of maximum polarization . for comparison we plot the corresponding conductance and polarization for one ring for the same values of the magnetic flux and spin orbit coupling ( fig . [ fig6 ] ) . for these parameters the spin polarization of one ring is very low for both spin states . the inset in fig.6 ( lower panel ) shows a zoom of the spin polarization . [ ... , red line @xmath53 ) for @xmath81 , @xmath82 and @xmath83 . ] [ ... , red line @xmath53 ) for @xmath81 , @xmath82 and @xmath83 . ] [ ... , red line @xmath53 ) for @xmath54 , @xmath84 . ] for small values of the magnetic flux difference @xmath69 the conductance of the two ring system can be written approximately as a superposition of a broad fano line shape and a narrow breit - wigner line shape . this is @xmath85 , where the width @xmath86 and @xmath87 . as we discuss in a previous paper@xcite , this expression clearly shows the superposition of short - and long - lived states developed in the rings . the appearance of quasi - bound states in the spectrum of the system is a consequence of the mixing of the levels of both rings which are coupled indirectly through the continuum of states in the wire . a similar effect was discussed recently in a system with a ring coupled to a reservoir by wunsch et al . in ref . they relate this kind of collective state to the dicke effect in optics . the dicke effect in optics takes place in the spontaneous emission of a pair of atoms radiating a photon with a wavelength much larger than the separation between them . @xcite the luminescence spectrum is characterized by a narrow and a broad peak , associated with long and short - lived states , respectively . this feature allows one to obtain high spin polarization even for small spin - orbit coupling by adjusting the magnetic flux difference @xmath69 . high spin polarization holds even for small values of the magnetic flux . for instance , fig . [ fig7 ] displays the conductance and spin polarization as a function of the fermi energy for @xmath88 , @xmath89 and @xmath90 . the spin - polarization shows sharp peaks for the two spin states . as a comparison with a single ring side - coupled to a quantum wire , the system composed of two rings allows us to obtain high spin polarization even for small spin - orbit interaction and small magnetic fluxes , keeping a small difference for these fluxes . [ ... , red line @xmath53 ) for @xmath91 , @xmath92 and @xmath93 . ] we have investigated the spin dependent conductance and spin polarization in a system of two side quantum rings attached to a quantum wire in the presence of magnetic fluxes threading the rings and rashba spin - orbit interaction . we show that by using the fano and dicke effects this system can be used as an efficient spin - filter . 
we compare the spin - dependent polarization of this design and the polarization obtained with one ring side - coupled to a quantum wire . as a main result , we find better spin polarization capabilities as compared to the one ring design . we find that the spin - polarization dependence for this system is much more sensitive to magnetic flux and spin - orbit interaction than the case with only one ring side - coupled to the quantum wire . this behavior is interesting not only from a theoretical point of view , but also for its potential technological applications . p. a. o. and m. p. would like to acknowledge financial support from conicyt / programa bicentenario de ciencia y tecnologia ( cenava , grant act27 ) .
the electronic transport in a system of two quantum rings side - coupled to a quantum wire is studied via a single - band tunneling tight - binding hamiltonian . we derive analytical expressions for the conductance and spin polarization when the rings are threaded by magnetic fluxes in the presence of rashba spin - orbit interaction . we show that by using the fano and dicke effects this system can be used as an efficient spin - filter even for small spin orbit interaction and small values of magnetic flux . we compare the spin - dependent polarization of this design and the polarization obtained with one ring side - coupled to a quantum wire . as a main result , we find better spin polarization capabilities as compared to the one ring design .
SECTION 1. SHORT TITLE. This Act may be cited as the ``Profiting from Access to Computer Technology (PACT) Act'' or the ``Child PACT Act''. SEC. 2. PROTECTION OF EDUCATIONALLY USEFUL FEDERAL EQUIPMENT. Each Federal agency shall, to the extent practicable, protect and safeguard educationally useful Federal equipment that has been determined to be surplus, so that such equipment may be transferred under this Act. SEC. 3. EFFICIENT TRANSFER OF EDUCATIONALLY USEFUL FEDERAL EQUIPMENT. (a) Transfer of Equipment to GSA.--Each Federal agency, to the extent permitted by law and where appropriate, shall-- (1) identify educationally useful Federal equipment that it no longer needs or such equipment that has been declared surplus in accordance with section 549 of title 40, United States Code; (2) erase any hard drive, before transfer under paragraph (3), in accordance with standards in effect under the Department of Defense Industrial Security Program (Directive 5220.22 or successor authority); and (3)(A) transfer the equipment to the Administrator of General Services for conveyance to educational recipients; or (B) transfer the equipment directly to-- (i) an educational recipient, through an arrangement made by the Administrator of General Services under subsection (b); or (ii) a nonprofit refurbisher under subsection (d). (b) Advance Reporting of Equipment to GSA.--Each Federal agency shall report to the Administrator of General Services the anticipated availability of educationally useful Federal equipment as far as possible in advance of the date the equipment is to become surplus, so that the Administrator may attempt to arrange for the direct transfer from the donating agency to educational recipients. (c) Preference.--In carrying out conveyances to educational recipients under this Act, the Administrator of General Services shall, to the extent practicable, give particular preference to educational recipients located in an enterprise community, empowerment zone, or renewal community designated under section 1391, 1400, or 1400E of the Internal Revenue Code of 1986. (d) Refurbishment of Non-Classroom-Usable Equipment.--At the request of an educational recipient, educationally useful Federal equipment that is not classroom-usable shall be conveyed initially to a nonprofit refurbisher for upgrade before transfer to the educational recipient. (e) Lowest Cost.--All transfers to educational recipients shall be made at the lowest cost to the recipient permitted by law. (f) Notice of Availability of Equipment.--The Administrator of General Services shall provide notice of the anticipated availability of educationally useful Federal equipment (including non-classroom- usable equipment) to educational recipients by all practicable means, including the Internet, newspapers, and community announcements. (g) Facilitation by Regional Federal Executive Boards.--The regional Federal Executive Boards (as that term is used in part 960 of title 5, Code of Federal Regulations) shall help facilitate the transfer of educationally useful Federal equipment from the agencies they represent to recipients eligible under this Act. SEC. 4. AGENCY TECHNICAL ASSISTANCE. 
Each Federal agency with employees who have computer expertise shall, to the extent permitted by law and in accordance with any guidelines prescribed by the Director of the Office of Personnel Management, encourage those employees-- (1) to help connect classrooms in schools to the Nation's information infrastructure; (2) to assist teachers in schools in learning to use computers to teach; and (3) to assist in providing ongoing maintenance of, and technical support for, educationally useful Federal equipment transferred to educational recipients under this Act. SEC. 5. RULEMAKING. The Administrator of General Services shall prescribe rules and procedures to carry out this Act. SEC. 6. EFFECT ON OTHER LAWS. This Act supersedes Executive Order No. 12999 of April 17, 1996. SEC. 7. RULE OF CONSTRUCTION. This Act may not be construed to create any right or benefit, substantive or procedural, enforceable at law by a party against the United States, its agencies, officers, or employees. SEC. 8. DEFINITIONS. In this Act: (1) The term ``Federal agency'' means an Executive department or an Executive agency (as such terms are defined in chapter 1 of title 5, United States Code). (2) The term ``educational recipient'' means a school or a community-based educational organization. (3) The term ``school'' includes a prekindergarten program (as that term is used in the Elementary and Secondary Education Act of 1965), an elementary school, a secondary school, and a local educational agency (as those terms are defined in section 9101 of that Act). (4) The term ``community-based educational organization'' means a nonprofit entity that-- (A) is engaged in collaborative projects with schools or the primary focus of which is education; and (B) qualifies as a nonprofit educational institution or organization for purposes of section 549(c)(3) of title 40, United States Code. (5) The term ``educationally useful Federal equipment'' means computers and related peripheral tools (such as computer printers, modems, routers, and servers), including telecommunications and research equipment, that are appropriate for use by an educational recipient. The term also includes computer software, where the transfer of a license is permitted. (6) The term ``classroom-usable'', with respect to educationally useful Federal equipment, means such equipment that does not require an upgrade of hardware or software in order to be used by an educational recipient without being first transferred under section 3(d) to a nonprofit refurbisher for such an upgrade. (7) The term ``nonprofit refurbisher'' means an organization that-- (A) is exempt from income taxes under section 501(c) of the Internal Revenue Code of 1986; and (B) upgrades educationally useful Federal equipment that is not classroom-usable at no cost or low cost to the ultimate recipient school or community-based educational organization.
Profiting from Access to Computer Technology (PACT) Act - Child PACT Act - Directs each Federal agency to: (1) safeguard and identify educationally useful Federal equipment that it no longer needs or that has been declared surplus; (2) transfer such equipment, either directly or through the General Services Administration (GSA), to educational recipients or nonprofit refurbishers; and (3) encourage employees with computer expertise to assist in providing maintenance and technical support for the recipients of such equipment, connecting school classrooms to the Internet, and helping teachers to learn to use computers to teach.
first termed by bell in 1970 , cruciate paralysis is a rare neurological disease of the cervicomedullary junction . cruciate paralysis often presents with bilateral paresis of the upper extremities while sparing the lower extremities . patients may also present with difficulty breathing , cranial nerve deficits , or a comatose state . trauma is the most common cause of cruciate paralysis , and the leading mechanistic hypothesis involves disruption of the anatomy of the pyramidal decussation at the cervicomedullary junction . the anatomical decussation extends longitudinally , spanning from the cervicomedullary junction to the c-2 level . within this region , the motor tract fibers of the upper extremities cross both ventrally and superiorly to the fibers supplying the lower extremities . by crossing at a spatially different location , the independent upper extremity fibers provide a way for lesions to preferentially damage upper extremity fibers while sparing those of the lower extremities . however , cruciate paralysis is a rare condition with few reported studies ; hence , treatments have been variable and are often without supportive evidence . in this report , we conducted a systematic literature review from 1966 to the present of patients diagnosed with cruciate paralysis to identify potential prognostic predictors for the outcome . using medline and pubmed central , a comprehensive search for all papers under the mesh and keyword term cruciate paralysis was performed . additional information and cases were obtained through google and google scholar , and an appropriate search of relevant sources was performed using the same keywords . a case was included if it met the following criteria : ( 1 ) the paper under review demonstrated appropriate signs and symptoms of cruciate paralysis as defined above ; ( 2 ) a mechanism of injury was noted ; ( 3 ) the type of intervention and treatment was noted ; and ( 4 ) a follow - up examination was documented . cases with patients presenting in a comatose state were excluded along with papers written in languages other than english . our study focused on patients who were noncomatose and carried the diagnosis of cruciate paralysis , because different states of coma may affect appropriate examination of the upper and lower extremities and hence the diagnosis . of the 39 cases initially found , 37 met our criteria [ table 1 ] . follow - up results were classified into three categories of recovery : insignificant recovery , moderate recovery , or full recovery . a case was considered to have made a full recovery if , at the time of the last documented follow - up , upper extremity neurologic deficits had completely resolved . a case was considered to have made a moderate recovery if , at the time of the last follow - up , symptomatic improvement was documented , but residual upper extremity neurologic deficits still remained . finally , a case was considered to have made an insignificant recovery if , at the time of the last follow - up , there was little to no change in upper extremity neurologic deficits since the time of initial presentation . each category was assigned a numerical score of 1 , 2 , or 3 , respectively , for simplicity of analysis . treatments were further characterized into two groups : those who underwent surgical intervention and those who did not .
each of the 36 cases was analyzed for outcome trends based on cause ( trauma or nontrauma related ) , and an appropriate anova test was run using the mean numerical scores of each category [ table 2 ] . the 28 cases associated with trauma were further analyzed , and respective anova tests were run to determine trends and associations of outcomes categorized by age , gender , and type of intervention [ table 3 ] . [ table 1 : clinical studies investigating the management of cruciate paralysis . table 2 : percentage of cases making a full recovery , moderate recovery , or insignificant recovery , by cause of symptoms . table 3 : percentage of trauma cases ( 29 patients ) making a full recovery , moderate recovery , or insignificant recovery , by age , gender , and type of corrective intervention . ] in patients who carried the diagnosis of cruciate paralysis and who were not comatose , the overall reported outcome was favorable , with 54% of patients achieving full recovery and 29.7% of patients achieving moderate recovery . the overall outcomes associated with cruciate paralysis secondary to trauma did not differ significantly from other nontraumatic causes , p = 0.5 [ table 2 ] . since the majority of cases of cruciate paralysis were traumatic ( 29 patients , 78.4% ) , we analyzed factors that might impact outcomes of traumatic cruciate paralysis [ table 3 ] . patients over the age of 60 years showed significantly worse outcomes as compared to those under the age of 60 , p < 0.001 . similarly , patients in both the 0 - 20 and the 20 - 40 age ranges had statistically better outcomes when compared to the rest of the cohort , p = 0.02 and p = 0.02 , respectively . male patients also seemed to have slightly better outcomes on average than female patients , p = 0.08 . finally , patients treated without surgical intervention had better prognoses than those treated surgically , but this did not reach statistical significance , p = 0.08 [ table 3 ] . we included the details of the patient with traumatic cruciate paralysis who was treated at our institution in figures 1 - 3 . her neurological examination showed a motor strength of 1/5 in the upper extremities and 3/5 in the lower extremities . [ figure : sagittal t2-weighted sequence magnetic resonance imaging of the cervical spine demonstrating a type iii odontoid fracture with posterior subluxation causing compression of the cervicomedullary junction with upper cervical spinal cord signal change . ] the patient was placed in crown halo traction and her fracture fragment was reduced , as demonstrated by the lateral cervical spine x - ray ( a ) . the patient then was placed in a crown halo vest , and magnetic resonance imaging of the cervical spine was done , revealing reduction and realignment with decompression at the cervicomedullary junction , as demonstrated with a sagittal t2-weighted sequence ( b ) . the patient underwent tracheostomy and percutaneous endoscopic gastrostomy tube placement . her neurological examination continued to improve . ultimately , the tracheostomy and the percutaneous endoscopic gastrostomy tubes were removed . she was kept in a crown halo vest for 6 weeks , followed by 6 weeks of rigid collar placement . during her 6-month follow - up visit [ ... ] . cruciate paralysis resembles central cord syndrome of the subaxial cervical spine , in that it usually affects the upper more than the lower extremities ; however , since it is localized to the upper cervical spine , it is also associated with various degrees of lower cranial nerve palsies and at times states of coma .
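As a brief aside for readers who want to reproduce the analysis above, the following is a minimal sketch, in Python, of the outcome coding and the one-way ANOVA across causes. The score lists are hypothetical placeholders for illustration only (the actual per-case data live in the cited tables), and the coding full = 1, moderate = 2, insignificant = 3 is one reading of "respectively" in the text.

```python
# Sketch of the outcome scoring and ANOVA comparison described in the review.
# The score lists below are hypothetical placeholders, not the study's data.
from scipy import stats

FULL, MODERATE, INSIGNIFICANT = 1, 2, 3  # assumed coding of the three categories

# hypothetical outcome scores grouped by cause of cruciate paralysis
trauma_scores = [FULL, FULL, MODERATE, INSIGNIFICANT, FULL, MODERATE]
nontrauma_scores = [FULL, MODERATE, FULL, FULL]

# one-way ANOVA on the numerical scores of each group
# (with only two groups this is equivalent to an unpaired t-test)
f_stat, p_value = stats.f_oneway(trauma_scores, nontrauma_scores)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```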
our review demonstrated that most reported cases ( 78.4% ) are traumatic in nature . overall , in the absence of coma , the outcome following this injury is favorable , with 54% of patients achieving full recovery and 29.7% of patients achieving moderate recovery . patients who were older than 60 years had a worse outcome than younger patients suffering from traumatic cruciate paralysis . while no concrete treatment recommendations have been suggested in the literature , due to the presence of a neurological deficit , severe cruciate paralysis has traditionally warranted surgical intervention . our review , however , showed that patients who were treated nonsurgically may have better outcomes ( p = 0.08 ) . this may reflect that patients who warrant surgical intervention may be sicker due to other associated injuries or suffer from a biomechanically unstable fracture requiring surgical intervention . similarly , dickman et al . recommended that surgical intervention only be used for patients having more severe fractures with associated ligamentous instability of the atlantoaxial complex . our study has a few limitations : the cohort is small , since our search focused only on papers that included cruciate paralysis as a keyword ; hence , some papers that may have included patients with cruciate paralysis secondary to atlantooccipital dissociation and combined atlas and axis fractures were not included . moreover , patients who were in a comatose state were excluded as well , since it would be hard to ascribe the coma to an intracranial versus an upper cervical spine injury . while numerous cases of trauma - associated cruciate paralysis have been reported in the literature , there remain insufficient data to make any sound conclusion concerning whether or not surgical intervention is always the best method of treatment .
objective : cruciate paralysis is a rare , poorly understood condition of the upper craniovertebral junction that allows for selective paralysis of the upper extremities while sparing the lower extremities . reported cases are few and best treatment practices remain up for debate . the purpose of this study was to conduct a systematic literature review in an attempt to identify prognostic predictors and outcome trends associated with cases previously reported in the literature . materials and methods : we conducted a systematic literature review for all cases using the term cruciate paralysis , reviewing a total of 37 reported cases . all outcomes were assigned a numerical value based on examination at the last follow - up . these numerical values were further analyzed and tested for statistical significance . results : of the 37 cases , 78.4% were of traumatic causes . of these , there were considerably worse outcomes associated with patients over the age of 65 years ( p < 0.001 ) . those patients undergoing surgical treatment showed potentially worse outcomes , with a p value approaching significance at p = 0.08 . conclusion : numerous cases of trauma - associated cruciate paralysis have been reported in the literature ; however , there remains a strong need for further study of the condition . while certain risk factors can be elicited from currently reported studies , insufficient data exist to make any sound conclusion concerning whether surgical intervention is always the best method of treatment .
J. Gerald Smith's dinner was on the stove waiting for him to get home Wednesday around 4:30, as the 82-year-old retired real estate agent sat at a stop sign at Northeast First Street in his Buick SUV. Roger Wittenberns, 60, the multimillionaire founder of the Lady of America fitness club chain, and his longtime girlfriend Patty Ann McQuiggin, 61, spent the afternoon eating and drinking in downtown Delray Beach. They left in separate yellow luxury cars — he in a Lamborghini, she in a Porsche. They were speeding on Federal Highway, police said. As Smith headed west to cross Federal Highway, the Lamborghini crashed into his Buick Enclave, killing him, police said. Speed and alcohol are both being considered as factors in the crash, said Delray Beach Police spokeswoman Dani Moschella. Police said it was Wittenberns who told them he had been drinking in the afternoon. He suffered critical injuries in the crash and remains hospitalized at Delray Medical Center. Police haven't determined exactly how fast Wittenberns was driving. No one has been charged in the crash, but police say the investigation continues. A Lamborghini collided with a Buick SUV on Sept. 21, 2016, in Delray Beach, killing the Buick driver, police say. Newly released documents and photos, provided by Palm Beach County prosecutors, give a more detailed look at the collision that has resulted in charges against the Lamborghini driver, Roger Wittenberns. A little extra money A woman who answered the door at Smith's Boynton Beach home Thursday said lawyers had advised the family not to speak with the media. Smith's wife told police he usually came home around dinner time. "It's heartbreaking," Moschella said. "He was 82 years old, trying to make a little extra money driving an Uber." Before becoming an Uber driver, Smith was a real estate agent in Boynton Beach. His former boss, Scott Mcvey, said Smith had been selling real estate for more than a decade when the two teamed up in 2008 at Blue Water Realty. The company closed in 2010, but in those two years, Mcvey said he and Smith became "quite good friends." "He was a gentleman all around — just a wonderful individual," Mcvey said. "He was very happy. He was just a super guy always with a smile on his face, always very pleasant." Semi-retired, Smith really didn't need to work, but he did because he loved helping people, Mcvey said. "He was a real people person, and he loved taking people around and getting them situated in a new home." Luxury vehicle drivers McQuiggin left the crash scene and parked her Porsche in the parking lot of Anthony's Coal Fired Pizza, a restaurant nearby. Police caught up with her later. It was unclear how she got home to Vista Del Mar, police said. Attorney Mitchell Beers said he is representing both Wittenberns and McQuiggin. "To comment on the releases by the Delray Beach police department at this point doesn't serve anything," he said. "From all the protests that are going on right now, I think your readers understand that you cannot take everything the police release at face value." Beers said he had seen Wittenberns at the hospital. "He does have serious injuries and he is going to need some hospitalization and treatment," he said. Wittenberns is an avid fan of exotic cars, his Facebook page shows. It includes photographs of him attending specialty car shows all over the country. One photograph shows him posing with comedian and former "The Tonight Show" host Jay Leno, a car aficionado.
He retired from the Lady of America fitness club chain in 2005. He now runs Diversified Health & Fitness, a Fort Lauderdale-based holding company for health club franchises Zoo Health Clubs, American Body Works, Fit For Her, FitZone For Women, Access Fitness and Sedona, according to its website. Wittenberns moved to a $2 million home in Delray Beach in 2014 after selling his Fort Lauderdale mansion — on Harborage Isle — for $6.5 million. Wittenberns also was a neighbor and victim of convicted Ponzi schemer Scott Rothstein, losing about $300,000 in the $1.4 billion fraud. Wittenberns said in newspaper interviews at the time that he had checked out the investment pitch from Rothstein and uncovered no red flags. McQuiggin's Facebook page is packed with animal videos and dog pictures. It also includes photos of her at a car show in Monterey, Calif., and of her driving a bright red Ferrari while on a trip to Santa Monica. Both Wittenberns and McQuiggin had their licenses suspended for driving under the influence of alcohol in the past, according to Florida's Department of Highway Safety and Motor Vehicles. Authorities said he had a blood-alcohol level of 0.15 on Nov. 29, 1999, in Broward County. Florida law considers a driver under the influence with a level of 0.08. Wittenberns' license was suspended for six months, according to state records. McQuiggin's license was suspended for six months for a June 14, 2003 incident in Broward County when officials said she had a blood-alcohol level of 0.08 or more, according to state records. Dangerous stretch Investigators on Thursday were reviewing surveillance video from a business near the crash site, Moschella said. Glyn Moulton, 68, who lives around the corner from where the crash happened, said he walked up Wednesday to find the area blocked off with police tape — the heavily damaged SUV and Lamborghini still on the road. "I saw the yellow Porsche parked [at Anthony's], and the two cars in the middle of the street were just demolished," Moulton said. "The Lamborghini was totally disintegrated. The SUV was all caved in and smashed in on the driver's side." He said he has seen numerous crashes on the road in his 13 years of living in the neighborhood. "That's a dangerous intersection right there," he said, pointing to where the crash happened. Moschella, the police spokeswoman, said the Lamborghini was in the left lane on Federal Highway, and the Porsche was in the right lane "just next to, and behind the Lamborghini." As they headed north on Federal, in an area with a posted limit of 35 mph, "they're traveling at a very high rate of speed," Moschella said. Residents say the straight stretch of road without stops makes the area ideal for speeding. "By the time they get to Anthony's in this area, often, daily, they're hitting 70, 80 mph," Moulton said. Although the road has a 35 mph limit, Moulton said he sees many drivers ignore it. "We've been yelling for years, put a light there. It's really simple," Moulton said. "They put up red lights everywhere when they were doing the beautification project of Delray, but they didn't put one up there."
– Florida police say a health club chain mogul and his girlfriend had been drinking when they hopped into matching luxury yellow sports cars shortly before Roger Wittenberns' speeding Lamborghini plowed into an SUV, killing the 82-year-old driver last Wednesday. Wittenberns, 60, and longtime girlfriend, Patty Ann McQuiggin, 61, were allegedly speeding side by side in Delray Beach, reports the Palm Beach Post, when witnesses say Wittenberns' car struck a Buick Enclave driven by J. Gerald Smith, per WPTV. Smith, a retiree who was earning extra money driving for Uber, died soon after the crash. Wittenberns, the multimillionaire founder of the Lady of America fitness club chain, was thrown from the car but survived and was hospitalized in serious condition. McQuiggin allegedly ditched her Porsche, fled the scene, and refused to give a statement to police. But Wittenberns told police in an "extensive" interview the couple had spent the afternoon eating and drinking downtown before the crash, per WPTV. No charges have been filed yet, though police said speed and alcohol were factors, per the Post. Wittenberns often posted photos of his luxury cars on his Facebook page, which was taken down. Both Wittenberns and McQuiggin have been arrested previously for driving under the influence, he in 1999 and 2000, and she for the second time in 2003. Both had their licenses suspended after the second offense, per the Post. Meanwhile, Smith was remembered as "just a super guy always with a smile on his face," as a former colleague told the Sun Sentinel. An online fundraising campaign has raised more than $6,800 for funeral expenses. "I made him dinner and he didn't come home," his wife, Eloise Smith, told the Post.
American cheese will never die. It has too many preservatives. But it's melting away. One by one, America's food outlets are abandoning the century-old American staple. In many cases, they're replacing it with fancier cheeses. Wendy's is offering asiago. A&W's Canada locations switched to real cheddar. McDonald's is replacing the Big Mac's soft, orange square of American cheese with a version that doesn't contain artificial preservatives. Cracker Barrel ditched its old-fashioned grilled cheese. So did Panera Bread, replacing American with a four-cheese combo of fontina, cheddar, monteau and smoked gouda. The result: higher sales. American cheese is "an ingredient we're looking to less and less in our pantry," said Sara Burnett, the chain's director of wellness and food policy. Cultural Crossroads American (cheese) culture is at a crossroads. The product, made famous by the greatest generation, devoured by boomers on the go and touted as the basis for macaroni and cheese, the well-documented love object of Gen X, has met its match with millennials demanding nourishment from ingredients that are both recognizable and pronounceable. Don't rely on anecdotal evidence. The data show it, too. U.S. sales of processed cheese, including brands like Kraft Singles and Velveeta, a mainstay of delicacies such as ballpark nachos, are projected to drop 1.6 percent this year, the fourth-straight year of declines, according to Euromonitor International. The end of the affair is also evident at the Chicago Mercantile Exchange, where 500-pound barrels of cheddar -- which are used to make American cheese -- are selling at a record discount to 40-pound cheddar blocks, the cheddar that shows up on party platters. That's because demand for the cheese in the barrels has been dwindling for years, according to Alyssa Badger, director of operations at Chicago-based HighGround Dairy. American cheese isn't the point of a lot of the barrel production, Badger said. It's for the byproduct whey, a staple of pricey protein shakes. Cheese Factories Decline is also evident when looking at the manufacturing landscape. The number of U.S. cheese factories increased 40 percent between 2000 and 2017, but the growth is from small, specialty cheesemakers, said Matt Gould, editor at Dairy & Food Market Analyst Inc. Prices at the grocery store for processed American cheese have been slipping, too, dipping below $4 a pound for the first time since 2011, according to the U.S. Bureau of Labor Statistics. Gayle Voss, owner of Gayle V's Best Ever Grilled Cheese in Chicago, takes two slices of fresh-baked sourdough and fills them not with American cheese but with Wisconsin-made butterkäse cheese. It's made in small batches by farmers who know the names of their cows. It's melty and slightly stretchy, and yes, buttery. It's what people want these days, she said. "I could buy preservative-filled cheese and butter," Voss said. "But I'm all-out on supporting small businesses and offering a good, quality product, and the minute people bite into it, they know -- because it's so good." Pause here to imagine taking a bite of crunchy bread and melted cheese that forms a string as mouth and sandwich separate.
“People want to know where their food is coming from,” she said, “and my sales reflect that.” Voss said her husband will use Kraft Singles to whip up a quick sandwich for himself at home, something that cheeses her off. But it’s what he grew up with, she said. Cheese Born Most Americans did. American cheese was born at a time when utility reigned. James and Norman Kraft invented processed cheese in 1916 and sold it in tins to the U.S. military during World War I. Soldiers kept eating it when they returned home and its popularity soared. It wasn’t until 1950 that Kraft perfected the slicing. Soon after came a machine that could individually wrap the slices, and in 1965, Kraft Singles were born. Like Wonder Bread, society marveled at the uniformity of the product, the neatness of the slices, the long shelf life and its ability to stay moist even in the desert, in the middle of the summer, at noon. Ingredients include substances that sound like a chemistry set: sodium citrate, calcium phosphate, natamycin, modified food starch. And, of course, milk. Kraft Singles Though 40 percent of U.S. households buy Kraft Singles, overall sales are flat, according to Peter Cotter, general manager of cheese and dairy for Kraft Heinz Co. Kraft has a 30-person research-and-development team working on ways to get American cheese into more homes, he said, offering qualities that healthier, more natural cheeses can’t. For instance, “the melt.” “Honestly, you can’t get that in a natural cheese,” Cotter said. “It’s a very unique product. The creamy smooth texture and melt of the cheese. The natural cheeses, they just don’t melt that way.” The backlash to the backlash remains robust, with about $2.77 billion in retail sales this year, according to Euromonitor. Some trendy restaurants are going retro on their menus, hearkening back to somebody’s idea of what a classic America might’ve been like before fancy cheeses ruined the youth. After all, Homer Simpson pulled an all-nighter eating 64 slices of American cheese, not gouda, provolone or even butterkäse, which, after all, just means butter cheese in German. Burger joint Mini Mott opened this summer in Chicago’s trendy Logan Square neighborhood and when diners order a cheeseburger they’ll get American cheese and nothing else. Chef Scott Sax said he buys it in five-pound bricks from a supplier in Minnesota. “There’s nothing pure or organic about it,” Sax said. “The ingredients are very American.” ||||| This morning, Bloomberg published a report suggesting that the youth of America have turned their collective back on the cheese that bears their country’s name. American cheese has fallen out of favor with fast-food restaurants and seen its retail price drop to an eleven-year low. Millennials are not satisfied with killing cable television, mayonnaise and Hooters; they are now coming for the Kraft Single. I was raised on this shining yellowish slice on a hill, and I will defend her to my dying breath, probably a few years too early: American cheese is the best cheese. Yes, I am aware there are fancier cheeses. A wedge of d’Affinois triple-creme brie on a torn-off fistful of just-warmed pretzel bread can be life-changing. I will pair a nice moldy bleu with a crisp white wine like the snobbiest cheesemonger in town. When it's time for some charcoots, the stinkier the accompanying cheese, the better. I know my way around a prestige cheese, is what I’m saying.
But when it's time to make a sandwich, or an omelette, or—you know in your heart that this is true—a cheeseburger, there is no choice but American. The way it melts smoothly and evenly, like something made in a lab during the Eisenhower administration, but does not knot up and stretch out endlessly like a cheddar? The way that hot droplet leaks out of the first bite of your egg-and-cheese sandwich and scalds your lower lip? The way it serves as the glue that bonds your burrito contents together? The way it peeks out the top of a Jack In The Box taco? It’s reliable. It’s probably killing you. It's America. What is American cheese? I don’t know, and I don’t want to. It’s made of secret chemicals, yumminess compounds, and can-do spirit. It is the color of an El Segundo sunset, somewhere between a yellow light and a traffic cone. It is equally at home on a McDonald’s cheeseburger and one of those fancy boys they serve at Shake Shack. It tastes like a more viscous version of whatever you’ve put it on. There must exist a brick version of it somewhere, but I have only ever seen it in its single-slice, individually-wrapped format, one-eighth of one American inch thick. They pick them off the cheese tree that way, when they are good and ripe. American cheese has practical use as well. How many times has this happened to you: you and your friends order a nice big plate of nachos for the table. The nachos come, you grab a single one, and because the cheddar mix has already thickened around the chips, the entire plate comes up to your mouth. Sure, you can try to get in there with the other hand and separate them, but before you know it, you’re elbow-deep in a mess of sour cream, guacamole, and hot chip grease. Your friends are disgusted and your appetite is shot to hell. American cheese will not do you like that. Ask for it by name. It is possible that my cheese palate has been forever damaged by having grown up in the middle of this country. I’m from St. Louis, MO, where the local pizza employs a cheese-food called Provel. Provel is the wild child of provolone and Velveeta. It’s a gelatinous and controversial substance that stays with you long after you’ve eaten it. “Do I have a head cold," you will ask yourself the morning after having housed an Imo’s sausage pizza. “No,” you will remember, “that is just the Provel doing its important work.” And then you will roll over and sleep the rest of the day away. This was my foundational pizza cheese, and for that reason, I love it above all else. It tastes like home, largely because they can’t sell the shit anywhere outside of St. Louis. Provel bit me young, and now, like a much less sexy species of vampire, I roam the earth with a hunger for processed cheese. Only that day-glo square of American will do. Keep your moisture-deficient Asiago far away from me, as a burger topping. Monterey Jack, I see what you’re doing with your dried bits of yesterday’s peppers, but it’s not going to happen. Emmenthal, get back on top of my French onion soup where you belong. Gastropubs of America: I appreciate your ambitious four-cheese blends, but if you’re going to make me a grilled cheese, save your blending time and unwrap me four slices of Kraft. And yes, I will add bacon. Join me as I stand athwart history yelling “Stop,” with my mouth full. This may be a terrifying, disheartening moment for America, but her cheese is as delicious as ever.
It is available for you right now, at your local grocery chain, in a stack of 72. Go and salute it.
– American cheese has been a staple in US kitchens for decades, but Bloomberg reports that its fortunes appear to be heading south. The story also sees one big reason why: Millennials aren't interested in buying or eating the processed stuff, preferring instead natural cheeses that taste more distinctive even if they are pricier. Chains such as Wendy's, Cracker Barrel, and Panera Bread are either ditching American cheese—a champion melter—or at least offering alternatives. It's "an ingredient we're looking to less and less in our pantry," says Panera exec Sara Burnett of American cheese. The story finds even more evidence in economic stats, including one that notes US sales of processed cheese are on track to dip for the fourth consecutive year. Prices for American cheese are down, too, and the number of specialty cheesemakers is surging. A solid 40% of US households still buy Kraft Singles, but sales are flat, a company exec tells Bloomberg, whose story traces the cheese's origins back to WWI. Need an impassioned defense of the stuff? See Dave Holmes at Esquire. "What is American cheese? I don't know, and I don't want to," he writes. "It's made of secret chemicals, yumminess compounds, and can-do spirit." (Bloomberg is less romantic, ticking off ingredients such as sodium citrate, calcium phosphate, natamycin, and modified food starch.) "I appreciate your ambitious four-cheese blends," Holmes tells foodies, "but if you're going to make me a grilled cheese, save your blending time and unwrap me four slices of Kraft."
Forest Wagner is in stable condition after being attacked by a bear. (Photo by Ryan Cortes, University of Alaska Southeast) A bear mauled a professor who was leading a mountaineering class in Alaska on Monday, and the man was airlifted to the hospital after one of his students hurried down the mountain to find cell reception to call for help. Forest Wagner, 35, an assistant professor of outdoor studies at the University of Alaska Southeast, is in stable condition in intensive care at a hospital in Anchorage, according to a university spokeswoman. Wagner was on skis leading 9 students and two teaching assistants in an area between Mount Emmerich and the Chilkat River near Haines, on Alaska’s panhandle, when Wagner was attacked by a brown bear. The initial report to police was that Wagner suffered extensive injuries to his leg. State police contacted a helicopter company to see if heli-skiers could find the group and evaluate if a helicopter could land, according to an Alaska State Troopers report shared by a spokesman for the Alaska Department of Fish and Game. Before the helicopter left the mountain, the bear came back to the place where the rest of the hikers were, about 200 yards from the helicopter, according to that report, and a trooper hiked back to the area to provide security. The university chancellor was contacted and agreed it would be safest to evacuate the entire group; he called a helicopter company to do that. None of the students was injured; they returned to Juneau on Tuesday by ferry as originally planned. Haines, 92 miles north of Juneau, can only be reached from the capital city by air or sea. University of Alaska Southeast professor Kevin Krein said through a university spokeswoman that all the students are doing well. “Forest, the teaching assistants, and the students were great in the situation,” Krein said. “They applied their medical and wilderness training, worked together, and responded effectively. I am very proud of them.” School chancellor Rick Caulfield said in a statement: “I commend the students for their quick action in responding to this situation and appreciate the prompt response from Alaska State Troopers, Haines Police, Temsco Helicopters, and medical staff in Haines and in Juneau. Our thoughts are with faculty member Forest Wagner as he recovers from this incident, and we are thankful that all involved are safe.” Temsco Helicopters, which performed the evacuation, had no comment. Wagner has been coordinating and teaching the outdoor studies program at the university for a decade, with courses on outdoor leadership, ice climbing, backcountry navigation, rock climbing, glacier travel, crevasse rescue and mountaineering. He has led many extended expedition courses. He also works as a guide on Mount Denali and has worked internationally as a high-altitude guide. He is a graduate student in northern studies at the University of Alaska Fairbanks who, according to his faculty web page, “is most interested in human narrative, northern identity, and sense of place.” Last month, nine students and four instructors were swept into an avalanche on Canwell Glacier during a University of Alaska Fairbanks mountaineering class, according to Alaska Public Media; all were rescued, but the incident triggered questions about the safety of the mountaineering program. Earlier this month, the Alaska Department of Fish and Game warned that, after a stretch of unusually warm weather, bears were waking earlier than usual from their winter sleep. 
Grizzlies and black and brown bears had already been spotted, they announced, and in the southeast, "recent warm days have skunk cabbage and other wild greens blooming, setting the stage for bears there to start moving any day." Because of those early signs, Alaska Gov. Bill Walker proclaimed April "Bear Awareness Month," explaining that "April is a good time to remind Alaskans about bears, their behavior, and how we can live responsibly and safely in bear country." It was the second bear attack reported in Alaska in the last few days; a bear hunter was mauled by a grizzly in the interior, east of Denali National Park, over the weekend, according to the Associated Press, and is recovering. This post has been updated. ||||| JUNEAU, Alaska (AP) — The Latest on a university educator mauled by a bear in Alaska (all times local): 9 a.m. An assistant professor who was mauled by a bear while teaching a mountaineering course in southeast Alaska is in critical condition. A spokesman at Providence Alaska Medical Center in Anchorage says 35-year-old Forest Wagner is in the intensive care unit Tuesday, a day after the attack. A University of Alaska Southeast spokeswoman says Wagner was with a group of 12 students on Mount Emmerich near Haines, Alaska, when he was attacked by a sow with two cubs. No students were hurt Monday. A student hiked down the mountain to get cellphone reception and call for help. Wagner's biography says he's been coordinating and teaching in the outdoor studies program at the university's Juneau campus since 2006. He teaches rock and ice climbing, backcountry navigation, glacier travel and mountaineering. ___ 9:45 p.m. A teacher has been hospitalized after he was mauled by a bear during a mountaineering class in the Alaska Panhandle. A University of Alaska Southeast spokeswoman says Forest Wagner, 35, of Fairbanks, was with a group of 12 students on Mount Emmerich near Haines, Alaska, on Monday when he was attacked. No students were hurt. A student hiked down the mountain to get cellphone reception and call for help. The university says Wagner was taken to Providence Hospital in Anchorage. His condition was not immediately available, but the university said he was stable. Wagner has been coordinating and teaching in the outdoor studies program at the university since 2006, according to his biography. He teaches rock and ice climbing, backcountry navigation, glacier travel and mountaineering.
– A professor was mauled by a bear while teaching a mountaineering course Monday in the Alaskan wilderness, Alaska Dispatch News reports. According to the Washington Post, police say 35-year-old Forest Wagner "suffered extensive injuries to his leg" in the attack. Wagner, who's taught at the University of Alaska Southeast for a decade, was with approximately a dozen students and two teacher's assistants. The AP reports the group was confronted by a female bear with two cubs. Following the attack, one of Wagner's students hiked down Mount Emmerich until they had cell reception and could call for help. A helicopter arrived to take Wagner off the mountain. He remains hospitalized in critical condition. The bear reappeared as Wagner was being airlifted off the mountain, and a state trooper stayed behind to protect the group. With the bear remaining in the area, a second helicopter was called to remove the students, none of whom were injured. "Forest, the teaching assistants, and the students were great in the situation,” a fellow professor tells the Post. “They applied their medical and wilderness training, worked together, and responded effectively. I am very proud of them.” The safety of Alaskan mountaineering courses was already being questioned due to an incident last month in which students from the University of Alaska Fairbanks were caught in an avalanche.
today , humans face conditions that suppress the immune system directly or indirectly , such as malignant disease or many kinds of chronic diseases like chronic obstructive pulmonary disease ( copd ) . one of the most dangerous opportunistic microorganisms in such patients is a yeast - like fungus of the genus pneumocystis . p. jirovecii is one of the most important human pathogens , particularly among immunocompromised hosts . the first epidemic of pneumocystis pneumonia ( pcp ) occurred in malnourished children in european countries , and it was reported in premature infants in iranian orphanages during the second world war ( 1 ) . for the first time , an etiologic association between p. jirovecii and plasma cell interstitial pneumonia was found in malnourished children ( 2 ) . the study showed the disease was cured when life returned to normal conditions and the patients ' diet was improved . after 1954 , sporadic infection by pneumocystis was reported in countries throughout the world ( 3 ) . a long list of groups at risk of pcp was published , including premature infants , children with marasmus and malnutrition , patients with congenital or acquired immune deficiency , patients with malignancies ( pms ) under chemotherapy , recipients of organs and , finally , hiv positive patients ( 3 ) . thereafter , p. jirovecii was detected in 10 - 51% of hiv positive patients and 5.8 - 32% of non hiv positive immunosuppressed patients without any clinical symptoms using molecular assays ( 5 - 12 ) . today , a direct relationship between mortality and pcp in immunosuppressed patients has been proven ( 13 ) . therefore , patients who take immunosuppressive drugs , pms under chemotherapy , patients with transplanted organs or bone marrow , patients with collagen vascular disease and hiv positive patients are at a high risk of getting pneumocystis pneumonia ( 14 , 15 ) . generally , two types of patients are at risk of p. jirovecii : 1 ) hiv positive patients and 2 ) non - hiv positive immunocompromised patients ( 16 ) . pneumocystis pneumonia occurs in two distinct groups : the first main victims are premature and malnourished children in overcrowded orphanages or hospital settings , and the other is patients with chronic diseases such as pms who are under chemo- or radiotherapy ( 17 ) . although p. jirovecii is one of the potential opportunistic agents in hiv positive patients , it is also able to infect most non hiv positive immunodeficient patients ( 1 , 18 ) . after the application of highly active antiretroviral therapy and prophylactic treatment against pneumocystis in developed countries , the incidence rate of pcp decreased in hiv positive patients ; however , the rate remains high in developing countries and in immunocompromised patients ( 19 ) . though pcp had been reported in iranian orphanages in the 1950s ( 1 ) , there is no report about the rate of the disease in iranian immunosuppressed patients . therefore , we tried to detect dna of p. jirovecii in clinical specimens of hiv positive and hiv negative patients to find the rate of disease in iranian immunosuppressed patients . in this study we report the rate of colonization and active disease of p. jirovecii in two groups of immunosuppressed patients . one hundred and fifty five bronchoalveolar lavage samples ( bal ) were collected from 153 pms under chemotherapy treatment .
in addition , 62 sputum and 38 nasopharyngeal washing samples were collected from 89 copd patients who were referred to the national research institute of tuberculosis and lung disease ( nritld ) during the period of june 2010 to december 2011 . both groups of patients were seronegative for hiv as confirmed by elisa and western blot test . to reduce the mucus content of the samples , they were treated with 4% naoh and then neutralized with normal hcl . briefly , 200 µl pbs was added to the pellet of each sample after centrifuging at 3000 rpm for 10 minutes . then 20 µl proteinase k and 200 µl of al buffer were added and , after brief shaking , the samples were incubated at 56°c for 10 minutes . after treatment with proteinase k , 200 µl absolute alcohol was added and the suspension was transferred to a column prepared by the manufacturer . the columns were washed twice with the washing buffer provided with the kit and 100 µl of elution buffer was added . then dna was collected in a new 1.5 ml microtube after 1 minute of incubation at room temperature . the extracted dna was used for pcr directly or stored at -20°c for later use . two nested pcr assays were performed to amplify the mt lsu rrna and dhps genes of p. jirovecii ( 20 - 22 ) . to prevent contamination , a negative control ( distilled water without any dna ) was included in each step of the procedure . to confirm the accuracy of the pcr , a positive control ( sputum of an hiv / aids patient with p. jirovecii ) was included . the pcr product was run on a 1.5% agarose gel beside a 100 bp ladder to see the amplified 260 bp fragment of the mt lsu rrna gene . the work has been approved by the ethical committees of tarbiat modares university , the iranian hiv / aids research center and the mycobacteriology research center .
in total , 255 pulmonary samples were collected from 153 pms and 89 copd patients to detect dna of p. jirovecii by a nested pcr assay . thirty eight nasopharyngeal washing and 62 sputum samples were collected from the copd patients hospitalized in the nritld because of pulmonary infection . the mean age of these patients was 66.5 years and all of them were men . 35% of the patients suffered from high - stage copd ( stage iii or iv ) . all of the patients were confirmed to be hiv seronegative by elisa and western blot test . cough was the most common symptom in the studied patients ; 86% of them suffered from shortness of breath and 59% had sputum with cough . on lung auscultation , wheezing was heard in 73% of patients . spirometry showed that 53 patients had an fev1 less than 70% , which means that about 60% of patients suffered from exacerbated copd ( table 1 ) . one hundred and fifty five bronchoalveolar lavage ( bal ) samples were collected from 153 malignant patients under chemotherapy as the second group in this study . all of them were also hiv seronegative as confirmed by elisa and western blot test . the cause of hospitalization in 44% of these patients was pneumonia , and 20% of them showed unilateral or bilateral infiltration on ct scan . others were hospitalized due to pulmonary edema , hemorrhage , chest pain , fever and chills , and trauma . pulmonary symptoms related to the background disease ( malignancy ) were the cause of hospitalization in 7 other patients . there was no ct scan report for 72% of patients ( table 2 ) . we could isolate dna of p. jirovecii from 11 malignant patients ( 7.2% ) and 7 copd patients ( 7.9% ) . overall , the rate of p. jirovecii infection in these two groups of iranian patients was 7.4% ( table 3 ) . in this survey , no other fungal or bacterial growth was observed on routine laboratory culture media , and viral agents were determined not to be the cause of pneumonia in these patients . generally , this study showed that the cause of pneumonia in 10.5% of pms was p. jirovecii . p. jirovecii is a eukaryotic microorganism which can infect most mammals in addition to human beings . one of the most important characteristics of these organisms is their pathogenicity in immunosuppressed patients . this characteristic has made the organism known as one of the most dangerous microorganisms in immunodeficient patients , particularly hiv positive ones ( 23 , 24 ) . historically , the first epidemic of the disease was reported during the second world war in european countries ( 2 ) . although it was detected as a dangerous disease in iranian orphanages in the 1950s ( 25 ) , there is no systematic study about the disease and its prevalence in iranian patients . one probable reason is the problem of detection : in most laboratories the detection of the organism is based on giemsa staining , a method that depends on the experience of the microscopist and has low sensitivity . nowadays , in most developed countries molecular assays such as nested pcr are used to detect colonized or active cases of p. jirovecii . for this reason we tried to diagnose p. jirovecii in immunosuppressed patients by nested pcr and to find the rate of active disease and colonization in these categories of patients in comparison with clinical symptoms . p. jirovecii is known as one of the most important colonizing agents in copd patients and can be dangerous for them .
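As a quick check on the arithmetic, the proportions quoted above follow directly from the stated counts; the small Python sketch below reproduces them. Note that 7/67 rounds to 10.4%, marginally below the 10.5% quoted in the text.

```python
# Reproduce the detection and colonization proportions from the stated counts.
reported = {
    "copd patients colonized": (7, 89),       # 7 of 89 copd patients
    "pms with p. jirovecii dna": (11, 153),   # 11 of 153 patients with malignancies
    "both groups combined": (18, 242),
    "pneumonia in pms caused by p. jirovecii": (7, 67),
}

for label, (k, n) in reported.items():
    print(f"{label}: {k}/{n} = {100 * k / n:.1f}%")
```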
therefore , it is very important to detect active and colonized cases in patients whose immune system is impaired . based on the results , we could isolate dna of p. jirovecii from 7.9% of admitted copd patients without any signs and symptoms of pneumonia . it is also possible that a person has subclinical infection with p. jirovecii until his or her immune system becomes deficient ( 26 ) . some studies in european countries reported a prevalence of 6 - 40% for p. jirovecii among patients with copd ( 7 , 27 - 29 ) . a positive pcr result together with pneumonia symptoms indicates active disease in a patient , but none of our studied copd patients hospitalized in the icu ward showed pneumonia symptoms . they were admitted because of influenza , sore throat , wheezing , purulent sputum , fever , body pain and hoarseness . although they were referred to our hospital more than once , the clinical and radiological findings did not show any evidence of pneumonia . therefore , our deduction was that the positive pcr results were due to colonization of the microorganism in the lung and not to active disease in these patients . the reported rate of colonization of p. jirovecii in the lungs of copd patients is inconsistent with ours ( 30 ) . this is possibly due to the different kinds of specimens : in our study , nasopharyngeal washing or sputum was tested because of a lack of patient cooperation , while in the similar study conducted by morris et al . , bronchoalveolar lavage or induced sputum specimens were tested . it should be noted that the sensitivity of the pcr assay for detecting dna of p. jirovecii is higher in pulmonary specimens , especially induced sputum and bronchoalveolar lavage , than in nasopharyngeal washing . p. jirovecii is one of the most important infective agents that trigger the severity of copd ( 31 ) . based on the global initiative for chronic obstructive lung disease ( gold ) , the studied copd patients were divided into 4 groups ( stages i , ii , iii , iv ) . our study showed that p. jirovecii was colonized in patients at stages ii and iii . in this study just five percent of patients were in stage iv , while in the morris et al . study more patients were classified as stage iv . they wanted to detect the relationship between colonization and the severity of copd ( 30 ) . our study , compared with theirs , had two important differences : ( 1 ) the kind of specimens and ( 2 ) the stage of disease . although our results did not exactly show an association between the severity of disease and colonization of p. jirovecii , the fev1 percentage was under 70% in six of seven positive cases ( 85.7% ) . therefore , it could be concluded that colonization of p. jirovecii is one of the causes of intensified copd . recently , sporadic cases or outbreaks of pneumocystis pneumonia were reported in patients receiving immunosuppressive therapy for transplantation or malignancy ( 32 , 33 ) . chemotherapy with cytotoxic drugs suppresses the immune system , and patients under chemotherapy are susceptible to infectious diseases , especially opportunistic agents such as p. jirovecii . physicians confirmed that 43.8% of these patients were hospitalized because of pneumonia ; the cause of pneumonia was p. jirovecii in 10.5% of them ( 7/67 ) . in these cases , although p. jirovecii was one of the colonizing agents in copd patients , it could cause active disease in about 91% of positive pms ( 10/11 ) . one of the possible reasons for this dramatic difference in colonization between the two categories of studied patients is the type of immunodeficiency .
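For readers unfamiliar with the GOLD spirometric staging mentioned above, the sketch below shows the classical cut-points (stage I ≥ 80%, II 50-79%, III 30-49%, IV < 30% of predicted FEV1). Treat the exact thresholds as an assumption about which revision of the guideline the authors applied.

```python
# Hedged sketch of GOLD spirometric staging by FEV1 (% of predicted).
def gold_stage(fev1_percent_predicted: float) -> str:
    """Map FEV1 (% predicted) to a GOLD stage label (assumed classical cut-points)."""
    if fev1_percent_predicted >= 80:
        return "stage I (mild)"
    if fev1_percent_predicted >= 50:
        return "stage II (moderate)"
    if fev1_percent_predicted >= 30:
        return "stage III (severe)"
    return "stage IV (very severe)"

# e.g. the six of seven colonized patients with FEV1 below 70% would all fall
# into stage II or worse under this scheme
for fev1 in (85, 65, 45, 25):
    print(fev1, "->", gold_stage(fev1))
```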
most of the pms consumed cytotoxic drugs such as methotrexate , which impair cellular immunity , especially t cells ; however , in copd patients humoral immunity was attenuated because of corticosteroids . logically , the microorganism can cause active disease in patients whose cellular immunity is impaired , i.e. , the malignant patients in our study . despite the lack of any report on the prevalence of pcp among iranian immunocompromised patients , it should be considered a potentially dangerous infectious disease . our study showed that p. jirovecii was one of the opportunistic agents that can colonize the lungs of copd patients , while it can cause pneumonia in pms under chemotherapy . since most of the time it remains undetected by routine laboratory tests , we suggest that it be diagnosed with more sensitive methods such as pcr . our results suggest that p. jirovecii is an infectious agent that may play a role in the pathophysiology of copd . however , future studies are needed to further define the role of pneumocystis infection in copd .
background and objectives : with the increasing rate of immunodeficiency diseases in the world , opportunistic microorganisms such as pneumocystis jirovecii ( p. jirovecii ) become more important . little information is available on the prevalence of this life - threatening microorganism in iran . this study was designed to determine the colonization and the rate of active disease caused by p. jirovecii in two groups of iranian immunosuppressed patients . materials and methods : two hundred and fifty five pulmonary samples were collected from two groups of immunosuppressed patients to detect a 260 bp fragment of the mt lsu rrna gene of p. jirovecii by nested pcr . the first group was copd patients who consumed oral , inhaled or injectable corticosteroids and the second group was patients with malignancies under chemotherapy . both groups were referred to the national research institute of tuberculosis and lung disease and imam khomeini hospital because of pulmonary symptoms . all patients introduced to this project were confirmed hiv seronegative by elisa and western blot test . results : the mean age of copd patients was 66.5 ± 11 ( 41 - 88 ) years and all of them were men . the mean age of patients with malignancy ( pms ) was 43 ± 11 ( 23 - 65 ) years and 51.6% were men . p. jirovecii was colonized in 7 of 89 copd patients ( 7.9% ) and its dna was isolated from 11 of 153 pms ( 7.2% ) . the microorganism could cause active disease in 7 of 67 ( 10.5% ) pms who suffered from pneumonia . conclusion : the study showed that p. jirovecii was one of the colonizing agents in copd patients , but it could cause active disease in pms . generally , the microorganism can exist in the lungs of non - hiv+ immunosuppressed patients . therefore , it should be considered as a potential infective agent in non - hiv+ immunocompromised patients .
malaria transmission is ordinarily calculated as the product of the density of anopheline vectors per human and the infectivity of this anthropophilic fraction of mosquitoes . up to now , all - night stationary direct bait collections ( also called human landing collections ) , with a human acting both as bait and collector , have been the reference method for evaluating mosquito density per human . most of the time , mosquitoes that land on human skin are collected before they have bitten but , clearly , this method exposes men to mosquito bites . therefore , alternative methods are needed and there have been many attempts to develop new strategies and traps , with varying degrees of success . the ' mbita trap ' : - is baited by one human protected from mosquito bites ; - allows the human to sleep ad libitum ; - consists of a modified conical bednet made of white cotton cloth ( not netting ) that concentrates in its upper part the heat and various odours produced by the human bait ; the apex is made of netting and forms a funnel with a small round hole ( 5 cm in diameter ) at its base that permits the entrance of mosquitoes but impedes their escape ; a netting panel is fixed halfway up the net to separate the upper mosquito chamber from the lower human chamber ; - is inexpensive to produce , does not require any maintenance , and is simple to use . mathenge et al provide evidence of its efficacy in trapping laboratory - reared anopheles gambiae released in a screen - walled greenhouse in the mbita point icipe field station , near lake victoria , kenya . when compared side - by - side with similar samples of mosquitoes , the mbita trap caught 43.2 ± 10% of the number caught by human landing collections . clearly , if such success were verified in the field with wild mosquitoes , this trap would become an attractive alternative for mosquito surveillance . the aim of this study was to evaluate the success of mbita traps in sampling mosquitoes in the field conditions of the malagasy highlands , with special reference to two indicators , the anopheline vector species and the anopheline density per human . in other words , we made a comparison of methods between the mbita trap and human landing collection . the study was carried out in three traditional villages on the western fringes of the central highlands of madagascar , antananarivo province , tsiroanomandidy prefecture . these villages were : - andranonahaotra ( anh ) , 1,002 inhabitants , 400 zebus , coordinates 19°00'34"s 46°25'21"e , altitude 920 m , ankadinondry - sakay commune ( fig 1 : view of the village of andranonahaotra ) ; - soanierana ( soa ) , 1,274 inhabitants , 160 zebus , 19°08'42"s , 46°25'26"e , 900 m , mahasolo commune ; - analamiraga ( amg ) , 900 inhabitants , 390 zebus , 19°14'35"s , 46°16'22"e , 885 m , maroharona commune ( fig 2 ) . these three villages follow a general line ne - sw and are separated by 14 km for anh - soa and 17 km for soa - amg . rice production is the main activity of villagers . in this region , most people ( > 99% ) do not use bednets and zebus are kept within the village at night . in the twentieth century , the central highlands of madagascar have experienced large malaria outbreaks . a national programme for preventing malaria epidemics is in place , with caid ( " campagne d'aspersion intra - domiciliaire " of insecticide ) performing ddt spraying of house walls at 2 g/m² .
The houses in the study area are normally covered by this treatment, but the last insecticide treatment was carried out pre-1998 in AMG, in 2000 in SOA and in 2001 in ANH, i.e. more than 60, 36 and 24 months, respectively, before the beginning of this study. Adult male volunteers were placed in a room ordinarily used as a bedroom, or out of doors in places protected from the rain. According to WHO recommendations, mosquitoes were collected with glass tubes closed by cotton plugs as they landed on the exposed lower legs of adult humans (see figure). Malaria prophylaxis was offered. In each village, collections were performed monthly for two consecutive nights, from 18.00 to 06.00. Each night, four houses were used and, for each house, two men were sited indoors and two outdoors, working in six-hour shifts. The total number of men per night was 32, divided into two teams of 16 (Figure: indoor landing collection of mosquitoes). Mbita traps were used as described and baited with a man resting in bed and in the trap for 12 hours, from 18.00 to 06.00 (see figure). Three traps were used per night, with one outdoors and two indoors in separate bedrooms without people other than the one under the trap, in order to avoid local competition for mosquitoes between the trap and other, more accessible people. Bedrooms chosen for Mbita trap collections were used for one single night each month (i.e. four different bedrooms per village and per month). At 06.00, when the human bait left the trap, an experienced technician collected the mosquitoes with an aspirator. The man who acts as bait sleeps and is protected from mosquito bites. By design, mosquitoes enter the trap via a funnel-shaped entry-no-return port at the bottom of the trap. For species identification within the An. gambiae complex, a sample of 50 females per village and per month was tested by PCR (this sample was obtained by human landing, pyrethrum spray, and artificial pit shelter collections). As only Anopheles arabiensis was observed in a sample of over 1,100, any An. gambiae s.l. specimen is hereafter referred to as An. arabiensis. The origin of the blood meal of anophelines found fed in traps was assessed by ELISA. One human-night refers to the unit of human landing collections, i.e. the activity of mosquitoes on one human during the whole night. A trap-night refers similarly to the activity of one trap during the whole night. The efficiency of the Mbita trap is defined as the number of mosquitoes collected per trap-night divided by the number of mosquitoes collected per human-night under similar conditions of location (indoors and/or outdoors) and time (nights of observation). Monthly An. funestus samples were compared between Mbita trap and human landing collections using Pearson's correlation coefficient. The results are from December 2002 to May 2003 in AMG (i.e. 12 nights, with 192 man-nights for human landing collections and 36 trap-nights for Mbita trap collections) and from December 2002 to March 2003 in SOA and ANH (i.e. 8 nights, with 128 man-nights and 24 trap-nights in each of these two villages).
The whole data set consists of 6,899 mosquitoes for human landing collections and 85 for Mbita trap collections. Mosquitoes landing on humans belonged to 26 species (10 Anophelinae and 16 Culicinae) and those collected with Mbita traps to eight species (three Anophelinae and five Culicinae) (see Additional file 1 for the complete data used to perform this analysis). Mosquito species with fewer than five specimens in human landing collections were excluded from the analysis (i.e. a total of 18 mosquitoes, 2 Anophelinae and 16 Culicinae, all from human landing collections, representing 0.26% of the whole data set), and the results presented hereafter concern 6,881 and 85 mosquitoes, respectively, belonging to 17 species (Table 1: number and density of mosquitoes collected indoors and outdoors by human landing collections and Mbita trap collections; legend: Man = mosquitoes collected during the night by human landing catches, Mbita = mosquitoes collected during the night by the Mbita trap, m-n = number of man-nights). The ratio of the total numbers of Anophelinae/Culicinae was 2.02 for the human landing catch and 1.18 for Mbita trap collections (p = 0.015 by Fisher's exact test). On average, one man-night collected 15.36 mosquitoes (10.27 Anophelinae and 5.09 Culicinae) and one trap-night collected 1.01 mosquitoes (0.55 Anophelinae and 0.46 Culicinae). Overall, the efficiency of Mbita traps vs. human landing collections is 0.066. This is not influenced by the indoor/outdoor location (efficiency 0.050 indoors and 0.098 outdoors, p > 0.99 by Fisher's exact test). For Anopheles funestus, the efficiency is 0.103 indoors and 0.237 outdoors (p = 0.074); for An. arabiensis, it is 0.070 indoors and 0.000 outdoors (p = 0.28). For An. funestus, variations of the efficiency were also analysed per village and month (the original data used for this analysis are given in the accompanying table). The efficiency was 0.036 in AMG, 0.963 in SOA and 1.212 in ANH (χ² = 165.7, df = 2, p < 0.001), with values that varied inversely with the density of this species in human landing collections. It was also highly variable between months and ranged from 0 to 6.8 (maximum in February, outdoors, in SOA), without clear tendencies that would provide clues to explain this variation. Besides this analysis of efficiency, no statistically significant positive correlation between the An. funestus densities obtained by the two methods was evidenced, either for the indoor or the outdoor samples (indoor, Pearson's correlation coefficient r = -0.21, n = 14, p = 0.47; outdoor, r = 0.20, n = 14, p = 0.50). A similar analysis using log(x + 1)-transformed values did not modify the conclusions. The An. gambiae s.l. collected in Mbita traps all came from an indoor trap at AMG in January. Of the An. funestus collected in the Mbita trap, one was collected fully fed in an indoor trap and had taken its blood meal from a zebu. Of the 18 An. funestus examined for ovaries, 12 were parous and six were nulliparous, i.e. an excess of nullipars relative to those sampled by human landing collections (85% parous among 1,512 mosquitoes, collected either indoors or outdoors, without difference in the parity rate) (p = 0.04 by Fisher's exact test). The efficiency of the Mbita trap compared to human landing collections is thus very poor for all species of mosquitoes (with the possible exception of An. funestus). This low efficiency, observed in the highlands of Madagascar with wild mosquitoes, is in complete contradiction with previously published results obtained under semi-field conditions using laboratory-reared An. gambiae.
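The headline efficiency figure can be reproduced directly from the aggregate counts reported above. The following sketch (ours, not the authors' code) uses the totals of 6,881 mosquitoes over 448 man-nights and 85 mosquitoes over 84 trap-nights:

```python
# Illustrative sketch (ours, not the authors'): the Mbita-trap efficiency as
# defined in the text, i.e. catch per trap-night divided by catch per human-night,
# computed from the aggregate numbers reported for this study.
mosq_human, man_nights = 6881, 448
mosq_trap, trap_nights = 85, 84

per_human_night = mosq_human / man_nights     # ~15.36
per_trap_night = mosq_trap / trap_nights      # ~1.01
efficiency = per_trap_night / per_human_night

print(f"{per_human_night:.2f} per human-night, {per_trap_night:.2f} per trap-night")
print(f"efficiency = {efficiency:.3f}")       # ~0.066, as reported
```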
We hypothesise that the main reason for these discrepancies (between our field results and the earlier semi-field results) resides in the well-known zoophilic/exophilic trophic preferences of Malagasy mosquitoes. During the study, the anthropophilic rate for An. arabiensis was 0.00 for those collected indoors by pyrethrum spray collections (only 12 fed females tested) and 0.016 for those resting outdoors in pit shelters (318 tested, unpublished data). This zoophilic/exophagic behaviour may work against entry into the trap, which is thought to rely on a positive response to the convective heat currents and various odours produced by the human bait in the trap. Unfortunately, there is no explanation for the large variations in trap performance and its unreliability relative to the reference method constituted by the human landing collection. Why is the efficiency of the trap higher in villages with low density in human landing collections? One fact is that anthropophilic behaviour is not positively linked to this efficiency: the rate for indoor An. funestus was 0.33 in AMG, 0.61 in SOA and 0.19 in ANH (unpublished data obtained from about 400 mosquitoes collected by pyrethrum spray collections), i.e. the higher efficiency was observed in the village with a lower anthropophilic rate. The outdoor anthropophilic rate of An. funestus was 0.10 in the three villages (from about 200 mosquitoes resting in pit shelters), i.e. the higher efficiency was observed outdoors, where the anthropophilic rate was lower. These data contradict the hypothesis on zoophily stated in the previous paragraph. Is there a density-dependent factor acting on the efficiency of the Mbita trap, as suggested by the observations in the three villages? Would an unbaited Mbita trap collect mosquitoes? The efficiency of the Mbita trap appears to be poor and/or unreliable compared to classic human landing collections in the highlands of Madagascar. Our findings do not corroborate those obtained in previous experiments under semi-field conditions in a greenhouse using laboratory-reared An. gambiae. A possible explanation for this discrepancy is the marked zoophilic preferences of Malagasy mosquitoes (including An. arabiensis). Human landing collections remain the gold-standard method for evaluating mosquito density and, thus, malaria transmission in the highlands of Madagascar. However, this conclusion cannot be extrapolated to areas, such as most of tropical Africa, where malaria vectors are consistently endophilic and anthropophilic. FR and VORA participated in the data collection and actively contributed to the interpretation of the findings. Additional file 1 gives the number of An. funestus per man and per night by human landing collections and Mbita trap collections (Man = mosquitoes collected during the night by human landing catches; Mbita = mosquitoes collected during the night by the Mbita trap); one sheet presents the total number and density of mosquitoes per species and endophilic/exophilic behaviour, and the other sheet presents the total number and density of mosquitoes per species and village. We extend our thanks to the medical entomology staff of the Institut Pasteur de Madagascar for their participation in the field trials. We are also grateful to the villagers who kindly gave us access to their homes, thereby making this study possible. We would also like to thank Evan Mathenge and Carlo Ayala for providing the Mbita traps and the photos, respectively, and to acknowledge Bart Knols, Arthur Talman and two anonymous reviewers for improving the manuscript.
Funds were obtained from IRD and the Institut Pasteur through the ACIP "Population parasitaire des moustiques".
Background: One method of collecting mosquitoes is to use human beings as bait. This is called human landing collection and is the reference method for evaluating mosquito density per person. The Mbita trap, described by Mathenge et al. in the literature, is an entry-no-return device in which a human is used as bait but cannot be bitten. We compared the Mbita trap and human landing collection under field conditions for estimating mosquito density and malaria transmission. Methods: Our study was carried out in the highlands of Madagascar in three traditional villages, for 28 nights distributed over six months, with a final comparison between 448 man-nights for human landing collections and 84 trap-nights for the Mbita trap, resulting in 6,881 and 85 collected mosquitoes, respectively. Results: The number of mosquitoes collected was 15.4 per human-night and 1.0 per trap-night, i.e. an efficiency of 0.066 for the Mbita trap vs. human landing collection. The number of anophelines was 10.30 per human-night and 0.55 per trap-night, i.e. an efficiency of 0.053. This efficiency was 0.10 for indoor Anopheles funestus, 0.24 for outdoor An. funestus, and 0.03 for Anopheles arabiensis. Large and unexplained variations in efficiency were observed between villages and months. Conclusion: In the highlands of Madagascar, with their unique, highly zoophilic malaria vectors, Mbita trap collection was poor and unreliable compared to human landing collection, which remains the reference method for evaluating mosquito density and malaria transmission. This conclusion, however, should not be extrapolated directly to other areas, such as tropical Africa, where malaria vectors are consistently endophilic.
The reduction of a feature vector to an optimized dimensionality is a common problem in the context of signal analysis. Consider, for example, the assessment of the dynamics of biomedical/biophysical signals (e.g., EEG time series). These may be assessed with either linear (mainly power spectral) and/or nonlinear (mainly fractal dimension) analysis methods [1-5]. Each of the methods used for analysis of the time series extracts one or several measures out of a signal, such as peak frequency, band power, correlation dimension, K-entropy, and so forth. Some, but not necessarily all, of these measures are supposed to exhibit state-specific information connected to the underlying biological/physiological process. An appropriately weighted collection of these information-specific measures may span an optimal feature vector in the sense that the states may be best separated. The temporal variation of these signals often has to be regarded as being almost stationary over limited segments only, and not as being stationary in a strict sense, a property which is sometimes denoted as quasistationarity. This suggests regarding a specific outcome as being randomly drawn from a distribution of outcomes around a state-specific mean. Hence any inference made on such outcomes must be based on statistics relating the effect of interest to that stochastic variation, even when regarding a single individual. If a comparative study is conducted, one has to select samples of probands, and this again introduces sources of random variation into the analysis. Efforts must be made (1) to retrieve effects out of the random variations for the different measures and (2) to reduce the set of all measures to the set of those which allow for a reliable state identification. A widespread statistical method used to attack the first type of problem is known as analysis of variance. Given the $i$th measurement of a biophysical/biomedical signal, the perhaps simplest variance-analytic model for this signal reads as (1) $\mathrm{signal}_{ji} = \alpha_j + \mathrm{error}_i$, where $i$ denotes the $i$th measurement of the signal obtained under experimental condition $j$. The so-called effect (or treatment) term $\alpha_j$ may be a fixed or a random effect and either continuous or discrete. The analysis of variance infers the extent to which the estimates of the squared differences among the effects $\alpha_j$ rise above the squared error. Testing the significance of the effect then depends upon whether the levels $\alpha_j$ are regarded as fixed or random, whereby the null hypothesis is normally formulated as having equal levels. A typical situation for this problem arises when a study is based on a sample of probands. The probands must be viewed as a random sample drawn out of the reservoir of all possible individuals. If no correction is made, the analysis result applies specifically to the sample at hand. This is in most cases not the effect hunted for, because one searches for results applicable also to those (normally, the vast majority of) humans who were not included in the study, for example, reliable discriminant functions. The classical approach in variance analysis splits the effect term into two parts, fixed and random, and also enriches the error term with an estimate of the random part. As an alternative to this classical approach, one may consider the family of so-called F-ratio tests, which are based on randomly splitting and recollecting the sample.
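Before turning to these resampling-based F-ratio statistics, the following minimal sketch (illustrative only; data and condition means are invented) shows the classical fixed-effect model of the form (1) and its standard one-way F-test, which the later statistics build on:

```python
# Minimal illustration (invented data) of the fixed-effect model (1) and its
# classical one-way ANOVA F-test.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
condition_means = [0.0, 0.3, 0.6]                        # assumed treatment terms
groups = [mu + rng.normal(0.0, 1.0, size=20) for mu in condition_means]

f_value, p_value = f_oneway(*groups)                     # H0: all treatment terms equal
print(f"F = {f_value:.2f}, p = {p_value:.4f}")
```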
One hereby chooses repeatedly random subsets of the original data to gain an estimate of the variance of $F$, namely $\sigma^2(F)$, and inspects the ratios $\sigma(F)/F$ or variants thereof. Here $F$ denotes the quantity obtained from an F-test (cf. Section 2.1). Such resampling methods have proven capabilities to enhance statistical inference on parameter estimates which are not available otherwise. F-ratio test statistics have been indicated to (a) better retrieve fixed effects by fading away the random parts and (b) allow for an incremental test, that is, testing the effect of the inclusion of additional variables into an existing feature vector. The latter property makes them especially interesting when one tries to reduce the dimension of a feature vector to an optimal size. The different combinations with additional variables included lead to different probabilities under the hypotheses of interest, which, in turn, allow for a weighted inclusion of these measures into an optimal feature vector. A traditional way of model selection would be to perform the analysis on all combinations of features of interest and then to make a decision with the help of some information criterion (AIC, BIC, etc.). These try to select the optimal combination by weighting the number of measures in the model against the residual error. This kind of selection leads to an inclusion of a measure with weight of either one or zero, however, and may neglect knowledge gained from incremental tests such as those mentioned above. Weighting information from different sources to an optimal degree is frequently conducted via Bayes' theorem. The Bayesian view will be adopted here to derive weights different from zero and one for the construction of feature vectors, that is, to allow for partial inclusion. We note that reduced inclusion is also an important property of the so-called shrinkage or penalized regression methods. We first recapitulate the derivation of three different F-ratio test statistics and outline the computational scheme to construct the corresponding confidence intervals by means of Monte Carlo simulations, together with a comparison to the outcome of the traditional method. We then show the inclusion of the outcome of these multivariate statistical methods into a selection scheme following a Bayesian heuristic by weighting hypotheses. These weights are the basis for constructing reliable feature vectors suitable for further analysis, for example, discriminance procedures. We demonstrate our approach on the reanalysis of an earlier study and address the problem of state specificity: psychosis versus nonpsychosis as expressed in the EEG. It is shown that an optimal combination of the so-called relative unfolding (or Takens') measure and two power spectral estimates will allow for a correct classification of at least 81% of the probands, even in the absence of active mental tasks. The use of analysis of variance is the traditional approach to distinguish systematic effects from noise. The methods of analysis of variance (ANOVA/MANOVA) try to decompose the variance of a population of outcomes (e.g., the results of EEG assessments obtained under different well-defined conditions) into two parts, namely the treatment effect and the error effect. We adopt the notation of Bortz and denote the treatment effect as $H$ and the error effect as $E$. The treatment effect $H$ explains how much of the total sum of squares may be due to a systematic effect of the different conditions (treatments).
The second part, $E$, is an estimator of the remaining sum of squares due to other random or noise effects. In the light of (1), the error term affects both $E$ and $H$, whereas the treatment term $\alpha_j$ affects $H$ only. The important question is to what extent the treatment effect rises significantly above the level of a possible error effect. The quantity entering this test is (univariate case) (2) $C = H^2/E^2$. As stated above, $H^2$ denotes the sum of squares due to treatment and $E^2$ the sum of squares due to error. If the influence of the treatment is zero, $H^2$ also reflects only the error influence. Hence the test may be formulated as an F-test, that is, one tests whether a calculated value of $F$ might have occurred by chance or whether it deviates significantly from an outcome by chance. This might be done classically by comparing the evaluated value of $F$ with the values in a table of F-value probabilities, or by obtaining it from an appropriate statistical software package. The F-value is given as (3) $F = C\,g\,\frac{\mathrm{df}_E}{\mathrm{df}_H} = \frac{H^2}{E^2}\,\frac{\mathrm{df}_E}{\mathrm{df}_H} = \frac{H^2/\mathrm{df}_H}{E^2/\mathrm{df}_E}$, where $g$ is some appropriate weight (without effect in the univariate case, however), and $\mathrm{df}_E$ and $\mathrm{df}_H$ are the corresponding degrees of freedom, respectively. The univariate case (ANOVA) tests the influence of one or more treatment effects upon the outcome of a single variable, for example, how the nonlinear correlation-dimension estimate $b_0$ is affected by group, mental situation, and proband (cf. below). The possible existence of an overall effect must be tested not only on $b_0$ but simultaneously on all evaluated measures, however. So the appropriate test is not a sequence of ANOVA tests but a multivariate approach (MANOVA). This is because the outcomes of the variables might be statistically dependent to some degree, and thus the simultaneous effect is different from the set of effects of the individual variables. The F-test now depends on the eigenvalues of the matrix $HE^{-1}$, in analogy to (3), but the single weight $g$ splits up into weights $g_i$, and these may be different for different axes $i$: (4) $F_H = \frac{1}{g}\sum_{i=1}^{s} c_i\,\frac{\mathrm{df}_E}{\mathrm{df}_H}$ (i.e., $g_i = 1/g$ for all $i$), (5) $F_P = \frac{\sum_{i=1}^{s} c_i/(1+c_i)}{s - \sum_{i=1}^{s} c_i/(1+c_i)}\,\frac{\mathrm{df}_E}{\mathrm{df}_H}$ (i.e., $g_i = 1/(1+c_i)$), or (6) $F_R = \frac{c_1/(1+c_1)}{1 - c_1/(1+c_1)}\,\frac{\mathrm{df}_E}{\mathrm{df}_H}$ (i.e., $g_1 = 1/(1+c_1)$; $g_i = 0$ for $i \ge 2$), where $c_i$ is the $i$th eigenvalue (ordered by value) of the matrix $HE^{-1}$, and $s = \mathrm{rank}(HE^{-1})$. Equation (4) is known as Hotelling's (generalized) $T^2$, (5) as Pillai's trace, and (6) as Roy's largest root. For a sufficiently large number of observations, $F_H$, $F_R$, and $F_P$ become equivalent and, in the $s = 1$ case, they become identical. As in the univariate case, testing for significance of an effect is done by evaluating the probability that a calculated F-value might occur by chance. The software packages that perform MANOVA normally return this probability together with further properties of the sums of squares involved in $H$ and $E$. To motivate the derivation of our algorithm, we consider the influence of a randomly chosen sample of persons out of a population, whereby other effects might also be present but fixed. The effect term $H$ may then be decomposed into (7) $H^2 = (A)^2 + (PA)^2 + (E)^2$, where $(A)$ denotes the influence of the fixed conditions, $(PA)$ the effect of the (randomly chosen) persons, and $(E)$ the influence of the random error effects. (We note that the quantities $(A)$ and $(PA)$ are sometimes also called treatment effects in a biomedical context.)
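The multivariate statistics just introduced can be condensed into a short routine. The sketch below assumes the hypothesis and error SSCP matrices $H$ and $E$ are available; the constant weight $1/s$ used for the Hotelling-type value is an assumption on our part, and the demo matrices are placeholders:

```python
# Sketch under stated assumptions: given hypothesis and error SSCP matrices H and E
# from a MANOVA, compute the eigenvalues c_i of E^{-1}H and form F-type quantities
# in the spirit of Eqs. (4)-(6).  The 1/s weight for the Hotelling form is assumed.
import numpy as np

def manova_f_values(H, E, df_h, df_e):
    c = np.sort(np.linalg.eigvals(np.linalg.solve(E, H)).real)[::-1]
    s = int(np.sum(c > 1e-10))                       # number of nonzero roots
    ratio = df_e / df_h
    f_hotelling = c[:s].sum() / s * ratio            # Hotelling-type, assumed weight 1/s
    trace_p = np.sum(c[:s] / (1.0 + c[:s]))
    f_pillai = trace_p / (s - trace_p) * ratio       # Eq. (5), Pillai's trace
    theta = c[0] / (1.0 + c[0])
    f_roy = theta / (1.0 - theta) * ratio            # Eq. (6), Roy's largest root
    return f_hotelling, f_pillai, f_roy

# tiny demo with arbitrary positive-definite matrices (values are placeholders)
rng = np.random.default_rng(1)
A, B = rng.normal(size=(4, 3)), rng.normal(size=(40, 3))
print(manova_f_values(A.T @ A, B.T @ B, df_h=3, df_e=39))
```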
Under the null hypothesis of having no fixed effect, $(A)^2$ vanishes. Generally, if an observable stems from a subpopulation drawn from a larger set, the corresponding effect may itself become random. This is normally the case when regarding person as a condition (one will never be able to assess all humans). Hence $(PA)^2$ is zero only within the bounds of statistical deviations. The classical approach to solving this problem within the ANOVA/MANOVA framework is a modification of the F-test. The error term is hereby enhanced from $E^2$ to $(E^2 + (PA)^2)$, and the effect is tested through $H^2/(E^2 + (PA)^2)$ instead of (2). The obvious disadvantage is the requirement of a higher level of the effect $(A)$, which has to rise significantly above the noise term $(E^2 + (PA)^2)$ as compared to the pure noise level due to $E^2$. So an attempt to test $(H^2 - (PA)^2)/E^2$ seems more favourable, but this might lead to a negative variance estimate, and it is not clear what effective degrees of freedom would have to be assigned to such a variance estimate. To overcome this situation, we propose a statistic estimating the influence of the population with the help of a resampling technique. This statistic is based on the decreasing sample-to-sample variation when a fixed term is present, as compared to the influence of purely random effects. We rely (a) upon the classical error propagation rule and (b) upon the variance of the variance. The former reads (8) $\sigma^2(g(x)) \approx \left(g'(x)\right)^2 \sigma^2(x) + \mathrm{h.o.t.}$, where $g$ is a smooth function, $x$ a random variable, and h.o.t. denotes higher-order terms, as usual in error propagation. We mention further that, neglecting variations around absolute means, the variance of an empirical variance estimate may be written as (9) $\hat\sigma^2(\sigma^2) = \frac{2\hat\sigma^4}{\mathrm{df}}$. We denote the variance by $\sigma^2$ and the empirical variance estimate by $\hat\sigma^2$. This conforms to (3). As our last step (c), we decompose $\sigma^2(H^2)$, the variance of the effect term: (10) $\sigma^2(H^2) = \sigma^2((PA)^2) + \sigma^2((E)^2)$. Essential here is the fact that the fixed effect does not contribute to the variation of $H$ and accordingly does not enter into the variance $\sigma^2(H^2)$. With (9), (8), and (7), we may write the variance of the F-value defined in (3) as (11) $\sigma^2(F) = F^2\left[\frac{\sigma^2(H^2)}{H^4} + \frac{\sigma^2(E^2)}{E^4}\right] + \mathrm{h.o.t.}$ Using (8), this turns into (12) $\sigma^2(F) = 4F^2\left[\frac{\rho^2}{2\,\mathrm{df}_k} + \frac{1}{2\,\mathrm{df}_{ek}}\right]$, where $\mathrm{df}_k$ denotes the degrees of freedom of the effect considered, $\mathrm{df}_{ek}$ the corresponding error degrees of freedom, and $\rho$ is the ratio (13) $\rho = \frac{(PA)^2 + (E)^2}{H^2}$. We note that in the case of a pure random effect, $\rho$ becomes 1, and significant deviations towards a lower value point to a non-negligible fixed effect. Equation (12) obviously suggests using the statistic $\sigma(F)/F$ to test for $\rho < 1$. According to (12), the expectation value of this statistic under the null hypothesis $\rho = 1$ is governed by the term $1/(2\,\mathrm{df}_k) + 1/(2\,\mathrm{df}_{ek})$. To gain an estimate of $\sigma(F)$, one may randomly resample, $m$ times, a subset encompassing an equal number of probands from the original sample and, each time, compute the F-value corresponding to the particular subset. The method thus becomes a variant of the so-called delete-d jackknife. It has been shown that the following quantity estimates $\sigma^2(F)$ up to a factor $\nu$ [16, 17]: (14) $\hat\sigma^2(F) = \frac{1}{m-1}\sum_j (F_j - \bar F)^2$, with $E(\hat\sigma^2(F)) = \nu\,\sigma^2(F)$. The number of random splittings conducted is denoted by $m$, the average $\bar F$ is defined as (15) $\bar F = \frac{1}{m}\sum_j F_j$, and $F_j$ denotes the F-value obtained from the $j$th of the $m$ runs. The above-mentioned factor $\nu$ depends on the number of probands and on the number of probands selected per random sample.
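A minimal sketch of the resampling estimate behind Eqs. (14)-(15): the full MANOVA setting of the paper is reduced here to a one-way F-test on two groups of probands, so the code illustrates the mechanics of $\sigma(F)/\bar F$ rather than the original analysis:

```python
# Sketch of the resampling estimate of Eqs. (14)-(15); illustrative only.
import numpy as np
from scipy.stats import f_oneway

def f_ratio_statistic(group1, group2, m=30, p=2/3, seed=None):
    """Return sigma(F)/F-bar from m random sub-samplings of two 1-D proband arrays."""
    rng = np.random.default_rng(seed)
    n1, n2 = int(p * len(group1)), int(p * len(group2))
    f_values = np.array([
        f_oneway(rng.choice(group1, n1, replace=False),
                 rng.choice(group2, n2, replace=False))[0]
        for _ in range(m)
    ])
    return f_values.std(ddof=1) / f_values.mean()   # small values hint at a fixed effect
```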
This dependence is important because $p$, the probability of a person appearing in a particular random sample, increases with the ratio (# probands per random sample)/(# probands per sample). In the case of a small sample size, this may impose an additional restriction on the variance $\hat\sigma(F)$. The cumulative distribution of the ratios $\sigma(F)/F$ will hence depend on the parameters ($\mathrm{df}_k$, $\mathrm{df}_{ek}$, # random splittings, # probands, # probands per random sample). The number of random splittings, $m$, hereby influences the cumulative distribution because higher values of $m$ lead to a narrower spread around $\sigma^2(F)$. A deviation from a random result may be found by estimating the probability that a ratio $\sigma(F)/F$ is by chance as small as or smaller than the experimentally found estimate. In the multivariate case, the off-diagonal elements of $E$ are random with an expectation value of zero. Furthermore, the trace of the matrix $HE^{-1}$ remains unchanged when the basis is changed such that the eigenvectors build the new basis. Hence the diagonal terms of $HE^{-1}$ are expected to represent, on average, the individual F-values, and the trace is the sum over the individual $F_i$'s. In the case of a fixed effect with only two states ($s = 1$) and $n$ random variables, this leads to a multivariate F with value $\frac{1}{n}\sum_{i=1}^{n} F_i$. To test the null hypothesis H0 of having random effects only, we may again use the independence of the individual $F_i$ and find teststat0, our first test statistic, (16) $\mathrm{teststat0} = \frac{\sum_i \sigma^2(F_i)}{4\left(\sum_i F_i\right)^2}$, whose distribution is a function of ($\mathrm{df}_k$, $\mathrm{df}_{ek}$, $n$, # random splittings, # probands, # probands per random sample). If random effects for the treatment term exist, things become a bit more complicated. In that case, the contributions of the individual $\sigma(F_i)$ may be unequal, and in extremis the sum may be dominated by a single term. A way to account for this effect is to consider $\mathrm{df}_{\mathrm{eff}}$, the effective degrees of freedom, defined as $\mathrm{df}_{\mathrm{eff}} = \left(\sum_i \sigma_i^2\right)^2 / \sum_i \left(\sigma_i^4/\mathrm{df}_i\right)$. This quantity is minimized if one term is clearly dominant and maximized when there are equal contributions. As stated above, if an empirical value of teststat0 appears too low, one may conclude that there is a systematic, nonrandom deviation in at least one variable between the treatment groups under consideration (see Figure 1). In the case of a true multivariate statistic type, one has to replace the univariate individual F-values by the eigenvalues of $HE^{-1}$ and modify teststat0 into (17) $\mathrm{teststat1} = \frac{\sum_{i=1}^{s} \sigma^2\!\left(\frac{1}{g_i}\sum_{j=1}^{n} k_{ij} F_j\right)}{4\left(\sum_{i=1}^{s} \frac{1}{g_i}\sum_{j=1}^{n} k_{ij} F_j\right)^2}$, where $k_{ij}F_j$ is the contribution of the individual univariate F-value $F_j$ to the $i$th eigenvalue of $HE^{-1}$, adjusted with the degrees of freedom, namely $c_i\,\mathrm{df}_E/\mathrm{df}_H$. This statistic depends on ($\mathrm{df}_H$, $\mathrm{df}_E$, $n$, # simulations, # probands, # probands per random sample, stattype, $\mathrm{df}_{\mathrm{eff}}$). If stattype, the statistic type, is Hotelling's statistic, this obviously becomes equivalent to the $s = 1$ case, because $g_i = \mathrm{const.}$ and $F = \sum_{i=1}^{s} c_i\,\mathrm{df}_E/\mathrm{df}_H$ (cf. above). This suggests two normalized versions of our test statistic, (18) and (19); the latter reads (19) $\mathrm{teststat1r} = \frac{\sum_{i=1}^{s} \sigma^2\!\left(\frac{1}{g_i}\sum_{j=1}^{n} k_{ij} F_j\right)}{4\left(\sum_{i=1}^{s} \frac{1}{g_i}\sum_{j=1}^{n} k_{ij} F_j\right)^2} \bigg/ \frac{\sigma^2(F_{\mathrm{multi}})}{4F_{\mathrm{multi}}^2}$. The expectation value under the null hypothesis (i.e., having no multivariate effect) is 1, and the cumulative distribution depends on ($\mathrm{df}_H$, $\mathrm{df}_E$, $n$, # simulations, # probands, # probands per random sample, stattype). Significant deviations from 1 indicate that at least one variable shows a fixed effect or that a between-variable effect exists.
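Under our reading of Eq. (16) above (summing the per-variable resampled F-variances, justified by the assumed independence of the $F_i$), teststat0 can be computed from an $m \times n$ array of F-values as in the following sketch; this is not the authors' implementation:

```python
# Sketch of teststat0, Eq. (16), under the stated independence assumption.
import numpy as np

def teststat0(f_samples):
    """f_samples: array of shape (m, n): m random splittings x n variables."""
    f_samples = np.asarray(f_samples, dtype=float)
    var_f = f_samples.var(axis=0, ddof=1)      # sigma^2(F_i) per variable
    mean_f = f_samples.mean(axis=0)            # F_i averaged over the splittings
    return var_f.sum() / (4.0 * mean_f.sum() ** 2)
```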
As a last step, we extend (19) to an incremental test statistic for the case in which one already has knowledge of certain measures displaying a multivariate effect. We therefore modify the test statistic teststat1r into (20) $\mathrm{teststat1m} = \frac{k^2\sigma^2(F_c) + \sigma^2(F_{\mathrm{add}})}{4\left(kF_c + F_{\mathrm{add}}\right)^2} \bigg/ \frac{\sigma^2(F_{\mathrm{multi}})}{4F_{\mathrm{multi}}^2}$, where $k$ is the number of those measures already showing a multivariate effect, and $F_c$ is the F-value found with these measures. Our assumption of an existing effect implies $F_c > 1$, because $E(F_c) > E(F_{\mathrm{random}})$ and $\sigma(F_c) < \sigma(F_{\mathrm{random}})$. Hence teststat1m tests the null hypothesis ($F_c > 1$, $\sigma = \sigma(F_c)$), that is, that the additional variable has no influence. The cumulative distribution function then depends on ($\mathrm{df}_H$, $\mathrm{df}_E$, $n$, # simulations, # probands, # probands per random sample, $F_c$, $\sigma(F_c)$, $\mathrm{df}_{\mathrm{eff}}$, stattype), because $E(F_c) > E(F_{\mathrm{random}})$ and $\sigma(F_c) < \sigma(F_{\mathrm{random}})$. Because $\sigma(F_c)$ is assumed to be unequal to $\sigma(F_{\mathrm{add}})$, we must again consider the so-called effective degrees of freedom $\mathrm{df}_{\mathrm{eff}}$ of the pooled variances. The null hypothesis states that the additional measure contributes its univariate F-value $F_{\mathrm{add}}$ to the trace while $F_{\mathrm{add}}$ is built up from nonfixed effects only. If teststat1m becomes unexpectedly high, this may be regarded as indicating an additional systematic effect due to the inclusion of this measure. If the statistic type is Hotelling's statistic, this again becomes equivalent to the $s = 1$ case. These statistics are useful for answering questions like the following: Are there measures contributing significantly to the treatment term? If so, which ones may be identified? And to what extent do they contribute to the effect? Knowledge of such measures and of their contribution to the treatment effect allows one, for example, to select them and collect them, with appropriate weights, into a feature vector usable for discriminance or predictive purposes. The quantity of interest, namely the distribution of the ratios $\sigma(F)/F$, must be evaluated numerically, and the dependence of the ratios on the number of random splittings and the number of persons involved calls for a calculation of the confidence intervals for each case. Generating the distribution of the F-ratios appropriately and, therefrom, the desired confidence interval is our method of choice to overcome this problem. The algorithm is basically a Monte Carlo technique generating $l$ outcomes and their F-ratios. This leads to a population of $l$ random deviates of the ratio $\sigma(F)/F$ according to the appropriate null hypothesis (remember Figure 1). We note that both the F-value obtained for the whole sample and $\bar F$ of (15) provide an estimate for $F$; since calculating $\sigma(F)$ and $\bar F$ is done within the same procedure, we prefer $\sigma(F)/\bar F$. From the population of the $l$ ratios, one may derive a quantile and the associated probability $p$, for example, by building a histogram or by ordering the population by rank and selecting the $(p\cdot l)$th value. This value estimates the quantile for which F-ratios as small or smaller occur by chance with probability $p$. The general scheme of our algorithm is stated in more detail as follows. The multivariate model describing our null hypotheses may be derived from (1) and may be formulated as (21) $\mathrm{signal}_{ijk} = \alpha_i(\beta_j) + \beta_j + \mathrm{error}_{ijk}$, where $\mathrm{signal}_{ijk}$ denotes the (uni- or multivariate) measured quantities, $\beta_j$ the random factor considered (e.g., different clinical groups), and $\alpha_i$ the other factor(s), which may implicitly depend on the random factor.
Determine/select the constants $k$, $l$, $m$, $\#n$, $p$, and stattype (if necessary), such that $l$ is the number of deviates desired to estimate the quantile with acceptable accuracy, $m$ is the number of random splittings needed for each deviate, $\#n$ the number of levels of the random factor (typically the number of persons involved, i.e. # probands), $p$ the relative number of levels (or persons, i.e. # probands per random sample / # probands) entering one splitting, $k$ the number of levels of $\alpha_i$, and stattype is again the multivariate statistic type. The values $k$, $m$, $\#n$, $p$, and stattype must conform to the setting with which the original data were analyzed. The scheme then runs as follows. (a) Generate a sequence of $\#n \times k$ random numbers to mimic the random errors in (21); the amplitude must be chosen to match the value found for $E$ in the original analysis. (b) Generate another random $\#n$-sequence to mimic the influence of the random factor; the assumed random treatment effect, $(PA)$, should be chosen such that $F$ matches the found univariate outcome. Add the different contributions to obtain the simulated signal. (c) Build $m$ random splittings and analyze them by the same procedures as the original sample; typically $m$ is chosen to lie between 12 and 50. (d) From the $m$ splittings, build $\sigma(F)$ and $\bar F$ according to (14) and (15), and the ratio $\sigma(F)/\bar F$; the analysis is normally done by means of a statistical software package estimating an appropriate F-value. This is sufficient for teststat0. In the case of teststat1, also build $F_{\mathrm{multi}}$, $\sigma(F_{\mathrm{multi}})$, and the ratios $\sigma(F_{\mathrm{multi}})/F_{\mathrm{multi}}$ and $(\sigma(F)/\bar F)/(\sigma(F_{\mathrm{multi}})/F_{\mathrm{multi}})$; these are necessary for the different variants of teststat1, (18)-(20). (e) Repeat steps (a) to (d) $l$ times and therefrom obtain empirically the quantile(s) of interest. As stated above, this may be done by means of a histogram or a rank-ordered sequence obtained from the $l$ F-ratios $\sigma(F)/\bar F$ and $(\sigma(F)/\bar F)/(\sigma(F_{\mathrm{multi}})/F_{\mathrm{multi}})$. Depending on the probability $p$ associated with the quantile and the desired accuracy, $l$ will typically be of the order of $10^2$ or larger.
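The scheme (a)-(e) can be condensed into a short simulation. The sketch below is illustrative only: all parameter values are placeholders, and the per-splitting analysis is reduced to a one-way F-test on the group factor, whereas the original work uses the full model (21) with a multivariate statistic:

```python
# Condensed, simplified sketch of steps (a)-(e); illustrative only.
import numpy as np
from scipy.stats import f_oneway

def null_quantile(n_probands=30, k=4, m=30, p=2/3, n_loops=1000, prob=0.05,
                  sigma_e=1.0, sigma_pa=0.5, seed=0):
    rng = np.random.default_rng(seed)
    n_sub = int(p * n_probands / 2)              # probands drawn per group and splitting
    ratios = np.empty(n_loops)
    for i in range(n_loops):
        # (a) random errors and (b) a random person effect; no fixed group effect (H0)
        person = rng.normal(0.0, sigma_pa, size=n_probands)
        signal = person[:, None] + rng.normal(0.0, sigma_e, size=(n_probands, k))
        score = signal.mean(axis=1)              # collapse the k fixed conditions
        g1, g2 = score[: n_probands // 2], score[n_probands // 2:]
        # (c)+(d): m random splittings, an F-value for each, then sigma(F)/F-bar
        f_vals = np.array([f_oneway(rng.choice(g1, n_sub, replace=False),
                                    rng.choice(g2, n_sub, replace=False))[0]
                           for _ in range(m)])
        ratios[i] = f_vals.std(ddof=1) / f_vals.mean()
    # (e) empirical lower quantile: ratios at least this small occur with probability prob
    return np.quantile(ratios, prob)

print(null_quantile())
```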
The statistic teststat1m (20) requires some attention with respect to (a) the simulation and (b) the effective degrees of freedom. This is because we estimate $\sigma(F_c)$, where $F_c$ is expected to be larger than one due to the already recognized fixed or common effect, and therefore $\sigma(F_c) < \sigma_{\mathrm{random}}$. $F_c$ is carried over from the result obtained without the measure under consideration, so we test the additional measure under the constraint that the known effect equals $F_c$ (or $F_{\mathrm{total}} = F_{\mathrm{sample\,total}}$). In the case that the measures contributing to $F_c$ are expected to carry fixed effects, the model must also be adjusted with a fixed effect, such that the expected values $E(\sigma(F))$ and $E(\sigma(F_c))$ match the corresponding values of the original sample. The quantiles must be derived at the point where $\mathrm{df}_{\mathrm{eff}}$ matches the $\mathrm{df}_{\mathrm{eff}}$ of the original sample. This may be done by repeating step (e), thus collecting a population of empirical quantiles belonging to the same probability $p$ and building a functional dependence of the quantile on $\mathrm{df}_{\mathrm{eff}}$ (cf. Figure 2, where dependencies of the form $\mathrm{quantile}_p = a_p + b_p\,\mathrm{df}_{\mathrm{eff}}$ were fitted). The alternative is waiting until $l$ results with approximately equal effective degrees of freedom have emerged by chance. The reconstruction of the model (21) is performed by generating streams of two types of uncorrelated random numbers from a normal distribution. The first type mimics the error and has simulation parameters $(0, \sigma_e)$, with $\sigma_e^2$ the estimated mean squared error of the original sample. The second type has simulation parameters $(0, \sigma_p)$, with $\sigma_p^2$ the average squared effect due to the probands. Both quantities may be read from the output of the classical ANOVA/MANOVA analysis. The expected outcome of the simulation with the classical approach will correspond to the result obtained with the original sample if the parameters $k$ and $\#n$ also correspond to the original sample and the null hypothesis H0 (no fixed effect due to person group) is true. Our clinical sample consists of 30 persons from two clinical groups evaluated in four mental states (see also Section 4.2). Because the mental states have shown fixed effects in previous studies [18, 19], the simulated signals were offset by four different fixed levels. The amount of the offset values is not relevant, however, because the offset is fixed and the F-ratio test is set up to test for differences between the two groups. Hence a simulated person has four outcomes, built by choosing four times the same random deviate from $(0, \sigma_p)$ plus four different random deviates from $(0, \sigma_e)$, enriched with the state-specific offset. The first 15 simulated persons were labeled as group 1 and the last 15 as group 2. The F-ratio tests were conducted with $m = 30$ and $p = 2/3$, if not stated otherwise. A Monte Carlo loop was normally evaluated with $l = 100$ for each stattype; hence results were obtained for each of the statistic types teststat0, teststat1r, and teststat1m. Roy's largest root (6) was used as the classical method, if not stated otherwise. The F-ratio test statistic obviously requires more numerical effort than the classical approach. We therefore tested the sensitivity of the F-ratio tests to the presence of fixed effects of person categories, that is, we tested H0 in cases where H0 is false. For each simulated data set we evaluated the probability that a test outcome at least as extreme may occur by chance. This was done for both the classical test and the F-ratio test (applying a nonparametric method). Then we built, for each data set, the difference $\Delta p$ between the probability according to the classical test and the probability according to the F-ratio test. The resulting 250 values of $\Delta p$ were then collected into a histogram.
In the case of equivalence of the two methods, one would expect a distribution of $\Delta p$ symmetric around zero. Our data (Figure 3), however, show a significant deviation from a symmetric distribution towards the F-ratio test (χ² = 5.6, p = 0.02). The F-ratio test thus seems to be more sensitive to the presence of a fixed effect than the classical approach, i.e. it has a higher tendency to reject H0 in cases where the test should reject it. This is not too surprising, however, because deviations from the expected value of the quantity $\sigma(F)/F$ enter in the 4th power instead of the 2nd power as in the classical view. A further advantage of the F-ratio test is its applicability to nonnormally distributed data, because random number generation for nonnormal data bears no additional difficulties. Having established this as a method for an incremental inclusion of measures, we now turn to the problem of using this knowledge to construct optimized feature vectors. Consider the outcomes of the tests above for, say, three measures which occur with different significance levels. We make the assumption that, among these measures (or variables), the one with the least significance also carries the least information, while the others bear more information in accordance with their significance level. The question with what weight they should enter a feature vector is regarded from a Bayesian view. Bayes' formula allows one to express a conditional probability $P[A_i \mid B]$ with the conditional probabilities $P[B \mid A_j]$ through (22) $P[A_i \mid B] = \frac{P[A_i]\,P[B \mid A_i]}{\sum_j P[A_j]\,P[B \mid A_j]}$. This may be used to express the probability of a hypothesis $H_i$ being correct by means of the probabilities of the outcomes corresponding to the different hypotheses tested for. We would like to weight the hypotheses $H_0$ (the measures display no difference between groups) and $H_1$ (the measures display a difference between groups). The probability $P(H_i)$, namely that $H_i$ is correct, appears as a natural weight for this hypothesis. Let $b$ denote the empirical outcome of an F-ratio test as obtained with the Monte Carlo technique above, and let $B$ denote the set of possible outcomes which deviate at least as much as the quantile belonging to the significance level. If $b$ deviates at least as much as this quantile, it is an element of $B$. The set $B$ then allows for weighting hypotheses by means of (22). We may set the a priori probabilities $P[H_0] = 1 - P[H_1] = c = 0.5$, because we have no a priori preference for either the hypothesis $H_0$ or an alternative $H_1$. $P[B \mid H_0] := \varepsilon$ is our present knowledge, namely the probability assigned to finding an outcome $b$ within $B$ given $H_0$; for example, $\varepsilon = 0.05$, $\varepsilon = 0.1$, and so forth. Writing $P[B \mid H_1] := c_2$, the probability of $H_0$ being true given the set $B$ may be written as (23) $P[H_0 \mid B] = \frac{c\,\varepsilon}{c\,\varepsilon + c_2(1-c)}$ and, similarly, (24) $P[H_1 \mid B] = \frac{c_2(1-c)}{c\,\varepsilon + c_2(1-c)}$. In general, we find the quantities $P[H_{1i} \mid B]$ and may formally assign an expected hypothesis through the weighted mean (25) $\langle H \rangle = \frac{\sum_i H_{1i}\,P[H_{1i} \mid B]}{\sum_i P[H_{1i} \mid B]}$. The formulation of an expected alternative hypothesis seems somewhat formal at this stage. However, if each hypothesis is intrinsically connected to a specific feature vector $F_i$, this approach returns the expected feature vector $\langle F \rangle$ given the observation $B$, (26) $\langle F \rangle = \frac{\sum_i F_i\,P[F_i \mid B]}{\sum_i P[F_i \mid B]}$, because each feature vector $F_i$ is spanned by its specific collection of measures, (27) $F_i = \{a, b, c, \ldots\}_i$. From the weights of the hypotheses one immediately also gets the weights of the measures.
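The hypothesis weighting of Eqs. (22)-(24) reduces to a few lines once $\varepsilon = P[B \mid H_0]$ and $c_2 = P[B \mid H_1]$ are fixed. In the sketch below the numerical value of $c_2$ is an arbitrary placeholder, and the relative weight of a second, less significant combination is shown as the simple ratio of significance levels (the refined value used further below in the text is 0.48):

```python
# Sketch of the hypothesis weighting in Eqs. (22)-(24); c2 is a placeholder value,
# the flat prior c = 0.5 follows the text.
def posterior_weights(epsilon, c=0.5, c2=0.01):
    norm = c * epsilon + c2 * (1.0 - c)
    p_h0 = c * epsilon / norm              # Eq. (23)
    p_h1 = c2 * (1.0 - c) / norm           # Eq. (24)
    return p_h0, p_h1

for eps in (0.05, 0.10):
    p0, p1 = posterior_weights(eps)
    print(f"epsilon = {eps:.2f}:  P[H0|B] = {p0:.3f},  P[H1|B] = {p1:.3f}")

# Relative weight of the less significant combination when the more significant
# one is assigned weight 1 (simple ratio of the significance levels; the paper's
# refined calculation gives 0.48).
print("relative weight ~", 0.05 / 0.10)
```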
In the context of EEG time series analysis, the measures $a, b, c, \ldots$ denote quantities like the correlation dimension, peak frequency, spectral band power, and so forth. The likelihood ratio $P[H_1 \mid B]/P[H_0 \mid B]$ then gives the weight with which the alternative is preferable to $H_0$ when the weight of $H_0$ is set to 1. It is expressed as (28) $\frac{c_2(1-c)/(c\,\varepsilon + c_2(1-c))}{c\,\varepsilon/(c\,\varepsilon + c_2(1-c))} = \frac{c_2(1-c)}{c\,\varepsilon}$. Now consider two alternatives $H_1$, $H_1'$ with $P[B \mid H_0] = \varepsilon_1$, $P[B' \mid H_0] = \varepsilon_2$, and $P[B \mid H_1] = c_2$ for all $H_1$ (i.e., no preference for any alternative). Their likelihood ratio may be expressed through the ratio of their likelihood ratios against the null hypothesis, (29) $\frac{c_2(1-c)/(c\,\varepsilon_2)}{c_2(1-c)/(c\,\varepsilon_1)} = \frac{\varepsilon_1}{\varepsilon_2}$. This may be regarded as the weight with which the second alternative should enter when the weight of the first alternative is set to 1. If, in addition, $H_1$ is a subset of $H_1'$, that is, the variables assigned to $H_1$ are a subset of the variables assigned to $H_1'$, this weighting applies to that part of $H_1'$ which is not common to $H_1$. We have to note that the formulation of $c_2$ is correct only when each probability $\varepsilon_i$ is small. The $i$th feature vector is regarded as the $i$th combination of measures corresponding to the $i$th hypothesis. To find the weights with which the variables enter the feature vector, we assign the weight 1 to the combination of measures with the highest significance level, taking into account the implicit dependence of $c_2$ as stated above. If a probability (and thus a weight) falls close to zero, it may be set to zero, which results in dropping that particular feature vector and its corresponding measures. As an application, we choose the problem of distinguishing, by their EEG, the two proband groups taken from a neuropsychologically oriented study. This choice was motivated by the following: it is well known that schizophrenic patients show abnormalities compared to healthy controls when the so-called evoked potentials are studied [20-22]. This may point to a threshold regulation problem in the activation of the neural network in schizophrenics, and there might be differences in the metabolism of the frontal cortex [24, 25]. Therefore one may expect differences in the spontaneous EEG. Such differences have indeed been reported repeatedly, for example in [26-28], using linear (FFT) or nonlinear (correlation dimension) analysis. An earlier analysis of the study reanalyzed here (see below) revealed a significant difference between the two samples, but only for a specific mental task: while the EEG of the controls showed a drastic decrease in dimensionality, the EEG of the patients did not exhibit any peculiarity. Other studies, however, pointed to the existence of a difference in the eyes-closed quiet state [2, 9]. The degree to which this difference is visible in the eyes-closed quiet state, that is, in the absence of external activation, is not yet established, however, and was examined with the method proposed here. The neuropsychologically oriented EEG study consisted of two groups, namely 15 acutely hospitalized subjects diagnosed as schizophrenic and 15 controls in a healthy state. A trained clinical staff member ranked each patient's symptoms on a psychiatric rating scale, and the psychopharmaceuticals were noted. Both groups were exposed to the same mental tasks, while three 30-second segments of EEG were recorded. We focus here mainly on the so-called "eyes-closed quiet" mental situation.
The EEG were recorded according to the international 10-20 standard, which allows for the so-called parallel embedding scheme. In contrast to standard methods, this technique also considers attractor unfolding, and the outcomes provide several nonlinear measures, namely the asymptotic correlation dimension ($b_0$), the so-called unfolding dimension $m^*$, and the relative unfolding (or Takens') measure. In addition, spectral analysis provided the band powers, that is, the spectral power in the 8-12 Hz and 1-5 Hz frequency bands. A complete description of the proband samples, conditions, and technical settings is given elsewhere [18, 19]. With our experimental setup, the model consists of four fixed conditions (i.e., the four mental tasks) and two groups of 15 persons (i.e., patients and controls). According to our hypothesis, however, the persons building the two groups must be suspected to contribute a sample-specific (or random) effect to the discriminant capacities between the groups (cf. Section 2), which demands the application of our scheme. In each group, 10 of the 15 persons were chosen for the simulation, that is, at the point $p = 2/3$. The findings listed in Section 4.1 led us to hypothesize differences in the absence of stimulated activation or medication. Therefore we applied our method to the EEG outcomes of the eyes-closed quiet situation. The results obtained with the different test statistics in this setting are shown in Table 1. From there one sees that the relative unfolding seems to play the role of a major indicator, because it occurs in all combinations of Table 1. This result is in agreement with findings from an earlier study and with previous results from our sample [18, 19]. One of the band powers seems to be the best spectral measure, because it appears in two combinations; an effect on this band is also in agreement with older findings in the literature. This let us expect a reliable discrimination between the two states, schizophrenic versus healthy, by means of the EEG outcomes, if a combination of measures is appropriately selected. Among the triple combinations, only the one combining the relative unfolding with the two spectral band powers seems to carry information. The combination of the relative unfolding, one band power, and $b_0$ did not show any remarkable effect; the effects of the respective band power and of $b_0$ thus appear somewhat opposed, and this combination was dropped. To discriminate between the two groups, it therefore seems reasonable to select the relative unfolding and the two band powers as variables. The information obtained with these outcomes is used to build an appropriate feature vector. Following Section 3 to find weights for the feature vector components, we assume the 95% interval as significant and assign the weight 1. Applying our considerations to the 90% solution ($\varepsilon_2 = 0.1$; alternative hypothesis: relative unfolding plus both band powers) reveals the weight 0.48. Hence two of the variables enter the feature vector with weight 1.00, while the third enters with weight 0.48 only. A discriminant analysis with this weighted feature vector reveals a correct classification of more than 81%. The result is displayed in Figure 4, where the outcome on the main axis of the discriminant function (essentially a rotation of the coordinate system) is shown. The discriminant analysis could not be done on all 15 persons of each group: due to failure to meet the EEG-recording quality requirements, one person of the control group and two persons of the patient group could not be evaluated, unfortunately.
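Because classical Fisher discriminant analysis is invariant under rescaling of individual features, partial weights cannot be demonstrated with a plain linear discriminant alone. The sketch below (illustrative only, with synthetic data and our own choice of a nearest-centroid rule on standardized, weight-scaled features) shows one way such weights can enter a classifier; it is not the analysis pipeline of the study:

```python
# Illustrative sketch: weighted feature vector fed into a simple classifier.
# Data, group sizes and feature values are placeholders; the weights follow the
# scheme described in the text (1.0, 1.0 and 0.48 for the three selected measures).
import numpy as np
from sklearn.preprocessing import StandardScaler, FunctionTransformer
from sklearn.neighbors import NearestCentroid
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_controls, n_patients = 14, 13                      # evaluable probands (see text)
X = np.vstack([rng.normal(0.0, 1.0, size=(n_controls, 3)),
               rng.normal(0.8, 1.0, size=(n_patients, 3))])   # three synthetic measures
y = np.array([0] * n_controls + [1] * n_patients)

weights = np.array([1.00, 1.00, 0.48])               # partial inclusion of measure 3
clf = make_pipeline(StandardScaler(),
                    FunctionTransformer(lambda Z: Z * weights),
                    NearestCentroid())
accuracy = cross_val_score(clf, X, y, cv=5).mean()
print(f"cross-validated accuracy on synthetic data: {accuracy:.1%}")
```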
We note that our F-ratio test statistics, with their ability to perform multivariate and incremental testing on fixed effects, allowed for this weighting of feature vectors. Furthermore, we may regard this result as reliable because the variable weighting has been done based on the emergence of fixed effects, and therefore does not optimize across random (or sample-specific) discriminant capacities. We proposed and derived a computational scheme which is based on a random splitting method and which allows separating fixed and random effects in multivariate variance analysis. First, the classical method is implemented only for the univariate problem in most standard statistical software packages, so the decomposition of the effect matrix $H$ into a fixed and a random effect requires additional matrix-algebra programming efforts anyway. This may turn out to be a more difficult numerical problem than the generation of streams of random numbers. Secondly, the normality assumptions inherent in the classical test also carry over to the multivariate test, namely normally distributed random deviations around the effect levels. If this is not true, the statistics to be used do not follow an F-distribution and may be unknown, thus preventing a classical significance test. The calculations presented here, in contrast, can be done completely analogously when it seems more appropriate to use a distribution other than the normal distribution. Because our test statistic is based on relative rather than absolute ratios, one might expect that an effect due to a particular distribution in the denominator will have a related effect in the numerator, which could make our test statistic more robust. Our approach also exceeds classical model selection because each measure enters with an appropriate weight between one and zero rather than in an all-or-none fashion. It has been recognized for a long time that the nonlinear measures are affected by noise and estimation errors when they are used for EEG analysis, which may then compromise their interpretation as chaos indicators (cf. e.g. [9, 31, 32] and the references concerning this matter therein). Despite this fact, these measures have proven their ability to display individual properties of the EEG not seen with linear measures (cf. e.g. [2, 3]), and this is confirmed here. As was shown with our EEG data, the above-mentioned properties of our methods allowed for a clear distinction (> 81%) between the two proband groups, controls versus schizophrenic patients, in a resting state with eyes closed. Earlier results indicating that these measures seem to differentiate between the two groups are confirmed, but such a clear result had not been found in previous studies.
Reducing a feature vector to an optimized dimensionality is a common problem in biomedical signal analysis. Such an analysis retrieves the characteristics of the time series and its associated measures (e.g., spectral power or fractal dimension) with an adequate methodology, followed by an appropriate statistical assessment of these measures. As a step towards such a statistical assessment, we present a data resampling approach. The techniques allow estimating σ²(F), that is, the variance of an F-value from variance analysis. Three test statistics are derived from the so-called F-ratio σ²(F)/F². A Bayesian formalism assigns weights to hypotheses and the corresponding measures considered (hypothesis weighting). This leads to complete, partial, or non-inclusion of these measures into an optimized feature vector. We thus distinguished the EEG of healthy probands from the EEG of patients diagnosed as schizophrenic. A reliable discrimination performance of 81%, based on the relative unfolding (Takens') and two spectral band powers, was found.
fluctuations of the order parameter near the critical temperature @xmath4 are much larger in high-@xmath4 superconductors than in classical low temperature superconductors . one of the reasons lies in the higher thermal energy @xmath5 which provides the excitations , and the other in a very short coherence lengths which occur in high-@xmath4 cuprate superconductors . with these properties , the region of critical fluctuations was estimated from the ginzburg criterion to be of the order of 1k , or more , around @xmath4 , which renders the critical region accessible to experimental investigations.@xcite farther above @xmath4 , one expects to observe the transition from critical to noninteracting gaussian fluctuations which are the lowest order fluctuation corrections to the mean field theory.@xcite the layered structure of high-@xmath4 superconductors requires some theoretical sophistication . one could treat these superconductors with various models from three - dimensional ( 3d ) anisotropic to coupled layers lawrence - doniach , or purely two - dimensional ( _ 2d _ ) ones . due to the temperature variation of the coherence lengths one could even expect a dimensional crossover in some systems . the fluctuation conductivity is altered by dimensionality in various models so that a detailed comparison of model calculations and experimental data could address the dimensionality problem . for the reasons stated above , the fluctuation conductivity in high-@xmath4 superconductors was studied experimentally by many authors . @xcite most of them used @xmath6 resistivity measurements . @xcite the reports were controversial in the conclusions about the dimensionality of the system , and the critical exponents . it has been shown that , in a wide temperature range above @xmath4 , the fluctuation conductivity did not follow any of the single exponent power laws predicted by scaling and mean - field theories.@xcite the data in the gaussian regime could be fitted by an expression derived within the ginzburg - landau ( gl ) theory with a short wavelength cutoff in the fluctuation spectrum . recently , silva et al.@xcite have proven that the gl approach with an appropriate choice of the cutoff parameter yields result which is identical to that of the microscopic aslamazov - larkin ( al ) approach with reduced excitations of the short wavelength fluctuations.@xcite it has been further shown that the detailed temperature dependence of the fluctuation conductivity was not universal , but sample dependent . in this respect , the gl approach has practical advantage since the cutoff parameter can be readily adjusted in fitting the experimental data . silva et al.@xcite could fit very well the data on a number of thin films in the gaussian region from @xmath4 + 1 k to @xmath4 + 25 k. when critical fluctuations are studied , it becomes essential to know accurately the value of @xmath4 . however , the determination of @xmath4 from @xmath6 resistivity measurements brings about some uncertainties . one should avoid the use of unjustifiable definitions of @xmath4 such as e. g. : ( i ) zero resistance temperature , ( ii ) midpoint of the transition , ( iii ) maximum of the derivative @xmath7 , ( iv ) intersection of the tangent to the transition curve with the temperature axis , etc . the correct value of the critical temperature can be determined as an additional fitting parameter in the analysis of the fluctuation conductivity . 
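As a minimal sketch of treating the critical temperature as an additional fitting parameter, one can fit a single power law with the transition temperature left free, as illustrated below. The power-law form, the synthetic data, and the bounds are assumptions for illustration only; the realistic analysis uses the cutoff expressions developed later in the text.

```python
import numpy as np
from scipy.optimize import curve_fit

def fluct_sigma(T, A, Tc, lam):
    """Single power-law paraconductivity A*((T-Tc)/Tc)**(-lam), valid for T > Tc."""
    return A * ((T - Tc) / Tc) ** (-lam)

# Synthetic "measured" fluctuation conductivity above the transition (placeholder data)
rng = np.random.default_rng(0)
T = np.linspace(90.5, 95.0, 40)
sigma = fluct_sigma(T, 2.0e3, 89.9, 0.5) * (1.0 + 0.02 * rng.normal(size=T.size))

# Tc is a free parameter alongside the amplitude and the exponent
popt, _ = curve_fit(fluct_sigma, T, sigma, p0=(1.0e3, 89.0, 0.5),
                    bounds=([0.0, 85.0, 0.1], [1.0e6, 90.4, 3.0]))
print("fitted Tc = %.2f K, critical exponent = %.2f" % (popt[1], popt[2]))
```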
usually one assumes that a well defined power law holds in a given narrow temperature range and then determine both , @xmath4 and the critical exponent from the selected segment.@xcite however , the experimental data usually show an almost continuous change of the slope so that the uncertainty in the determination of @xmath4 is an unsolved problem . besides , the effects of the cutoff have been neglected in the analysis of the data close to @xmath4 . even though the values of the fluctuation conductivity near @xmath4 are not much affected by the introduction of the cutoff , the slopes can be considerably changed,@xcite and the analysis may become uncertain . a number of microwave studies have been reported showing clear signs of fluctuations in both , the real and imaginary parts of the @xmath0 conductivity.@xcite the real part @xmath8 of the complex conductivity @xmath9 has a sharp peak at @xmath4 , which is not observed in e.g. nb as a representative of low temperature classical superconductors . @xcite the salient feature of the @xmath0 case is that the fluctuation conductivity does not diverge at @xmath4 because a finite frequency provides a limit to the observation of the critical slowing down near @xmath4 . the real part @xmath8 has a maximum at @xmath4 . it is also important to note that @xmath8 and @xmath10 have individually different temperature and frequency dependences , even though they result from the same underlying physics . testing a given theoretical model becomes more stringent when two curves have to be fitted with the same set of parameters . the expressions for the @xmath0 fluctuation conductivity in the gaussian regime have been deduced within the time dependent ginzburg - landau ( tdgl ) theory by schmidt.@xcite using general physical arguments , fisher , fisher , and huse @xcite provided a formulation for the scaling of the complex @xmath0 conductivity as @xmath11 where @xmath12 is the correlation length , @xmath13 is the dynamical critical exponent , @xmath14 is the dimensionality of the system , and @xmath15 are some complex scaling functions above and below @xmath4 . this form of the fluctuation conductivity was claimed to hold in both , the gaussian and critical regimes . dorsey @xcite has deduced the scaling functions in the gaussian regime above @xmath4 , and verified the previous results of schmidt.@xcite more recently , wickham and dorsey @xcite have shown that even in the critical regime , where the quartic term in the gl free energy plays a role , the scaling functions preserve the same form as in the gaussian regime . the above mentioned theoretical expressions of the @xmath0 fluctuation conductivity did not take into account the slow variation approximation which is required for the validity of the ginzburg - landau theory.@xcite it was noted long time ago that the summation over the fluctuation modes had to be truncated at a wavevector which corresponded roughly to the inverse of the intrinsic coherence length @xmath16.@xcite the improved treatment with a short wavelegth cutoff was applied in fluctuation diamagnetism,@xcite and @xmath6 paraconductivity far above @xmath4.@xcite this approach was also applied in @xmath6 fluctuation conductivity of high-@xmath4 superconductors where one encounters a large anisotropy.@xcite the introduction of the short wavelength cutoff was found to be essential in fitting the theoretical expressions to the experimental data . 
in view of the great potential of the microwave method described above , we find motivation to elaborate in this paper the improved theory of @xmath0 fluctuation conductivity including the short wavelength cutoff . we find that the resulting expressions can be written in the form of eq . ( [ 1 - 1 ] ) . however , the cutoff introduces a breakdown of the scaling property in the variable @xmath17 . also , we find that the phase @xmath1 of the complex conductivity ( @xmath18 ) evaluated at @xmath4 departs from the value @xmath3 when cutoff is introduced . values of @xmath1 larger than @xmath3 were observed experimentally,@xcite but were attributed to an unusually large dynamic critical exponent . also , deviation of the scaling in the variable @xmath17 was observed already at 2 k above @xmath4,@xcite but no analysis was made considering the short wavelength cutoff in the fluctuation spectrum . the present theory is developed for different dimensionalities which facilitates comparison with experimental data . frequency dependent conductivity can be calculated within the kubo formalism from the current correlation function . for the fluctuation conductivity one has to consider the current due to the fluctuations of the order parameter . the resulting expression for the real part of the conductivity is @xcite @xmath19 where the current is assumed to be in the @xmath20-direction . @xmath21 is the fourier component of the order parameter , and @xmath22 is the relaxation time of the @xmath23-th component . the relaxation time for the @xmath24 mode is given by @xmath25 where @xmath13 is the dynamic critical exponent . an alternative approach is to calculate the response of the system to an external field through the expectation value of the current operator averaged with respect to the noise.@xcite however , the introduction of the short wavelength cutoff in this approach leads only to selfconsistent implicit expressions.@xcite eq . ( [ 3 - 1 ] ) is obtained from the time dependent ginzburg - landau theory and represents the equivalent of the aslamazov - larkin fluctuation conductivity obtained from microscopic calculations . in the following , we present the results which take account of the short wavelength cutoff in this contribution to the @xmath26 fluctuation conductivity . the other contributions such as maki - thomson ( mt ) and one - electron density of states ( dos ) renormalization@xcite can not be treated within the time dependent ginzburg - landau theory but require microscopic calculations . it has been shown@xcite that mt anomalous contribution in high-@xmath4 superconductors is almost temperature independent while dos contribution is strongly temperature dependent , and contains a number of parameters which have to be determined through a complex fitting procedure in an experimental data analysis.@xcite since the three terms in the fluctuation conductivity are additive it is important to have the aslamazov - larkin term corrected for short wavelength cutoff which then allows to fit the mt and dos contributions properly from the rest of the total experimental fluctuation conductivity . the sum in eq . ( [ 3 - 1 ] ) can be evaluated by integration considering the appropriate dimensionality . in this section we discuss the simplest case of an isotropic _ 3d _ superconductor . the integration in @xmath27-space needs a cutoff since the order parameter can not vary appreciably over distances which are shorter than some minimum wavelength . 
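Before the detailed expressions are written down, the structure of such a cutoff calculation can be sketched numerically: each fluctuation mode responds relaxationally, and the mode sum is truncated at a temperature-dependent wavevector. The mode weight, the reduced relaxation time, and the cutoff mapping used below are stand-in assumptions chosen only to show the mechanics; the actual kernels and cutoff definition are those hidden behind the placeholders in the following paragraphs.

```python
import numpy as np
from scipy.integrate import quad

def sigma_fluct(w, eps, Lam):
    """Schematic 3D cutoff integral for the real and imaginary fluctuation response.

    q is measured in units of the inverse correlation length, w is a reduced
    frequency, w*tau_q = w/(1+q**2) mimics relaxational dynamics, and the cutoff
    q_max = sqrt(Lam/eps) is an assumed mapping, not the paper's exact relation."""
    q_max = np.sqrt(Lam / eps)

    def re_kernel(q):
        wt = w / (1.0 + q ** 2)
        return q ** 2 / (1.0 + q ** 2) ** 3 / (1.0 + wt ** 2)

    def im_kernel(q):
        wt = w / (1.0 + q ** 2)
        return q ** 2 / (1.0 + q ** 2) ** 3 * wt / (1.0 + wt ** 2)

    s1 = quad(re_kernel, 0.0, q_max)[0]   # real part: sum of Lorentzians
    s2 = quad(im_kernel, 0.0, q_max)[0]   # imaginary part: Kramers-Kronig partner per mode
    return s1, s2

# Closer to Tc (smaller eps) the cutoff recedes and the response grows,
# while the finite frequency keeps both parts finite.
for eps in (1e-1, 1e-2, 1e-3):
    print(eps, sigma_fluct(w=0.1, eps=eps, Lam=1.0))
```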
The cutoff in @xmath28 can be expressed as @xmath29, where @xmath30 is a dimensionless cutoff parameter. Obviously, @xmath31 would imply no cutoff in the integration, whereas for @xmath32 one obtains the usually assumed cutoff at @xmath33. In the 3D isotropic case, the same cutoff applies also to @xmath34 and @xmath35, so that for the 3D integration in @xmath27-space one has to set the cutoff limit for the modulus @xmath36. With the change of variable @xmath37 one obtains @xmath38 (Eq. (3-2)), where @xmath39 is the temperature dependent cutoff limit in @xmath40-space, and @xmath41 is a dimensionless variable which depends on frequency and temperature as independent experimental variables. For the @xmath6 case (@xmath42), and no cutoff (@xmath31), one finds from Eq. (3-2) the expression @xmath43, which reduces to the well known Aslamazov-Larkin result @xcite provided that relaxational dynamics is assumed (@xmath44) and @xmath45 is taken only in the Gaussian limit as @xmath46. However, with a finite cutoff parameter @xmath30 one obtains @xmath47 (Eq. (3-5)). This result has been obtained by Hopfengärtner et al. @xcite, except that they used only the Gaussian limit @xmath46 for the reduced correlation length @xmath45. Their analysis has shown that the cutoff plays no role exactly at @xmath4, since @xmath48 regardless of @xmath30. However, at any temperature above @xmath4 one gets a finite @xmath49, and the value of the conductivity is lowered with respect to the result given by Eq. (3-4). Their conclusion was that Gaussian fluctuations with no cutoff yield an overestimated fluctuation conductivity. In this paper we are primarily interested in the ac case. Before integrating Eq. (3-2) with @xmath50, we find the corresponding expression for the imaginary part @xmath10. We can apply Kramers-Kronig relations to each of the Fourier components in Eq. (3-1) and carry out the summation. This is equivalent to a calculation of the kernel @xmath51 for @xmath10 from the kernel @xmath52 used in Eq. (3-2), namely @xmath53. With the kernel @xmath54, the imaginary part of the fluctuation conductivity can be calculated for any cutoff parameter @xmath30 as @xmath55 (Eq. (3-7)). Finally, the complex fluctuation conductivity can be written in the form @xmath56 (Eq. (3-8)). The prefactor is equal to the @xmath6 result with no cutoff effect, as in Eq. (3-4). The functions @xmath57 are given by the expressions @xmath58 (Eq. (3-9)) and @xmath59 (Eq. (3-10)), where we used the shorthand notations @xmath60, @xmath61, and @xmath62. It can easily be verified that the @xmath57-functions given by Eqs. (3-9) and (3-10) have the proper limits. In the @xmath6 limit (@xmath63), one finds that @xmath64, and @xmath65 leads to the @xmath6 result of Eq. (3-5). One can also verify that the @xmath0 results obtained previously by Schmidt @xcite and Dorsey @xcite can be recovered from our Eqs. (3-9) and (3-10) in the limit @xmath31, i.e. when no cutoff is made. The effects of the cutoff are not trivial in the @xmath0 case. It is essential to examine those effects in detail as they have strong bearing on the analysis of the experimental data. The prefactor in Eq.
( [ 3 - 8 ] ) depends only on temperature while the cutoff parameter @xmath30 is found only in the @xmath66-functions . therefore , the effects of the cutoff can be studied through the @xmath67-functions alone . we can look at the temperature and frequency dependences of these functions with and without the cutoff . [ fig1](a ) shows a set of @xmath68 curves as functions of @xmath45 for three different frequencies . far above @xmath4 the relaxation time @xmath69 is so short that @xmath70 for any of the chosen frequencies . therefore the response of the system is like in the @xmath6 case . with no cutoff , @xmath68 saturates to unity ( dashed lines in fig . [ fig1](a ) ) . this limit is required in order that @xmath8 from eq . ( [ 3 - 8 ] ) becomes equal to @xmath71 in eq . ( [ 3 - 4 ] ) . if a cutoff with a finite @xmath30 is included , @xmath68 decays at higher temperatures ( solid lines in fig . [ fig1](a ) ) . the reduction of @xmath68 is more pronounced at smaller values of @xmath72 since the integration in the @xmath40-space is terminated at a lower value @xmath73 . at higher temperatures the conductivity @xmath8 at any frequency behaves asymptotically as @xmath74 given by eq . ( [ 3 - 5 ] ) . at temperatures closer to @xmath4 the relaxation time @xmath69 increases as @xmath75 with increasing correlation length according to eq . ( [ 3 - 1a ] ) which is usually termed critical slowing down . when @xmath76 for a given frequency , @xmath68 is sharply reduced and vanishes in the limit of @xmath4 . with the diverging prefactor in eq . ( [ 3 - 8 ] ) it can still yield a finite @xmath8 at @xmath4 . it may appear from fig . [ fig1](a ) that cutoff makes no effect when @xmath4 is approached , but we show later that an important feature still persists in @xmath8 . obviously , at lower operating frequencies one needs to approach @xmath4 closer so that the critical slowing down could reach the condition @xmath76 . one can see from fig . [ fig1](a ) that for frequencies below 1 ghz one would have to approach @xmath4 closer than 1 mk in order to probe the critical slowing down in fluctuations . the higher the frequency , the farther above @xmath4 is the temperature where the crossover @xmath76 occurs . this feature expresses the scaling property of the conductivity in frequency and temperature variables . however , the scaling property holds strictly only in the absence of the cutoff . namely , if one sets @xmath77 , the function @xmath68 depends only on the scaling variable @xmath78 . [ fig1](b ) shows the same set of curves as in fig . [ fig1](a ) , but plotted versus @xmath78 . the three dashed curves from fig . [ fig1](a ) coalesce into one dashed curve in fig . [ fig1](b ) , thus showing the scaling property in the absence of cutoff . however , the full lines representing the functions @xmath68 with a finite cutoff parameter @xmath30 do not scale with the variable @xmath78 . the reason is that the function @xmath68 then depends also on @xmath49 , which itself is not a function of @xmath78 . namely , the cutoff in the @xmath40-space depends on the properties of the sample , and on the temperature , but not on the frequency used in the experiment . hence , the cutoff brings about a breakdown of the scaling property in frequency and temperature . the effect is more pronounced at temperatures farther above @xmath4 where the cutoff is stronger . the properties of the function @xmath79 are shown in fig . [ fig2 ] for the same set of three measurement frequencies as in fig . [ fig1 ] . 
when plotted versus @xmath45 , the function @xmath79 exhibits a maximum at the point where the corresponding function @xmath68 shows the characteristic crossover due to @xmath80 as discussed above . when @xmath4 is approached , @xmath79 tends to zero . when @xmath79 is multiplied with the diverging prefactor in eq . ( [ 3 - 8 ] ) , one finds a finite @xmath10 at @xmath4 . far above @xmath4 , the function @xmath79 vanishes , regardless of the cutoff . this is consistent with the behavior of @xmath68 . namely , at high enough temperatures , @xmath68 acquires asymptotically the @xmath6 value , as seen in fig . obviously , the imaginary part of the conductivity must vanish when @xmath6 like limit is approached . the decrease of the function @xmath79 at higher temperatures is very rapid so that the effects of the cutoff are unnoticeable on the linear scale . only with the logarithmic scale used in the inset to fig . [ fig2](a ) , one observes that the cutoff effects are present also in @xmath79 , though by a very small amount . [ fig2](b ) shows the scaling property of @xmath79 with no cutoff and its breakdown when cutoff is included . we have noted above that both , @xmath68 and @xmath79 tend to zero when @xmath4 is approached . also , the effect of cutoff is seen to be small in that limit . yet , these functions are multiplied by the diverging prefactor in eq . ( [ 3 - 8 ] ) , and then may yield finite @xmath8 and @xmath10 . a careful analysis is needed in order to find the phase @xmath1 of the complex conductivity ( @xmath81 ) at @xmath4 . for a _ 3d _ isotropic superconductor , dorsey @xcite has predicted @xmath82 , i. e. @xmath83 at @xmath4 . his result was obtained with no cutoff and it remains to be seen if this property is preserved even when a finite cutoff is made . [ fig3](a ) shows @xmath68 and @xmath79 as functions of @xmath84 for 100 ghz frequency . the effects of cutoff on @xmath79 are noticeable only far above @xmath4 . closer to @xmath4 , the curves for @xmath79 calculated with , and without cutoff , are indistinguishable . in contrast , a finite cutoff reduces the values of @xmath68 even in the limit of @xmath4 . as a result , the final cutoff parameter @xmath30 yields a crossing of the curves for @xmath68 and @xmath79 at some temperature slightly above @xmath4 . it is better seen on an enlarged scale in fig . [ fig3](b ) . this is a surprising result which has bearing on the experimental observations . due to the cutoff , the condition @xmath85 ( @xmath86 ) is reached at a temperature slightly above @xmath4 . since both , @xmath68 and @xmath79 are multiplied with the same prefactor in eq . ( [ 3 - 8 ] ) , one finds that the crossing of @xmath87 and @xmath88 does not occur at the peak of @xmath89 , but at a slightly higher temperature . exactly at @xmath4 , @xmath10 is higher than @xmath8 since a finite cutoff parameter reduces @xmath8 , but makes no effect on @xmath10 . the observation that the cutoff brings about a reduction of @xmath8 at @xmath4 is worth further investigation since it can be measured experimentally . [ fig4](a ) shows the ratio @xmath90 ( equal to @xmath91 ) at temperatures approaching @xmath4 . with no cutoff ( dashed lines in fig . [ fig4](a ) ) , this ratio reaches unity regardless of the frequency used . a finite cutoff parameter ( @xmath92 in fig . [ fig4](a ) ) makes the ratio equal to unity at a temperature slightly above @xmath4 , and in the limit of @xmath4 the ratio saturates at some higher value . 
the saturation level is seen to be higher when a higher frequency is used . one can find analytical expansions of the @xmath67-functions in the limit of @xmath4 ( @xmath93 ) . the leading terms are @xmath94 where we used the notation @xmath95 @xmath96 the parameter @xmath97 depends on the frequency @xmath98 and the cutoff parameter @xmath30 @xmath99 both functions tend to zero in the limit of @xmath4 ( @xmath100 ) , but their ratio is finite and depends on the parameter @xmath97 . fig . [ fig4](b ) shows the plot of @xmath101 at @xmath4 as the function of @xmath97 . one can observe that for a given cutoff parameter @xmath30 , the ratio @xmath101 at @xmath4 increases at higher frequencies ( lower @xmath97 ) . the limits at @xmath4 in fig . [ fig4](a ) represent only three selected points on the curve for @xmath101 in fig . [ fig4](b ) . in a given experiment , the ratio @xmath102 at @xmath4 can be directly determined from the experimental data so that the corresponding value of the parameter @xmath97 can be found uniquely from the curve of @xmath103 in fig . [ fig4](b ) , and the cutoff parameter @xmath30 is obtained using eq . ( [ 3 - 18 ] ) . we should note that @xmath30 is a temperature independent parameter . it can be determined by the above procedure from the experimental data at @xmath4 , but it controls the cutoff at all temperatures . one may observe from eq . ( [ 3 - 14 ] ) that in the limit of @xmath4 the leading terms in the expansions of the @xmath67-functions behave as @xmath104 . taking into account the prefactor in eq . ( [ 3 - 8 ] ) one finds that @xmath8 and @xmath10 can have finite nonzero values at @xmath4 only if @xmath105 , i. e. for the purely relaxational dynamics . we have assumed this case in all the figures of this section . from the experimental data at @xmath4 one can determine also the parameter @xmath16 . using eq . ( [ 3 - 8 ] ) and the @xmath106-functions in eq . ( [ 3 - 14 ] ) , one obtains finite conductivities at @xmath4 @xmath107 where @xmath108 as explained above , from the ratio of the experimental values @xmath109 at @xmath4 one can determine the parameter @xmath97 , and the values of @xmath110 can then be calculated from eq . ( [ 3 - 20 ] ) . the remaining unknown parameter @xmath16 can be obtained using eq . ( [ 3 - 19 ] ) and either of the experimental values of @xmath111 or @xmath112 . it is also interesting to look at the plots of @xmath113 in fig . [ fig4](b ) . one can observe that @xmath114 saturates to unity already at small values of @xmath115 . on the other hand , @xmath116 is smaller than unity at any finite value of @xmath97 , in conformity with the ratio @xmath117 at @xmath4 . at this point it is useful to find the expected range of the values of @xmath97 encountered in the experiments . for the microwave frequencies in the range 1 - 100 ghz , with @xmath92 , and @xmath118 , one finds that @xmath97 is in the range 9 - 90 . according to fig . [ fig4](b ) , @xmath119 in this range . this means that the cutoff makes no effect on @xmath79 , and only @xmath68 is reduced , in conformity with the calculated curves shown in fig . most high-@xmath4 superconductors are anisotropic , some of them even having a high value of the anisotropy parameter @xmath120 . therefore , for practical purposes one needs adequate expressions for the @xmath0 fluctuation conductivity . the real part of the fluctuation conductivity in the ab - plane is obtained using the kubo formalism as in the isotropic case . one obtains @xmath121 } \,\,\ , . 
(Eq. (4-1)). Taking @xmath122 and substituting the variables @xmath123 and @xmath124, one can evaluate the sum in Eq. (4-1) by integration in the @xmath125-plane and along the @xmath126-axis, which gives @xmath127 (Eq. (4-2)), where we allowed a cutoff @xmath128 in the @xmath125-plane and a possibly different cutoff @xmath129 along the @xmath126-axis. The dimensionless parameter @xmath78 is the same as given by Eq. (3-3). We use the notation @xmath84 for both @xmath130 and @xmath131. We may briefly examine the @xmath6 case (@xmath132). With no cutoff one obtains @xmath133, which reduces to the Aslamazov-Larkin result for @xmath134 (relaxational dynamics) and @xmath84 taken in the Gaussian limit. Note that the fluctuation conductivity in the @xmath135-plane depends on @xmath136. Finite cutoff parameters reduce the fluctuation conductivity when the temperature is increased above @xmath4, yielding @xmath137. This expression has not been reported in the previous literature. The analysis of the @xmath6 fluctuation conductivity is difficult because of the number of unknown parameters. The @xmath0 fluctuation conductivity can be obtained from the integral in Eq. (4-2) for the real part, while the imaginary part is obtained by the procedure analogous to that of the isotropic case described in the preceding section, giving @xmath138 (Eq. (4-4a)). The full expression can again be written in the form @xmath139 (Eq. (4-5)). The @xmath67-functions for the 3D anisotropic case are found to be @xmath140 and @xmath141, where we used the shorthand notations for @xmath142 as in Eq. (3-11), together with the additional shorthand quantities @xmath143, @xmath144 (Eq. (4-9)), and @xmath145-@xmath151. The effects of cutoff are similar to those described at length in the preceding section for the simpler case of 3D isotropic superconductors. In this section we discuss only the modifications in the limit @xmath152, where the relevant parameters can be determined. The @xmath106-functions can be expanded in the limit of @xmath4 (@xmath153); the leading terms are @xmath154 and @xmath155, both carrying a factor $1/\sqrt{\omega}$, where we used the shorthand notations @xmath156-@xmath163 (Eq. (4-25)). The cutoff parameters appear in @xmath164 and @xmath165. We note that in the anisotropic case the @xmath67-functions also behave as @xmath166 when @xmath167. As already discussed in the previous section, this implies that finite nonzero @xmath168 and @xmath169 can be obtained only for @xmath134 (relaxational model). Since the available experimental data in anisotropic high-@xmath4 superconductors @xcite show finite nonzero @xmath168 and @xmath169, we can adopt @xmath44 in the remainder of this section. In analogy to the 3D isotropic case described in the preceding section, one may define the functions @xmath170 so that the conductivities at @xmath4 are given by @xmath171. The ratio of the experimental values @xmath172 at @xmath4 does not define uniquely the cutoff parameters @xmath173 and @xmath174.
It puts, however, a constraint on their choice. Fig. [fig5](a) shows the plot of @xmath175, given by Eq. (4-28), as a function of the two variables @xmath176 and @xmath177. It is evident that a fixed value of @xmath178 defines a simple curve of the possible choices of (@xmath176, @xmath177). Fig. [fig5](b) shows a selection of such curves for @xmath179 = 1.05, 1.1, 1.15, and 1.2. The dashed line marks the condition @xmath180 (@xmath181). Experimentally, one has to probe the possible choices for (@xmath176, @xmath177) and look at the fits of the theoretical curves to the experimental data at @xmath182. The parameter @xmath136 in Eq. (4-5) can be obtained once the choice for (@xmath176, @xmath177) is made. Note that in practical applications of the above theory one needs measurements in which the microwave current flows only in the ab-plane. Particularly suitable for this purpose are measurements in which the superconducting sample is placed in the antinode of the microwave electric field @xmath183 in the cavity @xcite. A superconducting transition does not occur in a strictly 2D system. However, if the sample is a very thin film, so that its thickness is much smaller than the correlation length, the fluctuations will be restricted within the film thickness @xmath184 in one direction and develop freely only in the plane of the film. Using the formalism described in the preceding sections, we find that the fluctuation conductivity is given by @xmath185 (Eq. (5-1)), @xmath186 (Eq. (5-2)), and @xmath187 (Eq. (5-3)), where @xmath188 and @xmath189. The prefactor is the Aslamazov-Larkin result for the 2D case with no cutoff, with the Gaussian form @xmath190 replaced by the more general expression @xmath191. The @xmath106-functions are given by @xmath192 and @xmath193 (Eq. (5-6)). The summation over @xmath194 in Eq. (5-3) has to be carried out until the factor @xmath195 reaches some cutoff value @xmath30, which is of the order of unity. If the film thickness is large (@xmath196), one has to sum up to a high @xmath197-value. In such cases the summation is well approximated by an integration, and one retrieves the 3D case of the preceding section. The 2D character is better displayed when the film thickness is comparable to @xmath16; then only a few terms have to be taken into account. In the extreme case of @xmath198, only the @xmath199 term is found below the cutoff limit. The zero frequency limit (@xmath200) yields @xmath201. The @xmath199 term yields the previous result of Hopfengärtner et al. @xcite and Gauzzi et al. @xcite. In the limit of @xmath4 (@xmath93) one obtains @xmath202 (Eq. (5-8)) and @xmath203 (Eq. (5-9)), where we used the notations @xmath204 and @xmath205. One may observe that for @xmath199 the real part of the conductivity is finite, but the imaginary part diverges, due to the logarithmic term in Eq. (5-9). This is an unphysical result.
it may indicate that the @xmath199 term is not physically acceptable , or that the _ 2d _ model should not be applied exactly at @xmath4 . the relevance of the theoretical expressions derived in the preceding sections can be demonstrated by comparison of the calculated and experimentally measured @xmath206 fluctuation conductivity . as an example we present here an analysis of the data in @xmath207 thin film . the experimental results of the complex conductivity measured at 9.5 ghz are shown in fig . [ fig6](a ) . the main features are the same as reported previously in single crystals of high-@xmath4 superconductors.@xcite we have to note that in our measurement the thin film was positioned in the center of an elliptical microwave cavity resonating in @xmath208te@xmath209 mode , and oriented in such a way that the electric field @xmath183 was in the _ ab_-plane . thus the in - plane conductivity was measured and the application of the theoretical expressions of the preceding sections is appropriate . other experimental details have been reported previously.@xcite in this section we are interested in the fluctuation conductivity near @xmath4 which is shown on an enlarged scale in fig . [ fig6](b ) . the real part of the conductivity has a maximum when the coherence length diverges . since the critical temperature of a phase transition is characterized by the divergence of the coherence length , we use the maximum of @xmath8 in fig . [ fig6](b ) to determine @xmath210 k. one can also observe in fig . [ fig6](b ) that the imaginary part of the conductivity crosses the real part at a temperature slightly above @xmath4 . this is a direct experimental evidence of the short wavelength cutoff as discussed in section [ sect2 ] . the experimental values of @xmath8 and @xmath10 at @xmath4 can be used in the evaluation of the parameters which enter the theoretical expressions of the preceding sections . [ fig7](a ) shows the experimental data above @xmath4 plotted against the reduced temperature @xmath211 . we can analyze this data first by the theoretical expressions which have no cutoff on the fluctuation wavevector . if one sets @xmath212 for the _ 3d _ anisotropic case , it is straightforward to evaluate @xmath136 using eq . ( [ 4 - 29 ] ) and the experimental value of @xmath10 at @xmath4 . we have obtained @xmath213 nm in @xmath207 thin film . once this parameter is determined , the fluctuation conductivity at all temperatures above @xmath4 follows from eq . ( [ 4 - 5 ] ) . the real and imaginary parts of the @xmath214 fluctuation conductivity have to be mutually consistent . this can be exploited in the data analysis . we insert the experimental values of @xmath10 into the imaginary part of eq . ( [ 4 - 5 ] ) , and solve numerically for @xmath215 . then we exploit these same values of the reduced correlation length in the real part of eq . ( [ 4 - 5 ] ) . the calculated @xmath8 is shown by the dotted line in fig . [ fig7](a ) . the calculated line lies far from the experimental points . note in particular that the calculated @xmath8 meets @xmath10 at @xmath4 when no cutoff is included ( cf . section [ sect2 ] ) . besides , the shape of the calculated @xmath8 differs from that of the experimental one . we may conclude that with no cutoff on the fluctuation wavevector the theoretical expression does not describe properly the experimental fluctuation conductivity . a finite cutoff on the fluctuation wavevector improves greatly the agreement of the theory and experiment . from fig . 
[ fig6](b ) we can evaluate the ratio @xmath216 at @xmath4 . this value yields a constraint on the choices of @xmath173 and @xmath174 as described in section [ sect3 ] and fig . the actual choices are presented in the inset of fig . [ fig7](b ) . for a given choice ( @xmath173 , @xmath174 ) from this constraining line , one has to determine first @xmath136 using eq . ( [ 4 - 29 ] ) and the experimental value of @xmath10 at @xmath4 . then , the temperature dependence of the reduced correlation length @xmath215 is evaluated numerically from the imaginary part of eq . ( [ 4 - 5 ] ) with the selected pair ( @xmath173 , @xmath174 ) and the experimental values of @xmath10 . the obtained values of @xmath215 are finally used to calculate @xmath8 from the real part of eq . ( [ 4 - 5 ] ) . the results vary with the possible choices of the pairs ( @xmath173 , @xmath174 ) from inset of fig . [ fig7](b ) . the best fit of the calculated @xmath8 to the experimental points is shown by the solid line in fig . [ fig7](a ) . it is obtained with the choice ( @xmath217 , @xmath218 ) . it is physically reasonable . with @xmath173 being of the order of unity , the minimum wavelength of the superconducting fluctuations in the _ ab_-plane is given by @xmath219 , which is much larger than the atomic size and could be accepted as a mesoscopic quantity . in contrast , the value of @xmath220 is below the atomic size and , hence , could not be physically accepted for the lower limit of the fluctuation wavelength along the c - axis . therefore one needs a value of @xmath221 so that the minimum wavelength of the superconducting fluctuations @xmath222 along the c - axis becomes also an acceptable mesoscopic quantity . we have tested also a number of other choices of the cutoff parameters . by shifting the choice of the parameters along the constraining curve in the inset of fig . [ fig7]b , one degrades the fit of the calculated @xmath8 to the experimental points . the dashed line in fig . [ fig7](a ) is the calculated @xmath8 using the choice of equal cutoff parameters ( @xmath223 ) permitted by the constraint in the inset of fig . [ fig7](b ) . the fit to the experimental points is seen to be much worse than that of the full line . for the sake of completeness , we show also the result for a choice of the cutoff parameters on the other branch of the constraining curve in the inset of fig . [ fig7](b ) . if one takes ( @xmath224 , @xmath225 ) the calculated @xmath8 is as shown by the dashed - dotted line in fig . [ fig7](a ) . the fit is unsatisfactory . moreover , this choice has to be refuted on the grounds of physically unacceptable minimum wavelength of the fluctuations along the c - axis . also shown in fig . [ fig7](a ) by the short - dotted line is the result obtained by the isotropic _ 3d _ expression in eq . ( [ 3 - 8 ] ) . in this case the parameters @xmath226 nm and @xmath227 are obtained straightforwardly from eq . ( [ 3 - 19 ] ) and the experimental values of @xmath8 and @xmath10 at @xmath4 . the fit in fig . [ fig7](a ) is obviously not good . it is also seen that the expressions for the anisotropic _ 3d _ case always yield curves which are different from that of the isotropic _ 3d _ case . the complexity of the anisotropic _ 3d _ expressions elaborated in section [ sect3 ] is not futile . indeed , we find that these expressions must be used when analyzing an anisotropic superconductor . fig . [ fig7](b ) shows an enlarged view of the high temperature part where the same curves as in fig . 
[ fig7](a ) are better distinguished . we have analyzed also the _ 2d _ expressions of section [ sect4 ] . fig . [ fig8](a ) shows again the same experimental data as in fig . [ fig7](a ) , but fitted with @xmath228 term of the _ 2d _ expansion in eq . ( [ 5 - 3 ] ) . the parameter @xmath184 has been chosen so as to optimize the fit to the experimental values of @xmath8 . the resulting curve in fig . [ fig8](a ) was obtained with @xmath229 nm . the _ 2d _ results are not so sensitive to the fluctuation wavevector cutoff as those of the _ 3d _ case . the curves obtained with no cutoff ( @xmath230 ) and with @xmath231 are practically indistinguishable in fig . [ fig8](a ) . one may conclude that closer to @xmath4 the bscco superconductor clearly does not behave as a _ 2d _ system . however , at higher temperatures both , _ 2d _ and _ 3d _ expressions yield almost equally good fits to the experimental values , as seen in fig . [ fig8](b ) . thus the dimensionality of the fluctuations at higher temperatures can not be resolved from the @xmath26 fluctuation conductivity . finally , we remark that the above analysis could explain very well the experimental @xmath26 fluctuation conductivity above @xmath4 in the bscco thin film using the aslamazov - larkin type expressions with wavevector cutoff as deduced in the preceding sections of this paper . the other contributions such as maki - thomson and one electron density of states , mentioned in section [ sect2 ] , are not necessary over most of the temperature range covered in the experiment . this is in accord with the recent microscopic calculation@xcite proving that these contributions may cancel in the ultraclean case of nonlocal electrodynamics . however , they may play a role at high enough temperatures where the above calculated curves depart from the experimental data . their analysis is beyond the scope of the present paper . we have presented full analytical expressions for the @xmath0 fluctuation conductivity in _ isotropic , _ 3d _ anisotropic , and _ 2d _ superconductors . the effects of the short wavelength cutoff in the fluctuation spectrum on the @xmath6 and @xmath0 conductivities were discussed in detail . the short wavelength cutoff brings about a breakdown of the scaling property in frequency and temperature . it also has a small , but experimentally very important effect on @xmath8 . due to a finite cutoff parameter @xmath30 , the value of @xmath8 at @xmath4 is lower than that of @xmath10 . in the simpler _ isotropic case , this observation can be used to determine @xmath30 directly from the experimental data at @xmath4 . in the _ 3d _ anisotropic case , one obtains a constraint on the choices of ( @xmath173 , @xmath174 ) . the useful feature of @xmath0 fluctuation conductivity measurements is that @xmath4 can be determined directly from the experimental data . moreover , the expressions derived in this paper enable the determination of @xmath16 ( _ 3d _ isotropic ) , or @xmath232 ( _ 3d _ anisotropic ) from the experimental values @xmath233 . thus , we establish that the analysis of @xmath0 fluctuation conductivity requires no free fit parameters in the _ 3d _ isotropic case , and only one ( @xmath173 , @xmath174 ) in the _ 3d _ anisotropic case . we have shown that the anisotropic _ 3d _ expressions with an appropriate cutoff of the fluctuation wavevector can account for the experimental fluctuation conductivity in a bscco thin film within a large temperature range above @xmath4 . 
The 2D expression is less sensitive to the cutoff and was found to match the experimental data only at higher temperatures.
N. Peligrad and M. Mehring acknowledge support by the Deutsche Forschungsgemeinschaft (DFG), project no. ME362/14-2. A. Dulčić acknowledges support from the Croatian Ministry of Science and the DLR Stiftung (project no. KRO00498).
Figure captions:
Fig. [fig1]: @xmath68-curves for the 3D isotropic case calculated from Eq. (3-9) for a finite cutoff parameter @xmath234 (full lines) and for @xmath235 (dashed lines). The variable is @xmath45 in (a) and @xmath78 in (b). The curves are labelled by the frequency @xmath236 used in the calculations.
Fig. [fig2]: @xmath79-curves for the 3D isotropic case calculated from Eq. (3-10) for the same parameters as in Fig. [fig1]. The effects of a finite cutoff are small and can be seen only on the logarithmic scales used in the insets.
Fig. [fig3]: (a) @xmath68 and @xmath79 of the 3D isotropic case calculated with the choice @xmath237 GHz in the limit @xmath238. The dashed lines are the @xmath239-curves calculated with no cutoff (@xmath235), and the full lines include a finite cutoff (@xmath234). (b) Enlarged section which shows the crossing of @xmath68 and @xmath79 (@xmath240, see text) at a temperature slightly above @xmath4. The two dashed lines are indistinguishable in this temperature range.
Fig. [fig4]: (a) The conductivity ratio (equal to @xmath241) for the 3D isotropic case at temperatures approaching @xmath4. With no cutoff (dashed lines) the ratio tends to unity for all frequencies. With a finite cutoff (@xmath234) the ratio equals unity at a temperature slightly above @xmath4, dependent on the frequency. In the limit @xmath238 the ratio saturates to a frequency dependent value larger than unity. (b) The ratios @xmath242 and @xmath243 given by Eq. (3-20). The variable @xmath97 is defined by Eq. (3-18).
Fig. [fig5]: (a) @xmath175 for the 3D anisotropic case as a function of @xmath176 and @xmath177 (cf. Eq. (4-28)). (b) Selection of curves for fixed ratios of @xmath244 indicated by numbers. The dashed line marks the condition @xmath245 (@xmath246).
Fig. [fig7]: (a) Experimental data above @xmath4 plotted versus the reduced temperature @xmath247. The various lines are the conductivities calculated in the 3D cases as described in the text. (b) Enlarged view of the high temperature part of the same curves as in (a). The constraining curve for the choices of the cutoff parameters @xmath173 and @xmath174, resulting from the experimental ratio @xmath248 at @xmath4, is shown in the inset.
the short wavelength cutoff has been introduced in the calculation of @xmath0 fluctuation conductivity of superconductors . it is shown that a finite cutoff leads to a breakdown of the scaling property in frequency and temperature . also , it increases the phase @xmath1 of the complex conductivity ( @xmath2 ) beyond @xmath3 at @xmath4 . detailed expressions containing all essential parameters are derived for 3d isotropic and anisotropic fluctuation conductivity . in the _ 2d _ case we obtain individual expressions for the fluctuation conductivity for each term in the sum over discrete wavevectors perpendicular to the film plane . a comparison of the theory to the experimental microwave fluctuation conductivity is provided .
Anaesthesia for shoulder arthroscopic surgeries is challenging due to factors such as difficulty in patient positioning, remote access to the airway, and complications specific to the procedure. Bones bleed at normotension, and the shoulder joint is a highly vascular area. An exceptional problem faced during arthroscopic surgery of the shoulder is the inability to use a tourniquet to control bleeding, thereby necessitating manoeuvres like inter-scalene block, adrenaline in the saline irrigation, or hypotensive anaesthesia to create a bloodless field for adequate visualisation of the joint. The literature does mention the use of inhaled anaesthetic techniques combined with pharmacological agents (isoflurane and a β-blocker, labetalol) to achieve target blood pressures during such surgeries. There is little information on the use of intravenous propofol as the primary anaesthetic agent for shoulder arthroscopy. Souron and colleagues reported the use of target control infusion (TCI) of propofol as a sedative during shoulder surgery under inter-scalene brachial plexus block. TCI propofol and single-agent anaesthesia with sevoflurane have been compared by different authors for spine surgeries, but in patients who received no regional anaesthesia. A Medline search did not show any studies comparing the use of TCI propofol versus conventional inhalational techniques for shoulder arthroscopic surgeries in patients who received a concomitant inter-scalene brachial plexus block. Our study aimed to compare the efficacy (in terms of achieving the required haemodynamic status) and convenience (with respect to manipulations required by the anaesthesiologist to maintain blood pressure, or by the surgeon to change the operative environment) of TCI propofol and the inhalational agent sevoflurane in patients undergoing shoulder arthroscopic surgery after preliminary regional inter-scalene blockade. Thirty-seven consecutive patients undergoing shoulder arthroscopic surgery over a thirteen-month period (November 2010 - December 2011), anaesthetised by a single anaesthesiologist and operated upon by a single surgeon, were considered for the study. A minimum of sixteen patients was required in each group in order to detect a mean difference of 10 mm Hg in blood pressure (power 80%, α 0.05, β 0.20, with a standard deviation of 10 in each group). Selection of the patients was done as shown in the CONSORT chart [Figure 1] (TCI - target control infusion; ISB - inter-scalene brachial plexus block; N - number of patients). Patients having American Society of Anesthesiologists (ASA) status 1 and 2 were included in our study. Since a preliminary inter-scalene block formed an essential part of the anaesthetic procedure in all subjects, 3 patients in whom the regional block was considered less than optimally effective were excluded from the study. Incidentally, one of these 3 had severe local pain at the operative site soon after termination of anaesthesia, confirming the ineffectiveness of the block and unsuitability for inclusion in the study. Seventeen of the thirty-four patients who qualified for inclusion underwent anaesthesia using TCI propofol, and an equal number was subjected to inhalational anaesthesia with sevoflurane. After pre-operative assessment and recording of baseline vitals, patients received an oral (tab.) premedication. In the operating room, patients were administered intravenous (IV) inj. fentanyl 2 μg/kg.
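The sample-size statement above (sixteen patients per group for a 10 mm Hg difference with SD 10, α 0.05, power 80%) can be cross-checked with a standard two-sample power calculation; this is only an illustrative verification sketch, not the authors' computation.

```python
# Cross-check of the stated sample size: effect size = mean difference / SD = 10/10 = 1.0
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(effect_size=10.0 / 10.0,
                                          alpha=0.05, power=0.80,
                                          alternative='two-sided')
print(n_per_group)   # about 17 per group with the t-test solver; the simpler
                     # normal-approximation formula gives about 16, matching the text
```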
Inter-scalene block (modified Winnie's) was performed using a nerve locator, in the supine position, with a local anaesthetic mixture containing 6 ml of lignocaine 2%, 35 ml of bupivacaine 0.25% and hyaluronidase 1500 international units. The effectiveness of the block was confirmed by abolition of sensation (pinprick) over the C4-C7 dermatomes and/or free and painless (passive) abduction in patients with painful shoulders. After confirming the effectiveness of the regional anaesthesia, anaesthesia was induced either with inj. propofol 2 mg/kg (sevoflurane group) or with the TCI pump device (Evadrop TCI syringe pump, Schiller, UK) (TCI propofol group). The patient's demographic data were entered into the TCI unit and, to facilitate induction, a target propofol plasma concentration of 8 μg/ml was used. Tracheal intubation was facilitated using IV inj. vecuronium or rocuronium in 2 × ED95 doses, and ventilation was aimed at achieving normocarbia. Three-lead electrocardiogram, SpO2, non-invasive blood pressure (NIBP), end-tidal CO2 and inhalational agent monitoring were done during the entire procedure. NIBP recording was done at 3-minute intervals on the non-operative upper arm. With the patient in the lateral decubitus position, anaesthesia was maintained using either TCI propofol (target plasma concentration of 3 μg/ml, in the TCI propofol group) or 1.2-1.5 minimum alveolar concentration (MAC) of sevoflurane (Datum vaporiser, Meditec England, Abbot Ltd., in the sevoflurane group). Age-related iso-MAC sevoflurane concentrations were used to achieve the desired expired concentrations (0.8% minimum to 2% maximum) in the sevoflurane group. Muscle paralysis was achieved with bolus doses of inj. vecuronium, and controlled ventilation was carried out throughout the procedure. Systolic blood pressure (SBP), diastolic blood pressure (DBP), mean blood pressure (MBP) and heart rate were recorded every 3 minutes during the entire procedure. The surgical period was considered to be from the time of insertion of the arthroscope to its removal. Prior to insertion of the arthroscope, efforts were made to attain a target systolic pressure 20-25% below the baseline SBP using the following methods. Method A (anaesthetic depth increase) included administering additional doses of fentanyl (1-2 μg/kg) with or without propofol (1 mg/kg), or temporarily (for approximately 10 minutes) increasing the concentration of sevoflurane (1.5 total MAC, 3.5% maximum). Method B included pharmacological intervention using either a β-blocker (metoprolol 3-5 mg or esmolol 20 mg IV) or IV nitroglycerine boluses (25-50 μg). If the target SBP was not achieved with the above methods within the next 10-12 minutes, an infusion of nitroglycerine or sodium nitroprusside would be considered (Method C). Any adverse events like persistent hypotension (> 2 readings of blood pressure lower than target, or MAP < 60 mm Hg) or severe bradycardia (heart rate < 40 beats/minute) would be treated accordingly (saline bolus, inj. ...). The initial pump pressure was 50 mm Hg, and a flow of 50% was maintained throughout the procedure whenever visualisation was adequate. A red-out period was considered to occur when joint space visualisation was impossible owing to excessive bleeding from bone or soft tissue. Such red-outs were recorded only during the ongoing surgical process and not during insertion of the scope or with the pump in the off mode.
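The "age-related iso-MAC" targets mentioned above are usually read off published charts. As a rough illustration only (not the authors' chart), one common way such age adjustment is expressed is Mapleson's regression of MAC against age; the nominal sevoflurane MAC40 of about 2 vol% below is an assumption, and iso-MAC charts additionally account for co-administered agents, which lowers the required volatile fraction further.

```python
# Illustrative age adjustment of sevoflurane MAC using Mapleson's regression
# MAC(age) = MAC40 * 10**(-0.00269 * (age - 40)); MAC40 ~ 2 vol% is assumed here.
def age_adjusted_mac(age_years, mac40=2.0):
    return mac40 * 10 ** (-0.00269 * (age_years - 40))

for age in (30, 50, 70):
    print(f"age {age}: 1 MAC of sevoflurane ~ {age_adjusted_mac(age):.2f} vol%")
```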
The amount of saline irrigation fluid used (in litres) and the duration of surgery were noted. The measurable Hb of the saline was quantified in both groups. At the end of the procedure, the visual score grading the visibility of the joint space during surgery was documented by the surgeon and the anaesthesiologist separately (excellent = 4, good = 3, adequate = 2, poor = 1). Extreme care of the patient was taken during the procedure to avoid hypothermia, urinary bladder distension, position-related injuries, etc. Data are presented as mean with standard deviation (SD) and 95% confidence intervals (CI). Student's t test was used for comparison of haemodynamic data, and the chi-square test was used for visual score comparison. The demographic characteristics, with sex distribution, age, weight, duration of surgery and the variety of surgical procedures, are detailed in Table 1 (values are mean ± SD with 95% confidence intervals). There were no significant differences between the groups with respect to patient characteristics, type of surgery performed or baseline vitals [Table 1]. The primary variable was the blood pressure, and the highest mean values of SBP, DBP and MBP of the propofol group were significantly lower than those of the sevoflurane group (p = 0.002, 0.038 and 0.006, respectively), though the lowest achieved mean values of DBP and MBP were not [Table 2: haemodynamic data for the duration of the study; values are mean ± SD with 95% confidence intervals]. Also, the mean of means of SBP as well as of MBP was significantly lower in the propofol group (p = 0.009). However, there were no differences in the highest or lowest mean heart rates achieved, or in the mean of mean heart rates recorded, between the groups. A higher number of patients in the sevoflurane group (65% versus 18% of the propofol group) required either anaesthetic intervention, pharmacological manipulation or both to achieve the desired blood pressure. Accounting for the higher number of hypotensive episodes needing intervention in the propofol group, fentanyl consumption was higher in the sevoflurane group, though the difference was not statistically significant. The volume of saline irrigant consumed was significantly higher in the sevoflurane group (p = 0.02). Hb in the saline irrigation return was measurable in a higher number of patients in the sevoflurane group (81% of the sevoflurane group, maximum 0.4 gm/dl, versus 41% of the propofol group, maximum 0.1 gm/dl). Better visual scores by both the surgeon and the anaesthesiologist were recorded in the propofol group (p < 0.001). One (6%) incident of red-out was observed in a sevoflurane group patient. Intra-operatively, the surgeon requested an increase of pump pressure and flow in 35% of the sevoflurane group patients for better visualisation of the joint space (none in the propofol group) [Table 4: anaesthetic/surgical factors; values absolute or mean ± SD with 95% confidence intervals]. No immediate perioperative surgical or anaesthetic complications were noted. Reduction of SBP or MBP (20-25% of baseline in a normotensive individual) decreases bleeding from joint bones and improves visualisation during shoulder arthroscopic surgery. Abolition of painful stimuli by employing the inter-scalene block supplements the hypotensive effect of sufficiently deep anaesthesia.
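For reference, the two group comparisons named in the statistics paragraph above (Student's t test for the haemodynamic endpoints, chi-square for the ordinal visual scores) reduce to standard library calls; all numbers below are placeholders with 17 patients per group, not the study data.

```python
import numpy as np
from scipy import stats

# Placeholder per-patient mean SBP values for the two groups (mmHg)
sbp_propofol = np.array([96, 101, 99, 104, 98, 102, 100, 97, 103,
                         99, 101, 98, 100, 102, 97, 99, 100])
sbp_sevoflurane = np.array([108, 112, 105, 110, 115, 109, 111, 107, 113,
                            106, 110, 112, 108, 109, 114, 107, 111])

t_stat, p_val = stats.ttest_ind(sbp_propofol, sbp_sevoflurane)
print(f"Student t test: t = {t_stat:.2f}, p = {p_val:.4f}")

# Placeholder visual-score counts (scores 1..4) cross-tabulated by group
score_table = np.array([[0, 1, 6, 10],    # propofol group
                        [2, 7, 6, 2]])    # sevoflurane group
chi2, p_chi, dof, _ = stats.chi2_contingency(score_table)
print(f"chi-square test: chi2 = {chi2:.2f}, df = {dof}, p = {p_chi:.4f}")
```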
all our patients had complete inter - scalene block , were deeply anaesthetised and were maintained using intravenous propofol or sevoflurane at adequate but pre - defined concentrations . total intravenous anaesthesia constitutes nearly 25% of all anaesthetic administrations in today 's world and tci propofol is well described for a variety of surgeries or procedures at different doses . vincent and colleagues used tci propofol for sedation in patients undergoing shoulder arthroscopic surgery under regional anaesthesia at target plasma concentrations of 0.8 - 0.9 µg / ml . when tci propofol is used alone , a plasma concentration of 4 - 6 µg / ml is required to maintain the necessary depth of anaesthesia in asa 1 patients . the concomitant use of n2o reduces this requirement to as low as 2.5 µg / ml . a combination of 67% nitrous oxide and fentanyl reduces the ec50 ( the effective concentration at which 50% of patients do not respond to a painful stimulus ) by approximately 30% , akin to iso - mac values for inhalational anaesthetics . considering these factors , a target plasma concentration of 3 µg / ml was used in our patients . propofol used alone or in combination with fentanyl demonstrates a profound hypotensive effect in patients with abolished pain signals . during maintenance of anaesthesia , propofol infusion at 100 µg / kg / min results in a significant decrease in systemic vascular resistance ( svr ) without altering the cardiac or the stroke index . but infusion of lower doses of propofol ( 54 - 100 µg / kg / min ) with concomitant use of narcotics and n2o selectively reduces cardiac output and stroke volume without altering svr . these cardiovascular effects of propofol greatly favored the achievement of target blood pressures in our patients . a synergistic role of the regional anaesthetic drugs in potentiating these effects can not , however , be ruled out . the higher incidence of hypotensive episodes needing intervention in the propofol group seems to be a direct reflection of its more profound hypotensive effects . like propofol , sevoflurane too depresses the intrinsic inotropic state in isolated myocardium and this action plays an important role in determining the haemodynamic effects of this volatile agent in humans , with or without heart disease . in animals , sevoflurane decreases myocardial contractile function to approximately 40 - 45% of control values only at 1.75 mac , in the presence or absence of autonomic nervous system tone . we have used iso - mac values of sevoflurane , and at 1 - 1.5 mac the myocardial depression induced is lower but is associated with a definite reduction in mbp . though it is arguable that the use of iso - mac values of sevoflurane does not induce hypotension equivalent to that of 1.75 mac , efforts were made to deepen the anaesthesia using 1.5 mac sevoflurane prior to a pharmacological intervention in our patients . a higher number of anaesthetic interventions to achieve target sbp , as well as the larger sds in the sevoflurane group , possibly indicate an inconsistency and non - uniformity in the cardiovascular actions of the inhalational agent as compared to propofol . interestingly , heart rates did not vary between the groups , though mean heart rates remained much lower than baseline values . these effects have previously been explained by the action of propofol on the baroreflex and cardiac parasympathetic tone .
but for the significantly larger number of pharmacological interventions in the sevoflurane group , we believe , heart rates would have been higher in this group as compared to the group receiving propofol . lower visual scores , linked to bleeding within the joint space during arthroscopy , are best correlated with hb measurement of the saline irrigation return . considering the massive dilutional factor , importance accorded to the actual quantification of hb levels is questionable . estimation of bleeding within the joint space during arthroscopy has been attempted by various authors , using a product of irrigant fluid and hb measurements . measurable hb , which we considered therefore as a relevant factor signifying bleeding into the joint space correlated well with the comparative fall in blood pressures observed between the two groups . morrison and colleagues , explain the relationship between the blood pressure and visual clarity during shoulder arthroscopy . extravasation of fluid into the periarticular tissues can occur , giving rise to what is described as the vicious cyclic event ( venous congestion - intra - articular bleed - demand for increased pump pressure and flow - further extravasation ) leading to a self - perpetuating venous ooze that hampers vision . as we believe , the fluid ingression into the chest wall , neck , and supra - scapular regions , which are outside the area covered by the inter - scalene block , may provoke intense pain signals that accentuate blood pressures and promote further haemorrhage . we observed higher demands for increased pressure , irrigant flow and consequently an increased saline consumption in the sevoflurane group patients despite similar durations of surgery within the two groups . it is however possible that joint - specific factors like fibrosis , inflammation and other operating room circumstances could affect the duration of surgery and quantity of saline used , and a particular anaesthetic technique alone can not be the sole factor taken into account to explain our observations . the only incident of red out was a possible venous bleed , managed by gentle external compression and probably unrelated to haemodynamics of the individual . the well described variability that exists with the use of the calibrated vaporizer can be associated , though to a less degree with target controlled drug delivery too . however , in patients with complete inter - scalene regional blockade , factors like operating time , surgical stress and variability in individual pain responses do not mandate vigilant titration of intravenous propofol by the anaesthesiologist . besides , the proven advantages of the tci pump with respect to rapidity of induction and recovery , stable maintenance , fewer post - operative adverse effects and earlier discharge from the post - anaesthesia care unit , lend additional support to our results favoring the technique . tci propofol appears to be superior to sevoflurane anaesthesia in inter - scalene blocked patients undergoing shoulder arthroscopy both as regards the efficacy as well as the convenience of maintaining low bp during surgery . directly , it seems to be associated with less intra - articular bleeding and improved visualisation during the procedure . indirectly , it reduces interventions by the anaesthesiologist and minimizes requests by the surgeon to manipulate the pump settings . 
the acclaimed advantages of tci pump over the vaporizer may further support the use of tci propofol anaesthesia for shoulder arthroscopic procedures .
background : one of the challenges of anaesthesia for shoulder arthroscopic procedures is the need for controlled hypotension to lessen intra - articular haemorrhage and thereby provide adequate visualisation to the surgeon . achievement of optimal conditions necessitates several interventions and manipulations by the anaesthesiologist and the surgeon , most of which directly or indirectly involve maintaining intra - operative blood pressure ( bp ) control.aim:this study aimed to compare the efficacy and convenience of target controlled infusion ( tci ) of propofol and inhalational agent sevoflurane in patients undergoing shoulder arthroscopic surgery after preliminary inter - scalene blockade.methods:of thirty four patients studied , seventeen received tci propofol ( target plasma concentration of 3 g / ml ) and an equal number , sevoflurane ( 1.2 - 1.5 minimum alveolar concentration ) . n2o was used in both groups . systolic , diastolic , mean blood pressures and heart rate were recorded regularly throughout the procedure . all interventions to control bp by the anaesthesiologist and pump manipulation requested by the surgeon were recorded . the volume of saline irrigant used and the haemoglobin ( hb ) content of the return fluid were measured.results:tci propofol could achieve lower systolic , mean bp levels and the number of interventions required was also lower as compared to the sevoflurane group . the number of patients with measurable hb was lower in the tci propofol group and this translated into better visualisation of the joint space . a higher volume of saline irrigant was required in the sevoflurane group . no immediate peri - operative anaesthetic complications were noted in either category.conclusion:tci propofol appears to be superior to and more convenient than sevoflurane anaesthesia in inter - scalene blocked patients undergoing shoulder arthroscopy .
Steve Bartman during the infamous foul ball interference play in the 2003 NLCS. A man serving as a spokesman for Steve Bartman says the infamous Chicago Cubs fan did not express relief after the team beat the Cleveland Indians in Game 7 to win the World Series when he spoke to Bartman on the phone. “He was just overjoyed that the Cubs won, as all the Cubs fans are,’’ Frank Murtha, a lawyer who has served as Bartman’s spokesman, told USA TODAY Sports on Thursday. But Bartman, blamed for contributing to the Cubs’ 108-year drought since their last World Series title after he interfered with a foul ball during the 2003 National League Championship Series — when the Cubs failed again to reach the World Series — isn’t about to make any public appearances. “We don’t intend to crash the parade,’’ Murtha said. “The one thing that Steve and I did talk about was if the Cubs were to win, he did not want to be a distraction to the accomplishments of the players and the organization.’’ Bartman has not granted any interviews since the 2003 incident but has continued to live and work in the Chicago area, Murtha said. ||||| During a Major League Baseball (MLB) postseason game played between the Chicago Cubs and the Florida Marlins on October 14, 2003, at Wrigley Field in Chicago, Illinois, spectator Steve Bartman disrupted the game by intercepting a potential catch. The incident occurred in the eighth inning of Game 6 of the National League Championship Series (NLCS), with Chicago ahead 3–0 and holding a three games to two lead in the best-of-seven series. Moisés Alou attempted to catch a foul ball off the bat of Marlins second baseman Luis Castillo. Bartman reached for the ball, deflected it, and disrupted the potential catch. If Alou had caught the ball, it would have been the second out in the inning and the Cubs would have been just four outs away from winning their first National League pennant since 1945. The Cubs ended up surrendering eight runs in the inning, and lost the game 8–3. When they were eliminated in the seventh game the next day, the incident was seen as the "first domino" in the turning point of the series.[1] In 2011, ESPN produced a documentary film exploring the subject as part of its 30 for 30 series. Titled Catching Hell, the film drew similarities between Bill Buckner's fielding error late in Game 6 of the 1986 World Series and the Bartman incident. It explored the incident from different perspectives.[2] In an effort to reconcile with Bartman and to put the incident behind them, the Chicago Cubs awarded Bartman a 2016 World Series ring.[3] Foul ball incident Fan Steve Bartman and Moises Alou both attempt to catch the foul ball hit by Luis Castillo during Game 6 of the 2003 NLCS at Wrigley Field in Chicago. At the time of the incident, Mark Prior was pitching a three-hit shutout for the Cubs in the eighth inning. The Cubs led the game 3–0 and held a series lead of three games to two. They were five outs away from reaching the World Series for the first time since 1945; the Cubs had not been baseball's champions since 1908.
Luis Castillo was at bat for the Marlins with one out, and a full count, with teammate Juan Pierre on second base.[4] Bartman was sitting in the front row along the left field corner wall behind the on-field bullpen when a pop foul off the bat of Castillo drifted toward his seat. Cubs left fielder Moisés Alou approached the wall, jumped, and reached for the ball. Bartman attempted to catch the ball, failed to secure it, and in the process deflected it away from Alou's glove. Alou slammed his glove down in frustration and shouted at several fans. The Cubs, in particular Alou and Prior, argued for fan interference, but umpire Mike Everitt ruled there was no interference because the ball had broken the plane of the wall separating the field of play from the stands and entered the stands.[5] Cubs manager Dusty Baker did not see the play as it happened, because the curvature of the Cubs dugout blocked his view.[6] Everitt's ruling has been heavily scrutinized over the years. For example, the authors of Mad Ball: The Bartman Play argue that photographs show Bartman's arms extending into the playing field and that Castillo should have been called out due to fan interference.[7] On Fox, Thom Brennaman was commentating, saying "Again in the air, down the left field line. Alou... reaching into the stands... and couldn't get it and he's livid with a fan!"[8] Aftermath For the Cubs and Marlins Following the incident, the Marlins scored eight runs.[9] The next night, back at Wrigley Field, Florida overcame Kerry Wood and a 5–3 deficit to win 9–6, and win the pennant.[10] The Marlins went on to win the 2003 World Series, beating the New York Yankees, four games to two. The Cubs did not win another playoff game after the incident until 2015 when they defeated the Pittsburgh Pirates in the wild card game, but they were swept in the NLCS by the New York Mets. In 2016, the team advanced to the World Series for the first time since 1945, ending a 71-year-old drought. By some accounts, this represented the end of this particular "curse", since the Cubs had won the NLCS pennant, which they were unable to do in 2003. The Cubs then overcame a 3 games to 1 deficit to defeat the Cleveland Indians in the 2016 World Series, ending their 108-year drought, which was reported by media (and voiced by fans) to be the end of this curse, and others including the Curse of the Billy Goat.[11] For Bartman Bartman remained seated as Fox repeatedly broadcast live shots of him between multiple replays of the foul ball. The somber image of Bartman wearing a Cubs baseball cap, glasses, headset, and green turtleneck shirt became memorable. Because there were no replay boards or JumboTrons in Wrigley Field at the time, no one in the crowd knew of Bartman until many of the attendees' friends and family members, who were watching the game on TV, started calling them on cell phones, informing them of Bartman and his appearance. Many Cubs fans began pointing toward Bartman, repeatedly chanting "asshole". Bartman had to be led away from the park under security escort for his own safety as many Cubs fans shouted insults toward him and others threw debris, with one fan even dumping a cup of beer on him. Security escorted Bartman and two people who accompanied him to the game toward the exit tunnel from the field. News footage of the game showed him surrounded by security as passersby pelted him with drinks and other debris.
Bartman's name, as well as personal information about him, appeared on Major League Baseball's online message boards minutes after the game ended.[12] As many as six police cars gathered outside his home to protect Bartman and his family following the incident.[13] Afterwards, then-Illinois Governor Rod Blagojevich suggested that Bartman join a witness protection program, while then-Florida Governor Jeb Bush offered Bartman asylum.[1] After the incident, Bartman released a statement, saying he was "truly sorry." He added, "I had my eyes glued on the approaching ball the entire time and was so caught up in the moment that I did not even see Moisés Alou, much less that he may have had a play."[13] Trying to maintain a low profile, Bartman declined interviews, endorsement deals, and requests for public appearances, and his family changed their phone number to avoid harassing phone calls.[14] He requested that any gifts sent to him by Florida Marlins fans be donated to the Juvenile Diabetes Research Foundation.[15] In July 2008, Bartman was offered $25,000 to autograph a picture of himself at the National Sports Collectors Convention in Rosemont, Illinois, but he refused the offer.[16] He declined to appear as a VIP at Wrigley Field. In 2011, eight years after the incident, he declined to appear in an ESPN documentary, and he declined a six-figure offer to appear in a Super Bowl commercial.[17] Many fans associated the Bartman incident with the Curse of the Billy Goat, allegedly laid on the Cubs during the 1945 World Series after Billy Sianis and his pet goat were ejected from Wrigley Field. The Cubs lost that series to the Detroit Tigers in seven games and did not return to the World Series until 2016.[1] Bartman was also compared to the black cat that ran across Shea Stadium near an on-deck Ron Santo during a September 9, 1969, regular season game between the Cubs and the New York Mets. The Cubs were in first place at the time, but after the cat appeared, the Cubs lost the game and eventually fell eight games behind the Mets in the standings, missing that season's playoffs.[18] On Fox, Thom Brennaman said of the incident, as well as the Marlins' subsequent rally: "It's safe to say that every Cubs fan has to be wondering right now, is the Curse of the Billy Goat alive and well?"[8] As of 2013, Bartman still lived in Chicago, worked for a financial services consulting company, declined interviews and, although still a Cubs fan, had not returned to Wrigley Field.[19] On October 2, 2015, Cubs fan Keque Escobedo created a GoFundMe campaign seeking $5,000 in donations to send Bartman to the 2015 National League Wild Card Game against the Pittsburgh Pirates. When the campaign was "more than halfway" finished, Bartman declined the offer. The funds went to the Alzheimer's Association instead.[20] Nonetheless, a fan did dress as Bartman from his appearance at the 2003 NLCS during the game at PNC Park, fooling some to believe that Bartman did appear in Pittsburgh.[21] During the 2016 Chicago Cubs season, Bartman received renewed media attention as the Cubs progressed through the playoffs. In mid-October 2016, at the time the Chicago Cubs were playing in the 2016 NLCS, the Chicago Sun-Times reached out to Bartman's attorney and spokesman, Frank Murtha. As is usual, Bartman through Murtha declined an interview.[22] On Saturday, October 22, 2016, the Cubs were again at home with a 3–2 lead in Game 6 of an NLCS, similar to the 2003 NLCS game. 
The Cubs ended up winning that game and moving on to the 2016 World Series. After winning the pennant, many Cubs fans petitioned for the team to allow Bartman to throw out a first pitch during the 2016 World Series. However, Murtha told CNN that Bartman did not want to be in the spotlight, and that there is "probably a slim, none, and no chance" that Bartman would agree to throw out a first pitch.[23] The World Series went on without Bartman making any public appearances. On Wednesday night, November 2, 2016, the Chicago Cubs won the World Series for the first time since 1908, ending the so-called Curse of the Billy Goat. After their World Series win, Murtha said, "[Bartman] was just overjoyed that the Cubs won, as all the Cubs fans are." Further, when calls were made for Bartman to be a part of the victory parade, or other similar ideas, "The one thing that Steve and I did talk about was if the Cubs were to win, he did not want to be a distraction to the accomplishments of the players and the organization."[24] Destruction of the Bartman ball The loose ball was snatched up by a Chicago lawyer and sold at an auction in December 2003. Grant DePorter purchased it for $113,824.16 on behalf of Harry Caray's Restaurant Group. On February 26, 2004, it was publicly detonated by special effects expert Michael Lantieri.[25][26] In 2005, the remains of the ball were used by the restaurant in a pasta sauce. While no part of the ball itself was in the sauce, the ball was boiled and the steam captured, distilled, and added to the final concoction.[27] Today, the remains of the ball are on display at the Chicago Sports Museum, while further remains are amid various artifacts at the restaurant itself.[28] The Bartman seat In the years following the incident, the seat Bartman sat in – Aisle 4, Row 8, Seat 113 – became a tourist attraction at Wrigley Field, especially during Game 6 of the 2016 NLCS.[29][30] Moisés Alou In April 2008, Moisés Alou was quoted by the Associated Press as saying, "You know what the funny thing is? I wouldn't have caught it, anyway."[31] However, Alou later disputed that story: "I don't remember that," he said to a writer from The Palm Beach Post. "If I said that, I was probably joking to make [Bartman] feel better. But I don't remember saying that."[32] Alou added, "It's time to forgive the guy and move on."[32] In the 2011 documentary Catching Hell, Alou states, "I'm convinced 100% that I had that ball in my glove."[33] Defense of Bartman After the incident, the Cubs issued the following press release:[34] The Chicago Cubs would like to thank our fans for their tremendous outpouring of support this year. We are very grateful. We would also like to remind everyone that games are decided by what happens on the playing field—not in the stands. It is inaccurate and unfair to suggest that an individual fan is responsible for the events that transpired in Game 6. He did what every fan who comes to the ballpark tries to do—catch a foul ball in the stands. That's one of the things that makes baseball the special sport that it is. This was an exciting season and we're looking forward to working towards an extended run of October baseball at Wrigley Field. Several Cubs players publicly absolved Bartman of blame. Mark Prior said, "We had chances to get out of that situation. I hung an 0–2 curveball to [Ivan] Rodriguez that he hit for a single.
Alex Gonzalez, who's a sure thing almost at shortstop, the ball came up on him... and things just snowballed. Everybody in the clubhouse and management knows that play is not the reason we lost the game."[35] Former Cubs pitcher Rick Sutcliffe said that the crowd's reactions to Bartman "crushed [him]". "Right after I saw what happened with the fan, I woke up the next morning and told my wife that if the Cubs asked me to throw out the first pitch in the World Series, I was going to take that fan out to the mound with me," he said.[36] Baseball commissioner Bud Selig also came to Bartman's defense, telling an interviewer, "[W]hile I understand that people felt so strongly and that their hearts were just breaking, to blame this young man, who is the most devoted Cub fan... it's just unfair. When I read his statement, it broke my heart. ... If you want to blame the Curse of the Bambino and the goat in Chicago or a series of other things, that's fine. But blaming Steve Bartman is just not right."[37] Several of Bartman's friends and family members spoke out in the days following the incident. His father told the Chicago Sun-Times, "He's a huge Cubs fan. I'm sure I taught him well. I taught him to catch foul balls when they come near him." A neighbor added, "He's a good kid, a wonderful son, never in any trouble. I don't think he should be blamed at all. People reach for balls. This just happened to be a little more critical. If Florida didn't score all the runs, you wouldn't be standing here."[13] Sun-Times sports columnist Jay Mariotti wrote, "A fan in that situation should try his best to get out of the way, even if he isn't of the mind to see Alou approaching, as Bartman claims. Still, he's also a human being who was reacting in a tense, unusual moment. And the resulting verbal abuse and trash-hurling, followed by the Neanderthal threats and creepy reaction on the Internet, hasn't reflected well on Chicago's sports culture. As it is, everyone thinks the prototypical local fans are those mopes from the Superfans skits on Saturday Night Live."[35] In a 2011 interview on ESPN's Pardon the Interruption, Cubs President Theo Epstein expressed a desire for the team to reach out to Bartman. "From afar, it seems like it would be an important step. Maybe a cathartic moment that would allow people to move forward together. I'm all about having an open mind, an open heart and forgiveness. Those are good characteristics for an organization to have as well. He's a Cubs fan. That's the most important thing," said Epstein.[38] In 2012, former Cubs player Doug Glanville said, "[I]t was easy to look at Steve Bartman [...] But that was not the whole story by a long shot." He argued that the Cubs lost the momentum of the series when Marlins ace Josh Beckett shut down the Cubs in Game 5. Glanville drew parallels between that game and Barry Zito's game-winning performance in Game 5 of the 2012 NLCS.[39] 2016 Cubs victory in World Series [ edit ] Through spokesman Frank Murtha, Bartman congratulated the Cubs in their World Series championship victory over the Cleveland Indians. 
Murtha did not state if Bartman watched the series, but did say that Bartman did not attend the Cubs victory parade held in Chicago.[40] MLB.com and ESPN have both reported that Cubs owner Tom Ricketts has shown interest in reaching out to Bartman for some closure, "at the right time".[41][42] Later on, Cubs president Theo Epstein stated that Bartman is "welcome to come back" but at his own discretion and that he should be left alone.[43] Bartman received a championship ring from Cubs owner Tom Ricketts and the Ricketts family as a special gift on July 31, 2017.[44] The Cubs said in a statement, "We hope this provides closure on an unfortunate chapter of the story that has perpetuated throughout our quest to win a long-awaited World Series. While no gesture can fully lift the public burden he has endured for more than a decade, we felt it was important Steve knows he has been and continues to be fully embraced by this organization. After all he has sacrificed, we are proud to recognize Steve Bartman with this gift today."[44] Bartman released a statement, saying "Although I do not consider myself worthy of such an honor, I am deeply moved and sincerely grateful to receive an official Chicago Cubs 2016 World Series Championship ring. I am fully aware of the historical significance and appreciate the symbolism the ring represents on multiple levels. My family and I will cherish it for generations. Most meaningful is the genuine outreach from the Ricketts family, on behalf of the Cubs organization and fans, signifying to me that I am welcomed back into the Cubs family and have their support going forward. I am relieved and hopeful that the saga of the 2003 foul ball incident surrounding my family and me is finally over."[44] See also [ edit ] ||||| Chicago Cubs fans have waited 108 years for Friday's World Series championship parade that will go through the city, so it's fair to think that we're talking millions of people will be along the parade route as Joe Maddon, Anthony Rizzo, Dexter Fowler, Kris Bryant and the rest of the squad show off the team's new trophy. It will be the greatest parade in the city's history since Ferris Bueller hijacked a float to sing "Twist and Shout," but one person that probably won't be there to see it (unless he goes incognito and just blends in with the crowd, which is totally possible) is Steve Bartman. Bartman, the person who fans have used as an excuse for the team's 2003 playoff collapse after he reached out for a foul ball that Cubs outfielder Moisés Alou was attempting to catch, issued a statement through Frank Murtha, a lawyer who has served as Bartman's spokesman. "We don’t intend to crash the parade," Murtha said. "The one thing that Steve and I did talk about was, if the Cubs were to win, he did not want to be a distraction to the accomplishments of the players and the organization." The Steve Bartman incident was another in a series of unfortunate events that fans of the team have used to try and rationalize the fact that the team couldn't win a title for over a century. Black cats and goats had been blamed along the way, but Bartman, a born-and-raised Cubs fan from the Chicago suburbs, unfortunately became the living embodiment of Cubs fans' frustrations to the point where he had to go into hiding. The general feeling among most fans is that all is forgiven, while others (this writer included) believe Bartman was unfairly vilified. Bartman, after becoming infamous in the early 2000s, went out of his way to stay out of sight. 
He didn't want to do interviews, didn't sign some massive book deal to tell his side of the story and didn't come out of the shadows as the Cubs fortunes looked to be changing. This latest chapter, with Bartman saying he doesn't want to take any of the spotlight away from the team he loves, should show Cubs fans once and for all that they've been very wrong about Steve Bartman. Chicago Cubs celebrate after winning in the 2016 World Series. ||||| Chicago fan Steve Bartman interferes with Moises Alou, extending Luis Castillo's at bat.
– Chicago has been waiting for him to come out of hiding for more than a decade—but it doesn't look like the man who was blamed for contributing to the Cubs' 2003 National League Championship loss will be out and about anytime soon, even though his team finally won the World Series. Steve Bartman raised his fellow fans' ire during the 2003 playoffs, when he reached out for and deflected a foul ball that Chicago outfielder Moises Alou was going for, per Rolling Stone. The Cubs ended up losing that game and that series, and Bartman became a Chicago pariah, even having to go into hiding. But even though Wednesday's World Series win should have meant an end to his own curse—Bartman "was just overjoyed that the Cubs won, as all the Cubs fans are," a lawyer who acts as his spokesman told USA Today Sports on Thursday—the once-disgraced fan has no intention of popping champagne bottles and throwing confetti in public. "We don't intend to crash the parade," the spokesman says. "The one thing that Steve and I did talk about was if the Cubs were to win, he did not want to be a distraction to the accomplishments of the players and the organization." (A man watched Game 7 in a cemetery by his father's grave.)
a 35-year - old female patient presented with recent onset of pain , redness , and lid swelling in the right eye . four years earlier she had undergone bone marrow transplantation ( bmt ) for aml , with successful remission . she was under treatment with oral steroids for graft - versus - host reaction ( gvh ) . her best - corrected vision in the right eye was counting fingers and in the left eye was 20/20 . the right eye showed lid edema , conjunctival and ciliary congestion , chemosis , a paracentral corneal epithelial defect , 3 + cells in the anterior chamber ( ac ) with fibrin strands and hypopyon and a posterior subcapsular cataract . applanation intraocular pressure ( iop ) was 34 mmhg in the right eye and 30 mmhg in the left eye . right fundus could not be seen clearly due to media opacities produced by a combination of corneal epithelial defect , ac reaction , and cataract . an initial diagnosis of a corneal epithelial defect in the right eye and bilateral glaucoma was made . she was started on topical antibiotics ( gatifloxacin ) , ocular lubricants ( carboxy - methylcellulose ) , and mydriatic - cycloplegics ( 1% atropine ) for the right eye and topical glaucoma medications ( 0.5% timolol ) bilaterally . on follow - up , the epithelial defect had healed and the lid edema , redness , and iop had reduced . she was prescribed topical steroids ( 1% prednisolone acetate ) and was reviewed at close intervals . nevertheless , the hypopyon was not responsive even after a week of treatment [ fig . 1 ] . figure 1 shows that lid edema and congestion have reduced after treating the glaucoma and the ocular surface disease due to graft - versus - host disease ( gvh ) . due to failure of conventional treatment , masquerade syndrome was suspected and investigated . b - scan ultrasonography showed a few intravitreal echoes , choroidal thickening , and exudative retinal detachment [ fig . 2 ] . figure 2 shows choroidal thickening and elevated retina inferiorly , suggestive of choroidal infiltration and exudative retinal detachment . aspiration of the ac infiltrate was performed under aseptic conditions . cytology of the anterior chamber ( ac ) aspirate showed malignant cells , confirming the rare location of ocular relapse in acute myeloid leukemia ( aml ) . based upon the cytology and ultrasonography , we arrived at a diagnosis of relapse of aml involving the ac and choroid . it is extremely rare for relapse of aml to present with anterior segment manifestations . in a prospective 2-year study of 53 patients undergoing treatment of aml , no patient presented with hypopyon uveitis . our patient presented with multiple possible causes for visual loss and pain including corneal epithelial defect , ac reaction , posterior subcapsular cataract and glaucoma . the patient was taking oral steroids for gvh , which confounded her initial clinical picture . in leukemic patients who have undergone bmt , the occurrence of gvh can further mask a classic presentation of masquerade syndrome by causing a congested , painful eye . increase in iop can be due to the presence of tumor cells in the ac . however , in this patient , iop was high in both eyes at initial presentation . the secondary glaucoma was possibly due to a combination of tumor infiltration affecting the right eye and systemic steroid therapy affecting both eyes . posterior segment infiltration could not be visualized due to a combination of corneal , lenticular , and ac pathology . a high index of suspicion is necessary in patients with leukemia who are in remission .
in such patients , mumps uveitis and chemical reiter 's syndrome may further complicate the diagnosis . in patients with a malignant cause of masquerade syndrome , cytologic analysis of intraocular fluids has been reported to be an essential diagnostic procedure with positive yield in 64% of cases . although cd56 expression of tumor cells is believed to be associated with cns involvement , immunophenotyping could not be done in our case . since there are few references of immunophenotyping of hypopyon cells in aml , this association with cd56 is yet to be confirmed . it is postulated that the blood - ocular barrier may be responsible for creating a pharmacological sanctuary , resulting in suppression , but not eradication of malignant cells by chemotherapeutic agents . patients with aml treated with radiotherapy and chemotherapy developed leukemic hypopyon 3 months to 9 years after initial diagnosis . all reviewed patients in this report died within 1 year of developing leukemic hypopyon . although the treatment of aml has changed since , their conclusion that leukemic hypopyon is associated with systemic leukemic relapse even if systemic examination reveals no leukemia , except for that in the ac , remains valid today . patients with hypopyon uveitis may present with or without the classical clinical signs of masquerade syndrome depending upon coexisting ocular pathology . judicious use of cytology was useful in differentiating it from true inflammation and confirming this rare complication of aml . timely diagnosis is essential , particularly if the suspicion of ocular or systemic malignancy needs to be validated .
anterior segment infiltration in acute myeloid leukemia ( aml ) presenting as hypopyon uveitis is very rare . we report this case as an uncommon presentation in a patient on remission after bone marrow transplant for aml . in addition to the hypopyon , the patient presented with red eye caused by ocular surface disease due to concurrent graft - versus - host disease and glaucoma . the classical manifestations of masquerade syndrome due to aml were altered by concurrent pathologies . media opacities further confounded the differential diagnosis . we highlight the investigations used to arrive at a definitive diagnosis . in uveitis , there is a need to maintain a high index of clinical suspicion , as early diagnosis in ocular malignancy can save sight and life .
BAKERSFIELD, Calif. (KBAK/KBFX) — A banquet room at The Mark Restaurant was filled with guests having dinner when patrons and workers alike noticed someone in distress. "Next thing you know, my server J.R. hears, 'She's choking! She's choking!'" described Bo Fernandez, restaurant general manager and executive chef. Former Kern County Supervisor Pauline Larwood, along with her husband Tom, had joined others Monday at the downtown spot for dinner after having attended a symposium on valley fever. Larwood began choking on a piece of steak. "Somebody tried to give her the Heimlich maneuver, and they weren't big enough," said Fernandez. Restaurant supervisor J.R. Gonzalez also tried to help, but to no avail. Fortunately for Larwood, there were numerous doctors in the house because of the Valley Fever Symposium. Among the diners was Dr. Royce Johnson, chief of infectious diseases at Kern Medical Center. Johnson sprang into action to perform an emergency tracheotomy with makeshift tools he had at his disposal. "The doctor said, 'Let's put her on the ground,'" Fernandez described, "and he made an incision with a knife." Fernandez motioned how the doctor inserted the knife to Larwood's throat area. Johnson then inserted the hollow cylinder of a pen to act as a breathing tube, according to Fernandez. "It took a little bit, but finally their gasp comes," he said. Also helping out during the procedure were Dr. Tom Frieden, director of the Centers for Disease Control and Prevention, and Dr. Paul Krogstad, who were participants at the Valley Fever Symposium, according to The Bakersfield Californian. Larwood is recovering at a local hospital. ||||| The dramatic incident took place at a restaurant where top officials had gathered after leaving the landmark symposium on valley fever held in Bakersfield on Monday and Tuesday. Some of the nation’s most accomplished physicians were in the room. A local doctor is being hailed as a hero after he used a folding pocket knife and pen to perform an emergency tracheotomy on a former Kern County supervisor at a downtown Bakersfield restaurant Monday night. Pauline Larwood, who was Kern County’s first female supervisor and currently serves as a community college trustee, was eating dinner at The Mark restaurant with some of the doctors, experts, politicians and others in town for the symposium when she began choking. After the Heimlich maneuver failed to open Larwood’s airway, witnesses said, Dr. Royce Johnson, professor of medicine at UCLA and Kern Medical Center’s chief of infectious diseases, used a friend’s knife to make an incision in Larwood’s throat to allow the insertion of the hollow cylinder of a pen as a breathing tube. The procedure succeeded and Larwood was rushed to Mercy Hospital Downtown. By Tuesday, her son said, she was doing fine. Johnson had appeared onstage Monday at the valley fever conference with Dr. Thomas Frieden, director of the Centers for Disease Control and Prevention, and Dr. Francis Collins, director of the National Institutes of Health. The CDC chief monitored Larwood’s pulse during the incident. Collins was also present at the dinner. Following a forum and survivors reception, about 55 people dined together in the downtown restaurant’s banquet room, including farming and business moguls Lynda and Stewart Resnick, and Rep. Kevin McCarthy, R-Bakersfield, said The Mark’s General Manager Ro Fernandez. At least two members of McCarthy’s security detail were also present. 
The entrees had just been served — steak, chicken or salmon — but Fernandez said he wasn’t sure which dish Larwood had chosen. Assemblywoman Shannon Grove, R-Bakersfield, said she was seated at a table with the Larwoods when the incident occurred. She said her husband, Rick Grove, and state Sen. Jean Fuller, R-Bakersfield, were seated on the other side of the table. Grove said her husband suddenly jumped up, ran to Pauline Larwood and tried to perform the Heimlich maneuver. He called for a doctor and Johnson attempted the technique as well. “She had already started turning a real like blue, her fingers and her lips,” Grove said. As Grove called 911, she watched in amazement as Larwood was laid back in a chair and Johnson began performing the emergency procedure. “He didn’t scream; he just said, ‘I need a knife,’” Grove said. Grove called Johnson a hero. “It was really unreal how calm (the situation) was,” she said. The folding knife Johnson used came from Dr. Thomas Farrell Jr., a retired physician and friend of Johnson’s, who said he always carries the knife. Farrell said Larwood’s skin turned blue and she lost consciousness. Her teeth were clenched so tightly he could not work to clear the blockage. As several physicians gathered around Larwood, Dr. Paul Krogstad, a professor of pediatrics and pharmacology at the David Geffen School of Medicine at UCLA, said someone called for a pen, and when one was handed to him, he broke it in half and placed it in the incision Johnson had made. “I was sort of looking at her breathing, Royce is blowing into this tracheotomy that he performed and the CDC director (Frieden) is checking her pulse,” Krogstad said. “She came around.” "She was fortunate that somebody as bold as Dr. Johnson jumped in,” he added. “By the time I got there, he already had a plan going and Dr. Frieden and I just assisted." Nevertheless, the doctors worked as a team. Frieden called out that Larwood did not need chest compressions and that she had a good pulse, Krogstad recalled. “I've never seen that done in public before but it made good sense,” Krogstad said of the tracheotomy. It was ”a pretty drastic measure,” he added, but “everyone knew what they were doing.” Before the ambulance even arrived, Larwood was sitting up, talking and fully conscious, Krogstad said. “She pinked up, her skin looked good pretty quickly,” said David Larwood, Tom and Pauline Larwood's son. Throughout the incident, Tom Larwood remained absolutely calm. Johnson declined to comment. David Larwood said his mother was taken to Mercy Hospital, where she stayed overnight. He said his father was with her Tuesday and she would probably be able to go home later that day. Larwood, 71, served on the Kern County Board of Supervisors from 1983 to 1994. She currently serves on the Kern Community College District’s board of trustees, another elected position. Meanwhile, on Tuesday, Fernandez complimented McCarthy’s security detail for keeping people calm and allowing rescuers to do what they needed to do. The crisis and the arrival of an ambulance brought an early end to the evening, Fernandez said. But everyone was relieved that Larwood appeared to be OK. “It must have been quite an evening for all of them,” he said.
– It's a regular occurrence in TV and movies, but a less common scene in real life: A doctor in Bakersfield, Calif., saved a choking woman's life at a restaurant by using a pocket knife and pen to perform an emergency tracheotomy, the Bakersfield Californian reports. If there is such a thing as ideal circumstances for choking, they're probably these: The incident happened during dinner on Monday after a medical symposium, so many top physicians—including CDC chief Dr. Thomas Frieden—were in the room when Pauline Larwood, a former county supervisor, started choking at a table (KBAK-KBFX reports that a piece of steak was the culprit). The Heimlich maneuver didn't work, so Dr. Royce Johnson—a professor of medicine at UCLA and the chief of infectious diseases at a local hospital—sprang into action. He borrowed a friend's pocket knife, made an incision in the 71-year-old's throat, and inserted a pen casing to use as a breathing tube. Frieden and Dr. Paul Krogstad, another UCLA medical professor, monitored her pulse and breathing. It worked—before the ambulance had even arrived, Larwood was reportedly sitting up and talking. The tracheotomy was "a pretty drastic measure," says Krogstad, but "everyone knew what they were doing."
endoplasmic reticulum ( er ) stress induced by protein misfolding is an important mechanism in cellular stress in a variety of diseases . when protein folding in the er is compromised , the unfolded proteins accumulate in the er , which leads to er stress . er stress triggers the unfolded protein response ( upr ) , a transcriptional induction pathway which is aimed at restoring normal er functioning ( schroder and kaufman 2005 ) . the upr is mediated by three er stress receptors : protein kinase rna - like er kinase ( perk ) , inositol - requiring protein-1 ( ire1 ) and activating transcription factor-6 ( atf6 ) . in the absence of er stress , all three er stress receptors are maintained in an inactive state through their association with the er - chaperone protein grp78 ( bip ) . er stress results in the dissociation of bip from the three receptors , which subsequently leads to their activation ( ron and walter 2007 ) . dissociation of bip from perk leads to autophosphorylation and thereby activation of perk and subsequent phosphorylation of translation initiation factor eif2α , resulting in an inhibition of mrna translation , and eventually in the translation of the transcription factor atf4 . dissociation of bip from atf6 leads to translocation of atf6 to the golgi complex where it is cleaved by proteases into an active transcription factor . active atf6 moves to the nucleus and induces expression of genes with an er stress response element ( erse ) in their promoter such as the er - chaperone protein bip and the transcription factors c / ebp homologous protein ( chop ) and x - box binding protein-1 ( xbp1 ) . dissociation of bip from ire1 leads to the activation of ire1 which cleaves a 26-nucleotide intron from the xbp1 mrna . the spliced xbp1 mrna encodes a stable , active transcription factor that binds to the upre or erse sequence of many upr target genes , leading to transcription of er - chaperone proteins ( ron and walter 2007 ; yoshida et al . ) . er stress can be induced experimentally with chemicals such as thapsigargin and tunicamycin : thapsigargin blocks the er calcium atpase pump , leading to depletion of er calcium stores , and tunicamycin blocks n - linked glycosylation of proteins . both chemicals lead to high levels of stressors which are expected to rapidly activate all three components of the upr ( rutkowski and kaufman 2004 ) . an increasing number of studies have reported the involvement of er stress in a variety of diseases , including cystic fibrosis , alpha-1 antitrypsin deficiency , parkinson 's and alzheimer 's disease . the splicing of xbp1 mrna is considered to be an important marker for er stress ; however , the quantification is difficult because the splicing is mainly visualized by gel electrophoresis after conventional rt - pcr . we have now developed a simple and quantitative method to measure spliced human xbp1 by using quantitative real - time rt - pcr , and we show that the results obtained with this method correlate with data for bip and chop . primary bronchial epithelial cells ( pbec ) were isolated from resected lung tissue obtained from patients undergoing surgery for lung cancer as described previously ( van wetering et al . ) .
briefly , the cells were cultured in a 1:1 mixture of dmem ( invitrogen , carlsbad , ca , usa ) and begm ( clonetics , san diego , ca , usa ) , supplemented with 0.4% w / v bpe , 0.5 ng / ml egf , 5 µg / ml insulin , 0.1 ng / ml retinoic acid , 10 µg / ml transferrin , 1 µm hydrocortisone , 6.5 ng / ml t3 , 0.5 µg / ml epinephrine ( all from clonetics ) , 1.5 µg / ml bsa ( sigma , st louis , mo , usa ) , 1 mm hepes ( invitrogen ) , 100 u / ml penicillin and 100 µg / ml streptomycin ( cambrex , east rutherford , nj , usa ) . immortalized human renal ptec ( hk-2 , kindly provided by m. ryan , university college dublin , dublin , ireland ) were grown in serum - free dmem / ham - f12 ( bio - whittaker , walkersville , md ) supplemented with 100 u / ml penicillin , 100 µg / ml streptomycin ( invitrogen , breda , the netherlands ) , insulin ( 5 µg / ml ) , transferrin ( 5 µg / ml ) , selenium ( 5 ng / ml ) , triiodothyronine ( 40 ng / ml ) , epidermal growth factor ( 10 ng / ml ) , and hydrocortisone ( 36 ng / ml , all purchased from sigma , zwijndrecht , the netherlands ) . cells from the a549 human lung carcinoma cell line were obtained from the american type culture collection ( atcc , manassas , va ) . the cells were routinely cultured in rpmi 1640 medium ( gibco , grand island , ny ) , supplemented with 2 mm l - glutamine , 100 u / ml penicillin , 100 µg / ml streptomycin ( cambrex , east rutherford , nj , usa ) and 10% ( v / v ) heat - inactivated fcs ( gibco ) at 37°c in a 5% co2-humidified atmosphere . er stress was induced in epithelial cells by exposure to thapsigargin or tunicamycin . after reaching near - confluence , pbec were exposed to thapsigargin ( 50 nm , sigma ) for various time periods . for the dose response experiment , pbec from 3 different donors were stimulated with various concentrations of thapsigargin or tunicamycin ( sigma ) for 6 h. dimethyl sulfoxide ( dmso ) ( merck , darmstadt , germany ) served as a solvent control for both thapsigargin and tunicamycin . the dose response experiments were repeated on hk-2 cells and a549 cells with 2 h stimulation instead of 6 h. this shorter duration of exposure in hk-2 and a549 cells was based on pilot experiments using these cell lines . after stimulation the cells were washed twice with pbs and total mrna was isolated using the rneasy mini kit ( qiagen , valencia ca , usa ) . dnase i amplification grade ( invitrogen ) was used to remove genomic dna . total rna concentration and purity were determined . next , cdna synthesis was performed with m - mlv reverse transcriptase ( promega , madison wi , usa ) . to amplify the spliced and unspliced xbp1 mrna , xbp1 primers were used as described previously ( yoshida et al . ) . gapdh ( forward 5'-ggatgatgttctggagagcc-3' , reverse 5'-catcaccatcttccaggagc-3' ) was used as a loading control . the size difference between the spliced and the unspliced xbp1 is 26 nucleotides . primers were designed to span the 26 base pair intron that is removed by ire1 to obtain the spliced xbp1 mrna ( xbp1spl ) ( forward 5'-tgctgagtccgcagcaggtg-3' and reverse 5'-gctggcaggctctggggaag-3' ) . also specific primers for chop ( forward 5'-gcacctcccagagccctcactctcc-3' and reverse 5'-gtctactccaagccttccccctgcg-3' ) and bip ( hirota et al . 2006 ) were used . quantitative pcr was carried out at 95°c for an initial 3 min followed by 40 cycles of denaturation at 95°c for 10 s , annealing at 62°c for 15 s and extension at 72°c for 30 s using iq sybrgreen supermix ( bio - rad , hercules , ca , usa ) .
each assay was run on a bio - rad cfx real - time pcr system in triplicates and arbitrary mrna concentrations were calculated by the bio - rad software , using the relative standard curve method . relative mrna concentrations of atp5b and rpl13a ( genorm , primerdesign , southampton , uk ) were used as reference genes to calculate the normalized expression of the xbp1spl mrna . the identity of the pcr products obtained with the xbp1spl primers was verified by dna sequencing . primary bronchial epithelial cells ( pbec ) were isolated from resected lung tissue obtained from patients undergoing surgery for lung cancer as described previously ( van wetering et al . briefly , the cells were cultured in a 1:1 mixture of dmem ( invitrogen , carlbad , ca , usa ) and begm ( clonetics , san diego , ca , usa ) , supplemented with 0.4% w / v bpe , 0.5 ng / ml egf , 5 g / ml insulin , 0.1 ng / ml retinoic acid , 10 g / ml transferrin , 1 m hydrocortisone , 6.5 ng / ml t3 , 0.5 g / ml epinephrine ( all from clonetics ) , 1.5 g / ml bsa ( sigma , st louis , mo , usa ) , 1 mm hepes ( invitrogen ) , 100 u / ml penicillin and 100 g / ml streptomycin ( cambrex , east rutherford , nj , usa ) . immortalized human renal ptec ( hk-2 , kindly provided by m. ryan , university college dublin , dublin , ireland ) were grown in serum - free dmem / ham - f12 ( bio - whittaker , walkersville , md ) supplemented with 100 u / ml penicillin , 100 g / ml streptomycin ( invitrogen , breda , the netherlands ) , insulin ( 5 g / ml ) , transferrin ( 5 g / ml ) , selenium ( 5 ng / ml ) , triiodothyronine ( 40 ng / ml ) , epidermal growth factor ( 10 ng / ml ) , and hydrocortisone ( 36 ng / ml , all purchased from sigma , zwijndrecht , the netherlands ) . cells from the a549 human lung carcinoma cell line were obtained from the american type culture collection ( atcc , manassas , va ) . the cells were routinely cultured in rpmi 1640 medium ( gibco , grand island , ny ) , supplemented with 2 mm l - glutamine , 100 u / ml penicillin , 100 g / ml streptomycin ( cambrex , east rutherford , nj , usa ) and 10% ( v / v ) heat - inactivated fcs ( gibco ) at 37c in a 5% co2-humidified atmosphere . er stress was induced in epithelial cells by exposure to thapsigargin or tunicamycin . after reaching near - confluence , pbec were exposed to thapsigargin ( 50 nm , sigma ) for various time periods . for the dose response experiment pbec from 3 different donors were stimulated with various concentrations of thapsigargin or tunicamycin ( sigma ) for 6 h. dimethyl sulfoxide ( dmso ) ( merck , darmstadt , germany ) served as a solvent control for both thapsigargin and tunicamycin . the dose response experiments were repeated on hk-2 cells and a549 cells with 2 h stimulation instead of 6 h. this shorter duration of exposure in hk-2 and a549 cells was based on pilot experiments using these cell lines . after stimulation the cells were washed twice with pbs and total mrna was isolated using the rneasy mini kit ( qiagen , valencia ca , usa ) . dnase i amplification grade ( invitrogen ) was used to remove genomic dna . total rna concentration and purity next , cdna synthesis was performed with m - mlv reverse transcriptase ( promega , madison wi , usa ) . to amplify the spliced and unspliced xbp1 mrna , xbp1 primers were used as described previously ( yoshida et al . gapdh ( forward 5ggatgatgttctggagagcc3 , reverse 5catcaccatcttccaggagc3 ) was used as a loading control . 
several publications have described methods for monitoring er stress and the upr , and spliced xbp1 mrna is generally considered to be a relevant marker for er stress . however , the semi - quantitative conventional rt - pcr method is currently used to assess the splicing of the xbp1 mrna . only hirota et al . ( 2006 ) developed a quantitative real - time rt - pcr method for measuring spliced xbp1 , but this method involves an additional step with a restriction enzyme during the pcr reaction , which we consider to be more laborious and more complex . samali et al . ( 2011 ) recently reviewed methods for monitoring er stress , and recommended the analysis of spliced xbp1 by conventional rt - pcr for detection of er stress . in our study , we have designed primers that span the 26 bp intron of the xbp1 mrna , in order to amplify only the spliced xbp1 mrna . because of the similarities between the sequence of the xbp1 mrna just before this intron and the last part of the intron itself , only very few options were possible to design a specific forward primer for the xbp1spl mrna ( fig . 1 : location of the forward xbp1spl primer on the spliced and unspliced xbp1 mrna ) . the pcr products of the thapsigargin - treated as well as the dmso - treated cells both matched the spliced xbp1 mrna , and no unspliced mrna was detected with the new primers . these results also indicate that in dmso - treated cells there is a low level of spliced xbp1 mrna present . we compared the semi - quantitative conventional rt - pcr method with the newly developed quantitative real - time rt - pcr method by conducting a time course experiment with thapsigargin and dmso on pbec . in both methods , the spliced mrna was mostly expressed after 6 h , and the levels of spliced xbp1 mrna were found to be comparable between the two methods ( fig . 2 ) . next , we used the xbp1spl primers to perform a real - time rt - pcr in a dose response experiment on pbec , using various concentrations of thapsigargin as well as two concentrations of tunicamycin . in addition , chop and bip mrna was analyzed and correlated with levels of spliced xbp1 mrna ( fig . 3 ) . we found a significant and high correlation between chop and xbp1spl mrna ( r = 0.962 , p < 0.001 ) as well as between bip and xbp1spl mrna ( r = 0.884 , p < 0.001 ) .
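the correlations reported above between xbp1spl and the downstream markers can be computed from the normalized expression values with a standard pearson test ; the short sketch below shows the calculation on made - up numbers ( the arrays are placeholders , not the measured data points ) .

# pearson correlation of spliced xbp1 with chop / bip expression
# (illustrative arrays only; the study's measured values are not reproduced here)
from scipy.stats import pearsonr

xbp1_spl = [1.0, 2.3, 4.1, 6.8, 9.5, 12.0]   # normalized expression, hypothetical
chop     = [0.8, 2.0, 3.9, 6.5, 9.9, 11.4]
bip      = [1.1, 1.9, 4.5, 5.9, 8.7, 12.8]

for name, values in [("chop", chop), ("bip", bip)]:
    r, p = pearsonr(xbp1_spl, values)
    print(f"xbp1spl vs {name}: r = {r:.3f}, p = {p:.4f}")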
since both the chop and bip genes contain an erse region which is recognized by the xbp1 protein , a good correlation between spliced xbp1 and the more downstream er stress markers chop and bip was anticipated . fig . 2 ( effect of thapsigargin exposure for various time periods on xbp1 expression in pbec ) : the spliced xbp1 mrna was mostly expressed after 6 h of stimulation with 50 nm thapsigargin , as shown by conventional rt - pcr ( top ) as well as by quantitative real - time rt - pcr ( bottom ) . fig . 3 ( effect of thapsigargin and tunicamycin on markers of er stress in pbec ) : pbec were exposed to various concentrations of thapsigargin or tunicamycin for 6 h , and rna was then isolated for real - time rt - pcr - based detection of spliced xbp1 , chop and bip mrna . the results showed a dose - dependent increase in spliced xbp1 mrna after exposure to both stimuli ( a ) , which showed a significant correlation with levels of chop and bip mrna ( b ) . data are mean ± sem using cells from three different donors ( a ) , and data points represent the result of a single experiment ( b ) . to evaluate whether the new xbp1spl primers could also be used in other cell lines , experiments were performed in two different epithelial cell lines , hk-2 ( a proximal tubule epithelial cell line from normal adult human kidney ) and a549 ( a human lung carcinoma cell line with type ii alveolar epithelial cell characteristics ) . we repeated the dose response experiments on these cell lines and compared the results with those previously obtained with the pbec ( fig . 3a ) . we found a dose - dependent increase of spliced xbp1 mrna in both cell lines , similar to the results found in pbec . fig . 4 : pbec , hk-2 , and a549 cells were exposed to various concentrations of thapsigargin or tunicamycin , and rna was then isolated for real - time rt - pcr - based detection of spliced xbp1 mrna . the results showed a dose - dependent increase in spliced xbp1 mrna after exposure to both stimuli for all cell lines . data are mean ± sem using cells from three different donors ( pbec ) or three separate experiments ( hk-2 , a549 ) . in conclusion , the novel real - time rt - pcr method described in this report is a reliable quantitative method to measure spliced human xbp1 mrna as a marker for er stress .
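the specificity argument above , a forward primer whose 3′ end spans the splice junction so that it can only anneal to the spliced transcript , can also be checked in silico . the sketch below uses short made - up sequences in place of the real xbp1 cdna ; only the logic of the check is meant to be illustrative .

# check that a primer matches the spliced xbp1 sequence but not the unspliced one
# (toy sequences; substitute the real cdna and primer to perform an actual check)

def splice(unspliced: str, intron_start: int, intron_len: int = 26) -> str:
    """remove the ire1-excised intron to obtain the spliced sequence."""
    return unspliced[:intron_start] + unspliced[intron_start + intron_len:]

def primer_is_splice_specific(unspliced: str, intron_start: int, primer: str) -> bool:
    spliced = splice(unspliced, intron_start)
    return primer in spliced and primer not in unspliced

# toy example: the 'primer' spans the junction created by removing a 26-nt intron
unspliced = "aaaccgggtt" + "x" * 26 + "ttaaccggaa"
primer = "gggttttaac"   # last bases before the intron joined to the first bases after it
print(primer_is_splice_specific(unspliced, intron_start=10, primer=primer))   # True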
endoplasmic reticulum ( er ) stress is increasingly recognized as an important mechanism in a wide range of diseases including cystic fibrosis , alpha-1 antitrypsin deficiency , parkinson 's and alzheimer 's disease . therefore , there is an increased need for reliable and quantitative markers for detection of er stress in human tissues and cells . accumulation of unfolded or misfolded proteins in the endoplasmic reticulum can cause er stress , which leads to the activation of the unfolded protein response ( upr ) . upr signaling involves splicing of x - box binding protein-1 ( xbp1 ) mrna , which is frequently used as a marker for er stress . in most studies , the splicing of the xbp1 mrna is visualized by gel electrophoresis which is laborious and difficult to quantify . in the present study , we have developed and validated a quantitative real - time rt - pcr method to detect the spliced form of xbp1 mrna .
type ia supernovae ( sne ia ) are exploding stars which are good cosmological distance indicators and have been used to measure the accelerated expansion of the universe @xcite . it is widely accepted that progenitors of sne ia are mass accreting carbon - oxygen ( co ) white dwarfs ( wds ) and they explode as a sn ia when their masses reach approximately the chandrasekhar mass @xcite . two families of progenitor models have been proposed : the double - degenerate model and the single - degenerate model . for the double degenerate model , previous works indicated that the expected accretion rates may cause the accretion - induced collapse of the co wds and the formation of neutron stars @xcite . in the single - degenerate model the mass donor is a main sequence ( ms ) referred to as the wd+ms channel or a red giant ( rg ) referred to as wd+rg channel @xcite . the latter is also called a symbiotic channel . according to @xcite and @xcite , the birth rate of sne ia via the wd+ms channel can only account for up to 1/3 of @xmath4yr@xmath2 observationally estimated by @xcite and @xcite . other channels should be important . recently , @xcite showed that helium star donor channel is noteworthy for producing sne ia ( @xmath5 yr@xmath2 ) . however , helium associated with sne ia fails to be detected @xcite , and @xcite suggested that sne ia from helium star donor channel are rare . therefore , symbiotic channel is a possible candidate . by detecting na i absorption lines with low expansion velocities , @xcite suggested that the companion of the progenitor of the sn 2006x may be an early rg ( however , @xcite showed that the absorption lines detected in sn 2006x can not form in the rg wind ) . @xcite studied the pre - supernova archival x - ray images at the position of the recent sn 2007on , and they considered that its progenitor may be a symbiotic binary . unfortunately , a wd+rg binary system usually undergoes a common envelope phase when rg overflows its roche lobe in popular theoretical model . wd+rg binaries are unlikely to become a main channel for sne ia . @xcite , @xcite , @xcite and @xcite showed that the birthrate of sne ia via symbiotic channel are much lower than that from wd+ms channel . in order to stabilize the mass transfer process and avoid the common envelope , @xcite assumed a mass - stripping model in which a wind from the wds strips mass from the rg . they obtained a high birth rate ( @xmath6 yr@xmath2 ) of sne ia coming from wd+rg channel . most of previous theoretical works assumed that the cool giant in symbiotic stars shares the same mass - loss rate with field giant and the wind from the cool giant is spherical @xcite . however , based on cm and mm / submm radio observations @xcite and iras data @xcite , mass - loss rates for the symbiotic giants are systematically higher than those reported for field giants . recently , @xcite found that the rotational velocities of the giants in symbiotic stars are 1.54 times faster than those of field giants . using the relation between rotational velocity and mass - loss rate found by @xcite , @xcite estimated that the mass - loss rates of the symbiotic giants are 330 times higher than those of field giants . in addition , @xcite suggested that the bipolarity in the 2006 outburst of the recurrent nova rs oph may result from an equatorial enhancement in its cool giant . 
if the cool giant in symbiotic star has high mass - loss rate and an aspherical stellar wind , the contribution of symbiotic channel to total sne ia may be significantly enhanced . in this work , assuming an aspherical stellar wind with equatorial disk around the symbiotic giant , we show an alternative symbiotic channel to sne ia . in @xmath7 2 we present our assumptions and describe some details of the modeling algorithm . in @xmath7 3 we show wd+rg systems in which sne ia are expected . the population synthesis results are given in @xmath7 4 . main conclusions are in @xmath7 5 . for the binary evolution , we use a rapid binary star evolution code of @xcite . when the primary has become a co wd and the secondary just evolves into first giant branch ( fgb ) or asymptotic giant branch ( agb ) , an aspherical wind from the secondary is considered . in other phases of binary evolution , we adopt the descriptions of @xcite . in general , the stellar wind from a normal rg is expected to be largely spherical due to the spherical stellar surface and isotropic radiation . however , the majority ( @xmath8 ) of observed planetary nebulae are found to have aspherical morphologies @xcite . this property can be explained by two models : ( i)the generalized wind - blown bubble in which a fast tenuous wind is blown into a previously ejected slow wind ( see a review by @xcite ) ; ( ii)the interaction of the slow wind blown by an agb star with a collimated fast wind blown by its companion @xcite . in two models , the slow wind is aspherically distributed and the densest in the equatorial plane . the aspherical wind with an equatorial disk may result from the stellar rotation @xcite or a simple dipole magnetic field @xcite . @xcite showed that an anisotropic wind from agb star can be caused by combining rotation effects with the existence of an inflated atmosphere formed by the stellar pulsation . therefore , the fast rotation and strong magnetic field are crucial physical conditions for an equatorial disk . an isolated rg usually has a low rotational velocity so that its effect can be neglected . furthermore , using a traditional method applied for the solar magnetic field , @xcite estimated the magnetic activity of agb stars , and found that the level of activity expected from single agb stars is too low to explain the aspherical wind . however , the situations in symbiotic stars are different . @xcite showed that the cool companions in symbiotic systems are likely to rotate much faster than isolated rgs due to accretion , tidal interaction and back - flowing material . according to measurements of the projected rotational velocities of the cool giants 9 symbiotic stars and 28 field giants , @xcite found that the rotational velocities of the giants in symbiotic stars are 1.5 - 4 times faster than those of field giants , which confirmed the results of @xcite . stellar magnetic activity is closely related to the rotation . the cool giants with fast rotational velocities are prime candidates for possessing strong magnetic field @xcite . therefore , the cool giants in symbiotic stars , having the fast rotation velocity and strong magnetic field , can result in aspherical winds . however , in binary systems , the orbital motion , the magnetic field and the gravitational influence of the companions can complicate the structure of the outflow @xcite . a detailed structure of the winds is beyond the scope of this work . 
by assuming several parameters , we construct a primary aspherical model to describe the morphology of winds from cool giants in symbiotic stars . no comprehensive theory of mass loss for rgs exists at present . @xcite applied the mass loss laws of @xcite and @xcite to describe the mass - loss rates of fgb and agb stars , respectively . as mentioned in introduction , the mass - loss rates for the symbiotic giants are systematically higher than those reported for field giants . we use a free parameters @xmath9 to represent the enhanced times of mass - loss rates during fgb and agb phases for the symbiotic giants by @xmath10 where @xmath11 is the mass - loss rate of rg in @xcite . in the present work , the winds from cool giants in symbiotic stars flow out via two ways : an equatorial disk and a spherical wind . the total mass - loss rate @xmath12 is represented by @xmath13 where @xmath14 and @xmath15 give the mass - loss rates via the equatorial disk and the spherical wind , respectively . we assume that the ratio of the mass - loss rate in the equatorial disk to the total mass - loss rate is represented by a free parameter @xmath16 , that is , @xmath17 in this work we make different numerical simulations for a wide range of @xmath9 and @xmath16 . the mass - loss rates @xmath12 , @xmath14 and @xmath15 are affected by the rotational velocities and the magnetic activities of the rgs in symbiotic stars . the rotational velocities and the magnetic activities depend on the binary evolutionary history @xcite . in the present paper we focus on the effects of the aspherical wind with an equatorial disk on the formation of sne ia . we neglect the different rotational velocities and magnetic activities which result from the different binary evolutionary history and result in different @xmath18 , @xmath14 and @xmath15 . this work overestimates the contribution of the symbiotic stars with long orbital periods to sn ia . recently , @xcite suggested the existence of an extended zone above the agb star where parcels of gas do not reach the escape velocity . in general , the binary with an agb star have a long orbital periods . the low outflow velocity of stellar wind greatly increases the efficiency of the companion accreting stellar wind . in our work , we do not consider the extended zone above the agb star , which results in an underestimate the contribution of the symbiotic stars with long orbital periods to sn ia . therefore , it is acceptable for a primary model to neglect the binary evolutionary history and the extended zone above the agb star . in symbiotic stars wds accrete a fraction of the stellar winds from cool companions . in the present paper a stellar wind is composed of a spherical wind and an equatorial disk . for the former , according to the classical accretion formula in @xcite , @xcite gave the mean accretion rate in binaries by @xmath19 where @xmath20 is a parameter ( following @xcite , @xmath21 ) , @xmath22 is the wind velocity and @xmath23 where @xmath24 is the semi - major axis of the ellipse , @xmath25 is the orbital velocity and total mass @xmath26 . for the equatorial disk , we neglect the diffusion of equatorial disk , that is , the disk thickness is constant . the accretion rate of wd is relative to the crossing area of its gravitational radius in orbital plane , which is approximately given by : @xmath27 where @xmath28 is the gravitational radius of wd accretor . if @xmath29 , the accretion rate given by eq . ( [ eq : diska ] ) may be higher than @xmath14 . 
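a minimal sketch of the wind and accretion prescriptions described above : the field - giant mass - loss rate is scaled up by an enhancement factor for symbiotic giants , split between an equatorial disk and a spherical component , and the white dwarf then captures part of each . the reimers - type law , the bondi - hoyle - lyttleton capture estimate and the crude disk - crossing fraction used below are common textbook stand - ins rather than the exact equations of this paper , and every parameter value is illustrative .

# toy version of the two-component symbiotic wind and the wd capture rates
# (assumed reimers law, standard bhl capture estimate, crude disk geometry;
#  alpha_enh, f_disk and all numbers are illustrative, not the paper's values)
import math

G, MSUN, YR, AU, KMS = 6.674e-8, 1.989e33, 3.156e7, 1.496e13, 1.0e5  # cgs units

def reimers_mdot(lum, radius, mass, eta=0.5):
    """classical reimers rate in msun/yr (l, r, m in solar units)."""
    return 4.0e-13 * eta * lum * radius / mass

def symbiotic_wind(lum, radius, mass, alpha_enh=3.0, f_disk=0.5):
    """total, equatorial-disk and spherical mass-loss rates (msun/yr)."""
    mdot_total = alpha_enh * reimers_mdot(lum, radius, mass)
    return mdot_total, f_disk * mdot_total, (1.0 - f_disk) * mdot_total

def bhl_capture(mdot_sph, m_wd, a, v_wind, v_orb, alpha=1.5, ecc=0.0):
    """mean capture rate from the spherical wind (msun/yr in and out)."""
    mdot = mdot_sph * MSUN / YR
    v2 = v_wind**2 + v_orb**2
    rate = alpha / math.sqrt(1.0 - ecc**2) * (G * m_wd)**2 \
           / (2.0 * a**2 * v_wind * v2**1.5) * mdot
    return min(rate, mdot) * YR / MSUN

def disk_capture(mdot_disk, m_wd, a, v_rel):
    """fraction of the equatorial disk crossed by the wd's gravitational radius,
    capped so the wd never gains more than the giant loses through the disk."""
    r_grav = 2.0 * G * m_wd / v_rel**2
    return min(1.0, r_grav / (math.pi * a)) * mdot_disk

# illustrative system: 500 l_sun, 80 r_sun, 1.5 msun giant; 1.0 msun wd at 3 au
mdot_tot, mdot_disk, mdot_sph = symbiotic_wind(500.0, 80.0, 1.5)
acc_sph = bhl_capture(mdot_sph, 1.0 * MSUN, 3.0 * AU, v_wind=15 * KMS, v_orb=20 * KMS)
acc_disk = disk_capture(mdot_disk, 1.0 * MSUN, 3.0 * AU, v_rel=5 * KMS)
print(mdot_tot, acc_sph + acc_disk)

the min ( ) calls implement the cap discussed in the text , namely that the wd can not accrete more mass than the giant loses through each wind component .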
it is necessary that the wd does not accrete more mass than that lost by the rg . so we enforced the condition @xmath30 . for @xmath31 , we requested @xmath32 is not lower than @xmath33 . ( [ eq : diska ] ) overestimates the contribution of the symbiotic stars with long orbital periods to sn ia because we neglect the diffusion of equatorial disk . total mass accretion rate is @xmath34 based on eqs . ( [ eq : bona ] ) and ( [ eq : diska ] ) , accretion rate depends strongly on the wind outflow velocity @xmath22 which is not readily determined . for the spherical winds , we adopt the prescription in @xcite , and @xmath35 where @xmath36 . in general , the equatorial disk has a lower outflow velocity than the spherical wind @xcite , and the typical wind velocity of field giant is between @xmath3 5 and 30 km s@xmath2 . due to the existence of an extended zone above the agb star@xcite , the outflow velocity of the equatorial disk can be very low . in this work the outflow velocity of the equatorial disk @xmath22 is taken as 2 , 5 and 10 km s@xmath2 in different numerical simulations . when a secondary overflows its roche lobe , we assume that there is no equatorial disk for the secondary . at this time the mass transfer via the inner lagrangian point ( l@xmath37 ) can be dynamically unstable or stable . if the mass ratio of the components ( @xmath38 ) at the onset of roche lobe overflow is larger than a certain critical value @xmath39 , the mass transfer is dynamically unstable and results in the formation of a common envelope . the issue of the criterion for dynamically unstable roche lobe overflow @xmath39 is still open . @xcite did a detailed study of stability of mass transfer using polytropic models . @xcite showed that @xmath39 depends heavily on the assumed mass - transfer efficiency . recently , @xcite studied @xmath39 for dynamically stable mass transfer from a giant star to a main sequence companion . they found that @xmath39 almost linearly increases with the amount of the mass and angular momentum lost during mass transfer . in this work , for normal main sequence stars @xmath40 while @xmath41 when the secondary is in hertzsprung gap @xcite . base on the polytropic models in @xcite , @xcite gave @xmath39 for red giants by @xmath42 where @xmath43 and @xmath44 are core mass and mass of the donor , respectively . in our work , we adopt the @xmath39 of @xcite . when @xmath45 , the binary system undergoes a stable roche lobe mass transfer and the mass transfer rate is calculated by @xmath46 ^ 3 m_\odot { \rm yr}^{-1 } \label{eq : mstr}\ ] ] where @xmath47 and @xmath48 are donor s radius and roche lobe radius , respectively . details can be seen in @xcite . when @xmath49 , the binaries in which the giants overflow the roche lobe immediately evolve to the common envelope phase , while the binaries for the main sequence stars overflowing the roche lobes will undergo the dynamically stable mass transfer . instead of calculating the effects of accretion on to the wd explicitly , we adopt the prescription of @xcite for the growth of the mass of a co wd by accretion of hydrogen - rich material from its companion ( also see @xcite ) . if the mass accretion rate @xmath50 exceeds a critical value , @xmath51 , the accreted hydrogen burns steadily on the surface of the wd at the rate of @xmath51 . the unprocessed matter is assumed to be lost from the systems as an optically thick wind at a rate of @xmath52 @xcite . 
the critical mass - accretion rate is given by @xmath53 where @xmath54 is the hydrogen mass fraction and is 0.7 in this work . if the mass - accretion rate @xmath50 is less than @xmath55 but higher than @xmath56 , it is assumed that there is no mass loss and hydrogen - shell burning is steady . if @xmath50 is between @xmath57 and @xmath58 , the accreting wd undergoes very weak hydrogen - shell flashes , where we assume that the processed mass can be retained . if @xmath50 is lower than @xmath58 , hydrogen - shell flashes are so strong that no mass can be accumulated by the accreting wd . the growth rate of the mass of the helium layer on top of the co wd can be written as @xmath59 where @xmath60 when the mass of the helium layer reaches a certain value , helium is possible ignited . if helium - shell flashes occur , a part of the envelope mass is assumed to be blown off . the mass accumulation efficiency for helium - shell flashes , @xmath61 , is given by @xcite . then , the mass growth rate of the co wd , @xmath62 , is @xmath63 when the mass of accreting co wd reaches 1.378 @xmath64 , it explodes as a sn ia . c in this section , according to the assumptions in the above section , we simulate the evolutions of the binary systems with a co wd and a ms . all input parameters are the same with those in case 1 , that is , the outflow velocity of equatorial disk @xmath65 km s@xmath2 , @xmath66 and @xmath67 . the initial masses of wds are 0.76 , 0.8 , 0.9 , 1.0 , 1.1 , and 1.2 @xmath64 , respectively . the initial orbital periods are from 0.8 to 11000 days with @xmath68 days . the initial masses of mss are between 1.2 and 2.0 @xmath64 with @xmath69 when @xmath70 and @xmath71 , between 1.15 and 3.0 @xmath64 with @xmath72 when @xmath73 and @xmath74 , between 1.1 and 3.5 @xmath64 with @xmath72 when @xmath75 , between 1.0 and 8 @xmath64 with @xmath76 when @xmath77 , respectively . [ fig : mprg ] shows wd+rg binary systems in the initial orbital period - secondary masse ( @xmath78 ) plane . filled symbols give the binary systems in which co wds explode eventually as sne ia : filled squares indicate sn ia explosions during an optically thick wind phase @xmath79 ; filled circles sn ia explosions during stable hydrogen - shell burning ( @xmath80 ) ; filled triangles sn ia explosions during mildly unstable hydrogen - shell burning ( @xmath81 ) . crosses show binary systems which undergo common envelope evolution during wd+rg phases , empty squares represent binary systems which experience stable roche lobe overflows during wd+rg phases , empty circles give binary systems which are detached wd+rg systems . they can produce symbiotic phenomena while they can not explode as sne ia . the binary systems which are not plotted can not evolve to wd+rg phases . they either explode as sne ia via wd+ms channel , or become wd+dwarf or helium ms systems . according to fig . [ fig : mprg ] , the progenitors of sne ia via symbiotic channel are mainly split into left and right regions . the progenitors in the left region have short initial orbital periods . table [ tab : fgbsn ] shows an example . the secondary overflows roche lobe during hertzsprung gap . due to @xmath82 , the progenitor experiences stable mass transfer . during roche lobe overflows , the great deal matter of the secondary has been transferred to the co wd so that the co wd becomes more massive while the secondary mass decreases . when the secondary evolves to fgb phase , @xmath83 has been lower than @xmath39 . 
therefore the progenitor undergoes a stable mass transfer in fgb phase till co wd explodes as sn ia . in the whole process , there is no aspherical wind with equatorial disk from the secondary . the progenitors in the right region have long initial orbital periods so that the secondaries can evolve into agb phase . table [ tab : agbsn ] shows an example . due to the high mass - loss rate of rg and the high accretion rate of co wd resulting from an aspherical wind with equatorial disk , the ratio of mass of rg to that of co wd is lower than @xmath39 when secondary overflows its roche lobe . therefore , the binary system undergoes a stable matter transfer until co wd explodes as sn ia . in the progenitors with longer initial orbital periods , the secondaries never overflow roche lobe and the accretion of co wds mainly depends on the aspherical wind with equatorial disk . compared with the previous work @xcite , the progenitors in the present work have longer initial orbital periods ( up to @xmath3 10000 days ) and wider ranges of initial masses of secondaries ( from @xmath84 to 8 @xmath64 ) . 0.8 mm .an example for the progenitor of sn ia with a short initial orbital period . the initial mass of the co wd is 0.8 @xmath64 , the initial mass of the secondary is 1.9 @xmath64 and the initial orbital period is 1.99 days . the first column gives the evolutionary age . columns 2 , and 3 show the masses of co wd and the secondary , respectively . the letters in parentheses of column 3 mean the evolutionary phases of the secondary . ms represents the main sequence , hg for hertzsprung gap . the forth column shows the orbital period . column 5 gives the ratio of the secondary radius to its roche lobe radius . the last column shows the critical mass ratio @xmath39 . all input physical parameters are same with those in case 1 . [ cols= " < , < , < , < , < , < , < " , ] in order to investigate the birth rate of sne ia , we carry out binary population synthesis via monte carlo simulation technique . binary evolution is affected by some uncertain input parameters . in this work , the rapid binary star evolution code of @xcite is used . if any input parameter is not specially mentioned it is taken as default value in @xcite . the metallicity @xmath85=0.02 is adopted . we assume that all binaries have initially circular orbits , and we follow the evolution of both components by the rapid binary evolution code , including the effect of tides on binary evolution @xcite . for the population synthesis of binary stars , the main input model parameters are : ( i ) the initial mass function ( imf ) of the primaries ; ( ii ) the mass - ratio distribution of the binaries ; ( iii ) the distribution of orbital separations . a simple approximation to the imf of @xcite is used . the primary mass is generated using the formula suggested by @xcite @xmath86 where @xmath54 is a random variable uniformly distributed in the range [ 0,1 ] , and @xmath87 is the primary mass from @xmath88 to @xmath89 . for the mass - ratio distribution of binary systems , we consider only a constant distribution @xcite , @xmath90 where @xmath91 . the distribution of separations is given by @xmath92 where @xmath54 is a random variable uniformly distributed in the range [ 0,1 ] and @xmath24 is in @xmath93 . in order to investigate the birthrates of sne ia , we assume simply a constant star formation rate over last 15 gyr@xcite , or a single starburst@xcite . 
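the monte carlo input just described can be sketched as follows . the primary - mass expression is the widely used eggleton , fitchett & tout ( 1989 ) generating function referred to in the text ; the flat mass - ratio distribution follows the text , while the separation distribution is written here simply as flat in log a between assumed limits , as a stand - in for the paper 's form .

# monte carlo sampling of initial binary parameters for the population synthesis
# (eggleton et al. 1989 primary-mass generator; flat mass ratio; separations drawn
#  log-flat between assumed limits of ~3 and 1e4 solar radii)
import random

def sample_primary_mass():
    """primary mass in solar masses from a uniform deviate x in (0, 1)."""
    x = random.random()
    return 0.19 * x / ((1.0 - x) ** 0.75 + 0.032 * (1.0 - x) ** 0.25)

def sample_binary(m1_min=0.8, m1_max=100.0):
    m1 = sample_primary_mass()
    while not (m1_min <= m1 <= m1_max):          # keep only primaries in the grid
        m1 = sample_primary_mass()
    q = random.random()                          # flat mass-ratio distribution
    a = 10.0 ** random.uniform(0.5, 4.0)         # separation in solar radii (assumed range)
    return m1, q * m1, a

random.seed(2)
print([tuple(round(v, 2) for v in sample_binary()) for _ in range(3)])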
in the case of a constant star formation rate , we assume that a binary with its primary more massive than 0.8 @xmath64 is formed annually@xcite . in the case of a single star burst we assume a burst producing @xmath94 binary systems in which the primaries are more massive than 0.8 @xmath64 , which gives a statistical error for our monte carlo simulation lower than 5 per cent for sne ia via symbiotic channel in every case in table [ tab : cases ] . [ fig : galab ] shows the galactic birthrates of sne ia via symbiotic channel . they are between @xmath95 ( case 2 ) and @xmath96 ( case 6 ) . different outflow velocities ( @xmath97 ) of equatorial disk in cases 1 , 2 and 3 have a great effects on the birthrates with a factor of @xmath3 40 . different ratios of the mass - loss rate in the equatorial disk to total mass - loss rate in cases 1 , 4 , 5 and 6 , @xmath16s ( @xmath98 ) , give uncertainty of the birthrates within a factor of @xmath3 9 . different enhanced times of the mass - loss rate for the symbiotic giants in cases 1 , 7 and 8 , @xmath9 ( @xmath99 ) , have a weak effect within a factor of 1.6 . the observationally estimated the galactic birthrate of sne ia by @xcite and @xcite is @xmath4yr@xmath2 . in cases 3 , 5 and 6 , the contribution of sne ia via symbiotic channel to total sne ia is negligible . however , in case 2 with low outflow velocities ( @xmath100 ) and high @xmath16 ( @xmath101 ) , the contribution is approximately 1/3 , which is comparable with that via wd+ms channel in @xcite . our results are greatly affected by the low outflow velocity of equatorial disk and its mass - loss rate . c fig . [ fig : sinb ] displays the evolution of birthrates of sne ia for a single star burst of @xmath102 binary systems . using observations of the evolution of sne ia rate with redshift , @xcite suggested that sne ia have a wide range of delay time from @xmath103 gyr to @xmath104 gyr . the delay time is defined as the age at the explosion of the sn ia progenitor from its birth . in the single degenerate scenario , the delay time is closely related to the secondary lifetime and thus the initial mass of the secondary @xmath105 @xcite . in wd+ms channel , @xcite , @xcite , and @xcite showed that @xmath106 is between 2 and 3.5 @xmath64 , which indicates that the range of the delay time is from @xmath3 0.1 gyr to 1 gyr . in order to obtain a wide delay time from @xmath3 0.1 to 10 gyr , @xcite assumed that optically thick winds from the mass - accreting co wd and mass - stripping from the companion star by the wd wind . assuming an aspherical stellar wind with an equatorial disk from cool giants in symbiotic stars , we give a very wide delay time range from @xmath3 0.07 gyr to 5 gyr . sne ia with shorter delay time than 0.1 gyr result from those progenitors with more massive @xmath105 than @xmath107 , sne ia with longer delay time than 1 gyr from those progenitors with lower @xmath105 than 2@xmath64 . because the delay time determined greatly by the initial mass of the secondary , fig . [ fig : sinb ] can be easily explained by the distribution of the initial masses of the secondaries for sne ia in the next subsection . c fig . [ fig : miwd ] shows the distribution of the initial masses of the co wds that produce ultimately a sn ia according to our models . there are obviously two peaks . 
the left peak is at about 0.8 @xmath64 , and results mainly from the binary systems with short initial periods like that showed in table [ tab : fgbsn ] and with long initial periods like that showed in table [ tab : agbsn ] . the right peak is at about 1.1 @xmath64 , and mainly results from binary systems with long initial periods . [ fig : mirg ] gives the distribution of the initial masses of the secondaries for sne ia . the distribution shows three regions . the left region is between @xmath108 ( @xmath109 for case 2 ) and @xmath110 . these sne ia have delay time longer than about 1 gyr and correspond to the right peak of fig . [ fig : sinb ] . the middle region is from @xmath110 to @xmath111 , and results from the progenitors with long orbital periods . these sne ia have delay time between @xmath3 0.5 and 1.0 gyr and correspond to the middle peak of fig . [ fig : sinb ] . the right region is more massive mass than @xmath112 . these sne ia have very short delay time and correspond to the left peak of fig . [ fig : sinb ] . except case 6 , there is a peak at about @xmath113 ( also see fig . [ fig : mipi ] ) . the peak results from the binaries in which primary initial masses ( @xmath114 ) are more massive than @xmath115 , secondary initial masses are between @xmath116 and @xmath114 , and initial orbital periods are about 10000 days . for these binaries , before the primaries overflow their roche lobe during agb phase , the secondaries can accrete some material so that the ratio of primary mass to secondary mass is lower than the critical value @xmath39 . these binaries avoid the common envelope evolution . then , they have wide binary separations so that the secondaries can evolve rgs and form aspherical stellar winds . [ fig : mipi ] shows the distributions of the progenitors of sne ia , in the `` initial secondary mass initial orbital period '' plane . according to fig . [ fig : mipi ] , the distribution of the initial orbital periods should have double peaks . the evolution of progenitors with short and long orbital periods can be explained by tables [ tab : fgbsn ] and [ tab : agbsn ] , respectively . @xcite showed that the distribution of progenitors of sne ia via wd+ms channel is between @xmath3 1 days and 10 days . the orbital distribution via symbiotic channel in this work is much longer than their distribution . c sn 2002ic was the first sn ia for which circumstellar ( cs ) hydrogen has been detected unambiguously @xcite . the evolutionary origin of sn 2002ic has been investigated by @xcite on the basis of a common envelope evolution model , by @xcite on the basis of the delayed dynamical instability model of binary mass transfer , by @xcite on the basis of a recurrent nova model with a rg , by @xcite on the efficient mass - stripping from the companion star by the wd wind . spectropolarimetry observations suggest that sn 2002ic exploded inside a dense , clumpy cs environment , quite possibly with a disk - like geometry @xcite . subaru spectroscopic observations also provide evidence for an interaction of sn ejecta with a hydrogen - rich aspherical cs medium @xcite . c according to the above descriptions , the progenitor of sn 2002ic should have a large amount of cs material which has a disk - like geometry . 
in our model , the mass of the cs material is approximately calculated by @xmath117 where @xmath118 is a distance from sn ia , @xmath119 is the increasing mass of wd for a span of @xmath120 , @xmath121 and @xmath122 are the outflow velocity of the equatorial disk and spherical stellar wind , respectively . by assuming that sn ia explodes inside a spherically symmetric cs envelope , @xcite estimated the total mass of the cs material for sn 2002ic is about 0.4 @xmath64 within a radius of about @xmath123 cm , and for sn 1997cy about 6 @xmath64 within a radius of about @xmath124 cm . @xcite suggested that there are several solar masses of material asymmetrically arrayed to distances of @xmath125 cm . therefore , we select the progenitor of sn 2002ic by two conditions : ( i)the mass of cs material calculated by eq . ( [ eq : mcs ] ) being larger than @xmath126 within a radius of @xmath127 cm ; ( ii)a cs with a disk - like geometry which can interact with the ejecta of sn 2002ic . in our model there is always an equatorial disk around the secondary before it overflows its roche lobe . in fig . [ fig : cirm ] , we give the distribution of total masses of cs material for sne ia like 2002ic within a radius of @xmath127 cm . the majority of the cs material lies in the orbital plane . [ fig : mp02 ] shows gray - scale maps of initial secondary masses @xmath128 vs. initial orbital period @xmath129 distribution for the progenitors of sn 2002ic . in our model , the mass of cs material depends on the initial secondary mass . for examples , there are two peaks in fig . [ fig : cirm ] and two regions in fig . [ fig : mp02 ] for case 1 . the left peak around @xmath130 in fig . [ fig : cirm ] originates from the low region in fig . [ fig : mp02 ] , and the right peak around @xmath130 in fig . [ fig : cirm ] from the up region in fig . [ fig : mp02 ] . c sn 2002ic appeared to be a normal sn ia from @xmath3 5 to 20 days after explosion @xcite , and brightened to twice the luminosity of a normal sn ia and showed strong h@xmath131 emission around 22 days after explosion @xcite . the standard brightness during the first 20 days after explosion and the suddenness of the brightness around 22 days implied that the original sn explosion expanded into a region with little cs material , and then encountered a region with significant cs material @xcite . investigating the high - resolution optical spectroscopy at 256 d and hk - band infrared photometry at + 278 and + 380 d , @xcite suggested that there is a dense and slow - moving ( @xmath3 100 km s@xmath2 ) outflow , and a dusty cs material in sn 2002ic . in our model , a cavity with little cs material around the accreting wd results from the wd s accretion , the dense and slow - moving outflow originates from the dense equatorial disk which have collided with the ejecta from sn explosion , and dust in cs material is formed in the dense equatorial disk . the radius of the cavity is roughly equal to the roche lobe radius of the accreting wd , @xmath132 . the region out of the cavity has large amount of cs material which mainly originates from the equatorial disk . due to orbital movement , the cs material out of the cavity has complicated structure . however , the most dense region lies on the equatorial disk around the secondary . the region where the ejecta from sn 2002ic firstly encountered the significant cs material is between @xmath132 and binary separation . 
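the circumstellar - mass criterion used above for selecting sn 2002ic - like progenitors can be illustrated with a simple steady - wind bookkeeping : material leaving the giant at speed v a time t before the explosion has travelled roughly v t , so it still lies inside a radius r if v t <= r . the sketch below sums a tabulated mass - loss history for the two wind components ; it is only a schematic stand - in for the paper 's expression , with made - up numbers .

# schematic circumstellar mass within a radius r_max at the time of explosion
# (illustrative mass-loss history; not the model's actual output)

CM_PER_KM = 1.0e5
SEC_PER_YR = 3.156e7

def cs_mass_within(history, r_max_cm):
    """history entries: (t_before_explosion_yr, bin_width_yr, mdot_msun_per_yr, v_kms)."""
    total = 0.0
    for t_yr, dt_yr, mdot, v_kms in history:
        travelled_cm = v_kms * CM_PER_KM * t_yr * SEC_PER_YR
        if travelled_cm <= r_max_cm:             # this material is still inside r_max
            total += mdot * dt_yr
    return total

# made-up late mass-loss history: a slow (5 km/s) equatorial disk plus a faster
# (15 km/s) spherical wind over the last few thousand years before the explosion
history = ([(t, 100.0, 2.0e-6, 5.0) for t in range(50, 5000, 100)]
           + [(t, 100.0, 5.0e-7, 15.0) for t in range(50, 5000, 100)])
print(cs_mass_within(history, r_max_cm=2.0e16), "msun inside 2e16 cm")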
[ fig : sep02 ] shows the distributions of @xmath132 and the binary separation of the progenitors of sn 2002ic prior to their explosion as a sn ia . the observationally estimated region is @xmath133 cm away from the explosion . obviously , the majority of binary separations in our work are shorter than @xmath134 cm . a few binary separations in case 2 in which the equatorial disk has a low outflow velocity are longer than @xmath134 cm . as mentioned in [ sec : asphe ] , we do not consider an extended zone above the agb star , which results in the underestimate for contribution of the symbiotic stars with long orbital periods to sn ia . we calculate the birthrates of sne ia like sn 2002ic in the galaxy . from cases 1 to 8 , they are @xmath135 and @xmath136 , respectively . compared with @xmath4 yr@xmath2 which is the birthrates of sne ia observationally estimated in the galaxy , the birthrates of sne ia like sn 2002ic in this work are not more than 1% of the total birthrates of sne ia . sn 2002ic is very rare event in the galaxy . c @xcite detected the cs material in sn 2006x from variable na i d ( however , see @xcite ) . they found relatively low expansion velocities ( mean velocity is about @xmath137 ) , and estimated that the absorbing dust is a few @xmath138 cm from the sn . a possible interpretation given by @xcite is that the high - velocity ejecta of nova bursts in the progenitors are , by sweeping up the stellar wind of the donor star , slowed down to relatively low expansion velocities . according to the resolved light echo , @xcite fond that the illuminated dust is @xmath327170 pc . the dust surrounding sn 2006x has multiple shells @xcite , and stems from the progenitor . in addition , the dust surrounding sn 2006x is quite different from that observed in the galaxy , and has a much smaller grain size than that of typical interstellar dust @xcite . surprisingly , there is apparent presence of dust in the cs environment in the 2006 outburst of recurrent nova rs oph @xcite . @xcite and @xcite reported that the dust appears to be present in the intervals of outbursts and is not created during the outburst event . this means that there is a dense region of the wind from the rg so that the dust can be formed @xcite . according to @xcite and @xcite , the densest parts of the red - giant wind lie in the equatorial regions along the plane of the binary orbit . the dust in recurrent nova rs oph originates possibly from the dense equatorial disk around the rg . therefore , it is possible that the progenitor of sn 2006x is very similar with recurrent nova rs oph . we assume that the candidates for the progenitor of sn 2006x satisfy with following two conditions : ( i ) there is a dense equatorial disk around rg , which can produce dust ; ( ii ) the progenitor of sn 2006x has undergone a series of weak thermonuclear outbursts presented by the filled triangles in fig . [ fig : mprg ] prior to explosion as sn ia . the high - velocity ejecta from every thermonuclear outburst blow off the parts of the stellar wind including the dust produced by dense equatorial disk . during this process , the high - velocity ejecta are slowed down , and the dust can be irradiated so that the grain size becomes small . a more detailed model for sn 2006x is in preparation . according to the above conditions , we select the possible progenitors of sn 2006x from our sample which are showed in fig . [ fig : mp06 ] . 
the birthrates of sne ia like sn 2006x in the galaxy from cases 1 to 8 are @xmath139 and @xmath140 , respectively . therefore , sne ia like sn 2006x are very rare events . fig . [ fig : sin06 ] displays the evolution of birthrates of sne ia like sn 2006x for a single star burst . by a toy model in which the cool giant in symbiotic star has an aspherical stellar wind with an equatorial disk , we investigate the production of sne ia via symbiotic channel . we estimate that the galactic birthrate of sne ia via symbiotic channel is between @xmath95 ( case 2 ) and @xmath1 yr@xmath2 ( case 6 ) , the delay time of sne ia has wide range from @xmath3 0.07 to 5 gyr . the results are greatly affected by the outflow velocity and mass - loss rate of the equatorial disk . the progenitor of sn 2002ic may be a wd+rg in which there is a dense equatorial disk around the rg . the cs environment of sn 2002ic may mainly originate from the dense equatorial disk . in the progenitor of sn 2006x , the co wd has undergone a series of mildly unstable hydrogen - shell burning , and the dust in the cs environment may originate from a dense equatorial disk around the rg and is swept out by the ejecta of every thermonuclear outburst . we are grateful to the referee , n. soker , for careful reading of the paper and constructive criticism . we thank zhanwen han for some helpful suggestions . this work was supported by national science foundation of china ( grants nos . 10647003 and 10763001 ) and national basic research program of china ( 973 program 2009cb824800 ) . 99 asida s. m. , tuchman y. , 1995 , apj , 455 , 286 balick b. , 1987 , aj , 94 , 671 barry b. k. et al . , 2008 , apj , 677 , 1253 bjorkman j. e. , cassinelli j. p. , 1993 , apj , 409 , 429 bode m. f. , harman d. j. , o@xmath141brien t. j. , bond h. e. , starrfield s. , darnley m. j. , evans a. , eyres e. p. s. , 2007 , apjl , 665 , 63 boffin h. m. j. , jorissen a. , 1988 , a&a , 205 , 155 bondi h. , hoyle f. , 1944 , mnras , 104 , 273 cappellaro e. , turatto m. , 1997 in ruiz - lapuente , p. , canal , r. , & isern j. , des , thermonuclear supernovae ( kluwer , dordrecht ) , p. 77 chen w. c. , li x. d. , 2007 , apj , 658 , l51 chen x. f. , han z. , 2008 , mnras , 387 , 1416 chugai n. n. , yungelson l. r. , 2004 , astronomy letters , 30 , 65 chugai n. n. , 2008 , astronomy letters , 34 , 389 deng j. et al . , 2004 , apj , 605 , l37 eggleton p. p. , fitechett m. j. , tout c. a. , 1989 , apj , 347 , 998 evans a. et al . , 2007 , apjl , 671 , 157 frank a. , 1999 , newa rew . , 43 , 31 frankowski a. , tylenda r. , 2001 , a&a , 367 , 513 hachisu i. , kato m. , nomoto k. , 1996 , apjl , 470 , 97 hachisu i. , kato m. , nomoto k. , 1999 , apj , 522 , 487 hachisu i. , kato m. , nomoto k. , 2008a , apj , 679 , 1390 hachisu i. , kato m. , nomoto k. , 2008b , apj , 683 , 127 hamuy m. et al . , 2003 , nat . , 424 , 651 han z. , eggleton p. p. , podsiadlowski ph . , tout c. a. , 1995 , mnras , 272 , 800 han z. , eggleton p. p. , podsiadlowski ph . , tout c. a. , webbink r. f. , 2001 , in podsiadlowski ph . , rappaport s. , king a. r. , dantona f. , burderi l. , eds , evolution of binary and multiple star systems , asp conf . 229 , p. 205 han z. , podsiadlowski , ph . , maxted p. f. l. , marsh t. r. , ivanova n. , 2002 , mnras , 336 , 449 han z. , podsiadlowski ph . , 2004 , mnras , 350 , 1301 han z. , podsiadlowski ph . , 2006 , mnras , 368 , 1095 hjellming m. s. , webbink r. f. , 1987 , apj , 318 , 794 hurley j. r. , pols o. r. , tout c. a. 
, 2000 , mnras , 315 , 543 hurley j. r. , tout c. a. , pols r. , 2002 , mnras , 329 , 897 kato m. , hachisu i. , 2004 , apj , 613 , l129 kato m. , hachisu i. , kiyota s. , saio h. , 2008 , apj , 684 , 1366 kenyon s. j. , fernandez - castro t. , stencel r. e. , 1988 , aj , 95 , 1817 kotak r. , meikle w. p. s. , adamson a. , leggett s. k. , 2004 , mnras , 354 , l13 li x. d. , van den heuvel e. p. j. , 1997 , a&a , 322 , l9 livio m. , reiss a. g. , 2003 , apj , 594 , l93 l g. l. , yungelson l. , han z. , 2006 , mnras , 372 , 1389 l g. l. , zhu c. h. , han z. , wang z. j. , 2008 , apj , 683 , 990 mannucci f. , della valle m. , panagia n. , 2006 , mnras , 370 , 773 marion g. h. , hflich p. , vacca w. d. , wheeler j. c. , 2003 , apj , 591 , 316 matt s. , balick b. , winglee r. , goodson a. , 2000 , apj , 545 , 965 mattila s. , lundqvist p. , sollerman j. , kozma c. , baron e. , fransson c. , leibundgut b. , nomoto k. , 2005 , a&a , 443 , 649 mazeh t. , goldberg d. , duquennoy a. , mayor m. , 1992 , apj , 401 , 265 meng x. c. , chen x. f. , han z. , 2008 , mnras , in press(arxiv:0802.2471 ) miller g. e. , scalo j. m. , 1979 , apjs , 41 , 513 mikoajewska j. , ivison r. j. , omont a. , 2003 , edited by corradi r. l. m. , mikoajewska r. , mahoney t. j. , symbiotic stars probing stellar evolution , asp conference proceedings , vol . 303 . , p.478 nieuwenhuijzen h. , de jager c. , 1988 , a&a , 203 , 355 nomoto k. , thielemann f. , yokoi k. , 1984 , apj , 286 , 644 nomoto k. , iben i. jr . , 1985 , apj , 297 , 531 o@xmath142brien t. j. et al . , 2006 , nat . , 442 , 279 patat e. et al 2007 , science , 317 , 924 perlmutter s. et al . , 1999 , apj , 517 , 565 phillips j. p. , 1989 , in iau symp . 131 , planetary nebulae , ed . s. torres - peimbert , p.425 reimers d. , 1975 , mem.soc.r.sci.liege , 8 , 369 riess a. et al . , 1998 , aj , 116 , 1009 saio h. , nomoto k. , a&a , 150 , l21 seaquist e. r. , krogulec m. , taylor a. r. , 1993 , apj , 410 , 260 soker n. , 1992 , pasp , 104 , 923 soker n. , 1994 , mnras , 270 , 774 soker n. , 1997 , apjs , 112 , 487 soker n. , rappaport s. , 2000 , apj , 538 , 241 soker n. , 2002 , mnras , 337 , 1038 soker n. , 2008 , newa , 13 , 491 van den bergh s. , tammann g. a. , 1991 , ara&a , 29 , 363 van den heuvel e. p. j. , bhattacharya d. , nomoto k. , rappaport s. , 1992 , a&a , 262 , 97 vassiliadis e. , wood p. r. , 1993 , apj , 413 , 641 voss r. , nelemans g. , 2008 , nat . , 451 , 802 wang b. , meng x. , chen x. , han z. , 2009 , mnras , in press ( arxiv:0901.3496 ) wang l. , baade d. , hflich p. , wheeler c. , kawabata k. , nomoto k. , 2004 , apj , 604 , l53 wang x. f. et al . , 2008 , apj , 675 , 626 wang x. f. , li w. d. , filippenko a. v. , foley r. j. , smith n. , wang l. , 2008 , apj , 677 , 1060 webbink r. f. , 1988 , in mikolajewska j. , friedjung m. , kenyon s. j. , viotti r. , eds , proc . iau colloq . 103 , the symbiotic phenomenon . kluwer , dordrecht , p.311 wood - vasey w. m. , 2002 , iau circ . 8019 wood - vasey w. m. , sokoloski j. l. , 2006 , apj , 645 , l53 yaron o. , prialnik d. , kovetz a. , 2005 , apj , 623 , 398 yungelson l. , tutukov a. v. , livio m. , 1993 , apj , 418 , 794 yungelson l. , livio m. , tutukov a. v. , kenyon s. j. , 1995 , apj , 477 , 656 yungelson l. , livio m. , 1998 , apj , 497 , 168 zamanov r. k. , bode m. f. , melo c. h. f. , stateva i. k. , bachev r. , gomboc a. , konstantinova - antova r. , stoyanov k. a. , 2008 , mnras , 390 , 377 zhang f. h. , li l. f. , han z. , 2005 , mnras , 364 , 503 zuckerman b. , aller l. h. 
, 1986 , apj , 301,772
by assuming an aspherical stellar wind with an equatorial disk from a red giant , we investigate the production of type ia supernovae ( sne ia ) via symbiotic channel . we estimate that the galactic birthrate of sne ia via symbiotic channel is between @xmath0 and @xmath1 yr@xmath2 , the delay time of sne ia has wide range from @xmath3 0.07 to 5 gyr . the results are greatly affected by the outflow velocity and mass - loss rate of the equatorial disk . using our model , we discuss the progenitors of sn 2002ic and sn 2006x . [ firstpage ] binaries : symbiotic stars : evolution stars : mass loss supernovae : general
the status of the axillary lymph nodes is the most important predictor of outcome in breast carcinoma patients . axillary lymph node dissection ( alnd ) or sentinel lymph node biopsy ( slnb ) is the method used for assessing axillary nodal metastases . but during alnd , arm lymphatics are not distinguished from breast lymphatics and are many times sacrificed unnecessarily . transection of arm lymphatics during an alnd most likely results in lymphedema and is perhaps the most widely published complication of alnd . although the sentinel lymph node ( sln ) clearly reflects the status of the axillary lymph nodal basin and is less morbid , it has not prevented lymphedema . there is no doubt that lymphedema is minimized with sln dissection ( slnd ) in comparison with alnd , as highlighted in eight clinical trials comparing the slnd and alnd methods of axillary staging . rates of lymphedema with slnd were much lower than those with alnd , in the range of 0 - 13% , compared with 7 - 77% for alnd . although less , there is some risk of lymphedema with slnd as well . it is hypothesized that this higher than expected rate of lymphedema may be secondary to disruption of low - lying arm lymphatics ( i.e. , arm lymphatics lying in close proximity to breast lymphatics ) during an slnd procedure . identification and , ultimately , protection of these low - lying arm lymphatics through axillary reverse mapping ( arm ) could be a technique to prevent lymphedema . this new concept is termed arm . the goal in arm is identification and preservation of the lymph nodes draining the ipsilateral arm during a standard alnd procedure , rather than their removal . this is the reverse of slnd , where the first lymph node draining the breast is identified and removed for histopathological examination . arm involves retrieving all breast - related lymph nodes while leaving the main lymphatic drainage chain of the upper limb intact . thus , it reduces the incidence of lymphedema in breast cancer patients requiring an axillary dissection . the assumption is that the lymph nodes draining the upper limb are different from those draining the breast , and are likely to be uninvolved even in those patients with documented axillary nodal metastases . this is supported by the anatomists ' description that different groups ( the lateral / brachial group ) of axillary nodes may be involved in the lymphatic drainage of the arm . the arm concept is also a direct surgical implication of the anatomical description of the lymphatic territories of the upper limb published in 2007 by suami et al . identification of the arm node can be performed with either blue dye or radiotracers , or both . limitations of the dye method include an insufficient identification rate of the arm node ( sensitivity < 70% ) , persistent blue staining at the site of injection lasting up to 2 years , as well as allergy to blue dye . in the radiotracer method , 99mtc antimony trisulfide colloid ( particle size 0.015 - 0.3 μm ) and 99mtc nanocolloid ( particle size 0.05 - 0.8 μm ) are used in australia and europe , while 99mtc sulfur colloid , in a filtered ( 0.22 μm millipore filter ) or unfiltered form , is used in the united states . the radiopharmaceuticals described above are routinely used for lymphoscintigraphy and sln procedures , and the same can also be used for the arm procedure .
the aims and objectives of this prospective study were to assess the feasibility of detecting the arm node with filtered 99mtc - sulphur colloid , to assess the sensitivity of this technique and to establish a standard protocol of lymphoscintigraphy . we also aimed to assess the consistency in the location of the arm node and the incidence of metastases in the same . ethical committee clearance was obtained before commencing the study . patients with pathologically proven breast cancer with clinically palpable or nonpalpable axillary lymph nodes ( tx-4 , n0 - 2 , and mx-0 ) undergoing primary breast surgery plus axillary dissection were included in the study . patients who had previous surgery of the upper limb or axilla and patients who received neoadjuvant chemotherapy were excluded . less than 37 mbq of aseptically prepared filtered 99mtc sulfur colloid ( filtered with a 0.22 μm millipore filter ) in a total volume of 0.5 ml was intradermally injected in equally divided doses into the second and third web spaces of the ipsilateral hand after induction of general anesthesia , just before commencing the breast surgery . usual precautions , such as gentle shaking of the syringe prior to injection , were undertaken to avoid clumping of the colloidal particles . after injection , all patients underwent either breast conservation surgery or mastectomy as decided by a surgical oncologist . a hand - held , battery - operated , collimated gamma probe ( sinorx gamma finder , usa ) was used intraoperatively for the identification of the arm lymph node . any node with counts of at least 10 times the background counts , at a location remote from the injection site either in the axilla or along the upper limb drainage area , was considered to be an arm node . arm nodes in the axilla were identified , and their location was noted in relation to specific anatomical landmarks such as the subscapular pedicle , the second intercostobrachial ( icb ) nerve , and the axillary vein [ figure 1 ; note the axillary anatomy with the described landmarks ] . all the arm lymph node / s were successfully excised , and ex vivo counting of the lymph nodes was also performed to reassess the completeness of surgery . finally , all the excised nodes were marked and sent for histopathological examination . during each procedure , the time interval at which the first arm node was localized was noted to calculate the average transit time . for cases who received sentinel node biopsy in addition to arm , slnb was performed first , followed by arm .
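the intraoperative decision rule described above , namely that a candidate is called an arm node when its count rate reaches at least 10 times the background , is simple enough to write down explicitly . the helper below only illustrates that rule ; the count values are invented .

# flag candidate nodes whose gamma-probe counts reach 10x the background count rate
# (rule taken from the text; the count values below are invented for illustration)

def is_arm_node(node_counts, background_counts, ratio=10.0):
    return node_counts >= ratio * background_counts

background = 45          # counts at a site remote from the injection, hypothetical
candidates = {"node_a": 720, "node_b": 300, "node_c": 1150}

for name, counts in candidates.items():
    label = "arm node" if is_arm_node(counts, background) else "not significant"
    print(f"{name}: {counts} counts -> {label}")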
a total of 50 patients with breast cancer underwent the arm lymph node detection procedure from april 2012 to august 2012 . the median age was 50 years ( range : 29 - 82 years ) [ table 1 ] . the arm node could not be detected in 3 patients [ tables 2 and 3 ] . in 2/3 cases , the axillary dissection commenced immediately after the radiotracer injection ( i.e. , 20 min and 30 min , respectively ) , and no radioactive lymph node was localized in the axilla . in the third case , the fibrofatty tissue which showed elevated counts with the gamma probe did not have any lymph node on histopathological examination . in 16 cases , more than one arm node ( range : 2 - 4 , mean 2.6 nodes ) in the axilla showed elevated counts . results - location of arm nodes : in 40 out of 47 cases ( 85% ) , the location of the arm node was found to be lateral to the subscapular pedicle , above the second icb nerve and just below the axillary vein [ figure 1 ] . in 7 out of these 40 cases ( 17.5% ) , it was found at a more superficial location , but still lateral to the subscapular pedicle line . in 5/47 ( 10.6% ) cases , the node was more medially located . in 1 case , it was located on the icb nerve but lateral to the subscapular vessels . in another case , the arm node was located anterior to the subscapular pedicle [ table 3 ] . in 9/47 cases , when there was a delay in starting the axillary dissection , a second node was identified , usually along the lateral aspect of the subscapular pedicle . in 6/47 cases three nodes , and in 1/47 case four nodes , showed elevated counts . the average number of arm nodes with elevated counts , as seen during surgery , was 1.4 nodes per patient . the median number of lymph nodes removed in axillary dissection was 16 ( range : 6 - 26 ) . out of the total of 50 patients , 30 patients ( 60% ) had histopathologically proven node positivity for metastasis ( pn1 : 13 , pn2 : 11 and pn3 : 6 ) [ table 1 ] . of the 47 patients in whom arm node / s were identified , metastasis was noted in 5 of them [ table 2 ] . these 5 patients were analyzed with respect to different variables such as the quadrantic location of the primary tumor , the location of the arm node , and nodal staging , to identify if there was a correlation . ( a ) location of primary tumor - the quadrantic location of the primary tumor in the breast ( i.e. , upper inner quadrant [ uiq ] , upper outer quadrant , lower inner quadrant , lower outer quadrant [ loq ] , and central ) was variable in these 5 patients .
LOQ in 3 patients, UIQ in 1, and central in 1. (b) Location of the arm node: the arm node was situated (i) lateral to the latissimus dorsi (subscapular) pedicle, above the ICB nerve and inferior to the axillary vein in 3 patients. (c) Nodal stage: four of these 5 patients had pathological N3 disease (15/18, 14/15, 10/11, and 10/15 nodes positive out of the total number of nodes, respectively). In all 4 of these patients, axillary nodes appeared clinically enlarged up to level III; 3 of the 4 showed a solitary arm node, while 3 arm nodes were detected in the remaining patient. The fifth patient had pathological N2 disease (with 6/16 nodes positive for metastasis), and the axillary nodes appeared enlarged up to level I. Successful identification of the arm node was possible in 94% of patients; in the remaining three, the arm node could not be identified. A possible explanation for this nonlocalization is that inadequate time was allowed for the radiotracer to reach the draining node. This study also shows the feasibility of lymphoscintigraphy for identification of the arm node. The technique is easy and simple to perform, and because the radiopharmaceutical is injected after induction of anesthesia, there is no pain or discomfort to the patient. One group used blue dye in a smaller volume (0.5 cc) to reduce skin discoloration in the initial 8 patients of their series, which yielded a poor arm node identification rate. Thompson et al. described a 61% arm node detection rate with the blue dye technique in their series. Anatomically, the location of the arm node was fairly constant in 85% of the cases in our series. The most common site of this node was within the area bounded by the subscapular vessels medially, the ICB nerve inferiorly, and the axillary vein superiorly [Figure 1]. In 15% of the cases, the arm node was located medial to the subscapular pedicle, inferior to the ICB nerve, or anterior to the subscapular pedicle [Table 3]. At such locations (15% of cases), it may not be possible to preserve the arm node, because the central group of lymph nodes is closely related to the breast lymphatic drainage. The detection of second-tier nodes along the lateral aspect of the subscapular vessels, lower down, seems to indicate that this may be one of the routes the lymphatics of the arm traverse. A study by Nos et al. found 9% of arm nodes within the breast lymphatic drainage area, which is comparable to our findings; however, three other studies reported 42.7%, 33%, and 37.5% of arm nodes in the breast lymphatic drainage area, respectively. We also attempted to assess the factors that can predict metastatic involvement of arm nodes in breast cancer, and to confirm whether the arm nodes were always free of metastatic disease across all pN stages. In our series, we found that 10% of arm nodes, that is, the nodes in 5 patients, were actually positive for metastasis, four of whom had pN3 disease; this is comparable to the 14% metastatic involvement in the study by Nos et al. Other studies reported all arm nodes to be free of metastasis; however, those studies included patients with N0 nodal status. There are two explanations that can account for metastatic involvement of the arm nodes. The second explanation relates to the natural progression of metastatic disease through the nodes of the axilla. It is striking that the 5 cases with metastatic involvement of the arm nodes were found in patients who had a significant axillary tumor burden, with pN3 disease (10 or more metastatic nodes) in four of the five.
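The headline proportions quoted in the results and discussion follow directly from the raw counts reported above (47 of 50 arm nodes identified, 40 of 47 in the typical location, 5 of 47 metastatic). The short sketch below simply reproduces that arithmetic; the variable names are illustrative and nothing here is new data.

```python
# Worked arithmetic for the proportions reported above (no new data).
patients_total       = 50
arm_node_identified  = 47   # arm node localized intraoperatively
typical_location     = 40   # lateral to subscapular pedicle, above ICB nerve, below axillary vein
metastatic_arm_nodes = 5    # arm nodes positive for metastasis on histopathology

identification_rate   = arm_node_identified / patients_total        # 0.94
typical_location_rate = typical_location / arm_node_identified      # ~0.851
metastasis_rate       = metastatic_arm_nodes / arm_node_identified  # ~0.106

print(f"identification:      {identification_rate:.0%}")    # 94%
print(f"typical location:    {typical_location_rate:.0%}")  # 85%
print(f"arm-node metastasis: {metastasis_rate:.1%}")         # 10.6%, reported as ~10% in the text
```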
However, it is difficult to predict preoperatively whether the entire axilla is involved with metastatic disease, despite careful clinical examination and all available modern imaging. In 80% of these cases (4/5), lymph nodes appeared clinically enlarged up to level III intraoperatively, so careful intraoperative examination of the lymph nodes may help predict metastatic involvement of the arm node. We also evaluated other parameters, such as the quadrantic location of the primary tumor in the breast, the location of the arm node, and the primary tumor histopathology. However, the number of cases with metastatic arm nodes is small (5 cases), which is insufficient to draw any conclusions regarding the prediction of metastatic involvement. Overall, the incidence of metastasis in the arm node was low in this series (10% of arm nodes positive for metastasis). Although our study demonstrated that the radionuclide (RN) method can successfully identify the arm node, further consideration is required before clinical application of this technique, because the lymphatic channels draining the arm must be kept intact during axillary node dissection to avoid lymphedema after the dissection. The combination of ARM and SLN mapping is another issue, because arm nodes cannot be differentiated from SLNs by RN methods alone. The arm node exists, and it is feasible to identify it using a very simple radioisotope technique, which provides excellent sensitivity in detecting the arm node without any significant side effects. The protocol we followed provides good sensitivity and allows early identification of the arm node without much delay after radiotracer injection. The arm node is involved with metastasis only when there are multiple nodal metastases in the axilla. A long-term follow-up study with preservation of the arm node during axillary surgery is needed to establish whether this approach reduces the morbidity of lymphedema.
Objective: In breast cancer surgery, axillary reverse mapping (ARM) is the identification and preservation of the arm-draining lymph node (arm node) during axillary dissection. The assumption is that the arm node is different from the nodes draining the breast and is unlikely to be involved even in patients with axillary nodal metastases. If the arm node can be identified and preserved using lymphoscintigraphy, the morbidity of lymphedema seen with axillary dissection may be avoided. Materials and Methods: Fifty patients with pathologically proven breast cancer undergoing initial surgery (cTx-4, cN0-2, and Mx-0) were included in this study. Less than 37 MBq of filtered 99mTc sulfur colloid in a total volume of 0.5 ml, in equally divided doses, was injected intradermally into the second and third web spaces. Arm nodes in the axilla were identified intraoperatively with the help of a gamma probe; their location was noted with reference to specific anatomical landmarks, and the nodes were sent for histopathological examination after excision. Results: The arm node was successfully identified in 47/50 cases (sensitivity 94%). In 40 of the 47 cases (85%), the arm node was located lateral to the subscapular pedicle, above the second intercostobrachial nerve, and just below the axillary vein. Of the 47 patients in whom arm node(s) were identified, metastasis was noted in 5 (10%); four of these 5 patients had pN3 disease. Conclusion: The arm node exists, and it is feasible to identify it using a radioisotope technique with excellent sensitivity. The arm node has a fairly constant location in more than 80% of cases. It is involved with metastasis (10% of cases) only when there are multiple lymph nodal metastases in the axilla.
WASHINGTON — With the growing conviction that the Assad family’s 42-year grip on power in Syria is coming to an end, Obama administration officials worked on contingency plans Wednesday for a collapse of the Syrian government, focusing particularly on the chemical weapons that Syria is thought to possess and that President Bashar al-Assad could try to use on opposition forces and civilians. Pentagon officials were in talks with Israeli defense officials about whether Israel might move to destroy Syrian weapons facilities, two administration officials said. The administration is not advocating such an attack, the American officials said, because of the risk that it would give Mr. Assad an opportunity to rally support against Israeli interference. President Obama’s national security adviser, Thomas E. Donilon, was in Israel over the weekend and discussed the Syrian crisis with officials there, a White House official said. Mr. Obama called President Vladimir V. Putin of Russia on Wednesday and urged him again to allow Mr. Assad to be pushed from power. Russia, so far, has refused. A White House statement said that Mr. Putin and Mr. Obama “noted the growing violence in Syria and agreed on the need to support a political transition as soon as possible that achieves our shared goal of ending the violence and avoiding a further deterioration of the situation.” The statement pointedly noted the “differences our governments have had on Syria,” but said the two leaders “agreed to have their teams continue to work toward a solution.” American diplomatic and military officials said the bombing in Damascus on Wednesday that killed several of Mr. Assad’s closest advisers was a turning point in the conflict. “Assad is a spent force in terms of history,” Jay Carney, the White House press secretary, told reporters. “He will not be a part of Syria’s future.” Alluding to Russia’s position, Mr. Carney said the argument that Mr. Assad’s ouster would result in more violence was refuted by the bombing, and that Mr. Assad’s continued rule “will result in greater violence,” not less. Defense Secretary Leon E. Panetta said on Wednesday that Syria’s crisis was “rapidly spinning out of control.” Within hours of the bombing, the Treasury Department announced additional sanctions against the Syrian prime minister and some 28 other cabinet ministers and senior officials, part of the administration’s effort to make life so difficult for the government that Mr. Assad’s allies desert him. “As long as Assad stays in power, the bloodshed and instability in Syria will only mount,” said David S. Cohen, a senior Treasury official. Behind the scenes, the administration’s planning has already shifted to what to do after an expected fall of the Assad government, and what such a collapse could look like. A huge worry, administration officials said, is that in desperation Mr. Assad would use chemical weapons to try to quell the uprising. “The Syrian government has a responsibility to safeguard its stockpiles of chemical weapons, and the international community will hold accountable any Syrian official who fails to meet that obligation,” Mr. Carney said. Any benefit of an Israeli raid on Syria’s weapons facilities would have to be weighed against the possibility that the Assad government would exploit such a raid for its own ends, said Martin S. Indyk, the former United States ambassador to Israel and director of the Foreign Policy Program at the Brookings Institution. 
He and several administration officials said the view was that Mr. Assad might use chemical weapons as a last resort. “But it crosses a red line, and changes the whole nature of the discussion,” Mr. Indyk said. “There would be strong, if not overwhelming sentiment, internationally, to stop him.” Russia, in particular, would probably have to drop its opposition to tougher United Nations sanctions against Syria, and Mr. Assad’s other remaining ally, Iran, would probably not look too kindly on a chemical attack. ||||| AMMAN/CILVEGOZU, Turkey (Reuters) - Rebels seized control of sections of Syria's international borders and torched the main police headquarters in the heart of old Damascus, advancing relentlessly after the assassination of Bashar al-Assad's closest lieutenants. The battle for parts of the capital raged into the early hours of Friday, with corpses piled in the streets. In some neighborhoods residents said there were signs the government's presence was diminishing. Officials in neighboring Iraq confirmed that Syrian rebels were now in control of the Syrian side of the main Abu Kamal border checkpoint on the Euphrates River highway, one of the major trade routes across the Middle East. Rebels also claimed control of at least two border crossings into Turkey at Bab al-Hawa and Jarablus, in what appeared to have been a coordinated campaign to seize Syria's frontiers. In Damascus, a witness in the central old quarter district of Qanawat said the huge headquarters of the Damascus Province Police was black with smoke and abandoned after being torched and looted in a rebel attack. "Three patrol cars came to the site and were hit by roadside bombs," said activist Abu Rateb by telephone. "I saw three bodies in one car. Others said dozens of security men and shabbiha (pro-Assad militia) lay dead or wounded along Khaled bin al-Walid street, before ambulances took them away." The next few days will be critical in determining whether Assad's government can recover from the devastating blow of Wednesday's bombing, which wiped out much of Assad's command structure and destroyed his circle's aura of invulnerability. Assad's powerful brother-in-law, his defense minister and a top general were killed in Wednesday's attack. The head of intelligence and the interior minister were wounded. Government forces have responded by blasting at rebels in their own capital with helicopter gunships and artillery stationed in the mountains overlooking it. Assad's own failure to appear in public for more than 24 hours - he was finally shown on television on Thursday swearing in a replacement for his slain defense minister - added to the sense of his power evaporating. His whereabouts are not clear. Diplomatic efforts - rapidly overtaken by events on the ground - collapsed in disarray on Thursday when Russia and China vetoed a U.N. Security Council resolution that would have imposed sanctions unless Syrian authorities halted violence. Washington said the Council had "failed utterly". Activists in Damascus said rebels were now in control of the capital's northern Barzeh district, where troops and armored vehicles had pulled out. The army had also pulled out of the towns of Tel and Dumair north of Damascus after taking heavy losses, they said. However they said troops were hitting the western district of Mezzeh with heavy machineguns and anti-aircraft guns overnight. The reports could not be confirmed. The Syrian government restricts access by international journalists. 
A resident who toured much of Damascus late on Thursday said he saw signs that the government's presence was diminishing, with only sporadic checkpoints and tanks in place in some areas. The Interior Ministry at the main Marjeh Square had a fraction of its usual contingent of guards still in place. Shelling could be heard on the southwestern suburb of Mouadamiyeh from hills overlooking the city where the Fourth Division, commanded by Assad's brother Maher, is based, he said. Syrian television showed the bodies of about 20 men in T-shirts and jeans with weapons lying at their sides, sprawled across a road in the capital's Qaboun district. It described them as terrorists killed in battle. COORDINATION The operations to seize the border checkpoints appear to show a level of coordination and effectiveness hitherto unseen from the rebels, who have been outgunned and outnumbered by the army throughout the 16-month conflict. Footage filmed by rebels at the Bab al-Hawa border crossing with Turkey showed them climbing onto rooftops and tearing up a poster of Assad. "The crossing is under our control. They withdrew their armored vehicles," said a rebel fighter who would only be identified as Ali, being treated for wounds on the Turkish side. Two officers in the rebel Free Syrian Army said fighters were keeping themselves busy into the early hours of Friday, dismantling border computer systems, seizing security records and emptying the shelves of the duty free shop. At least 30 government tanks in the area had not mobilized to try to recapture the border post, according to Ahmad Zaidan, a senior Free Syrian Army commander. Officials in neighboring Lebanon said refugees were pouring across the frontier: a security source said 20,000 Syrians had crossed on Thursday. UTTER FAILURE Diplomacy has been largely ineffective throughout the crisis, with Western countries condemning Assad but showing no stomach for the sort of robust intervention that saw NATO bombers help blast Libya's Muammar Gaddafi from power last year. Thursday's failed U.N. Security Council resolution, which would have extended a small, unarmed U.N. monitoring mission, was the third that has been vetoed by Russia and China. The U.S. ambassador to the United Nations, Susan Rice, said the Security Council had "failed utterly", and Washington would look outside the body for ways "to bring pressure to bear on the Assad regime and to deliver assistance to those in need". To replace the vetoed text, Britain proposed a four-paragraph resolution that would at least extend the expiring mandate of the monitors for 30 days. Russia's ambassador said he would ask Moscow to consider it. (Additional reporting by Oliver Holmes, Samia Nakhoul and Dominic Evans in Beirut, Suleiman Al-Khalidi in Cilvegozu, Turkey; Writing by Peter Graff; Editing by Andrew Roche)
– Describing the suicide blast that killed Bashar al-Assad's security chiefs yesterday as a turning point, American officials are now getting ready for the Syrian regime's collapse, reports the New York Times. Pentagon officials have urged the Israelis not to attack Syrian weapons facilities, fearing that such a move would allow Assad to rally support. President Obama spoke to Vladimir Putin yesterday, but the Russian leader once again refused to allow Assad to be pushed from power. But Obama and Putin "agreed on the need to support a political transition as soon as possible" to end the violence, said a White House statement. Fierce fighting is raging for a sixth day in Damascus, sometimes in sight of the presidential palace, and Assad's whereabouts are a mystery, reports Reuters. "This is a situation that is rapidly spinning out of control," said Defense Secretary Leon Panetta, who called on the world to put the maximum pressure on Assad to step down. Russia's foreign minister described the battles in the capital—which rebels call the "liberation" of Damascus—as "the decisive fight." A human rights group says at least 214 people died in the Syrian conflict yesterday, including 60 who were killed when a helicopter gunship attacked a funeral procession in a southern suburb of Damascus, reports the BBC.
SECTION 1. FOREIGN LANGUAGE ASSISTANCE. Part B of title II of the Elementary and Secondary Education Act of 1965 (20 U.S.C. 3001 et seq.) is amended to read as follows: ``SEC. 2101. SHORT TITLE. ``This part may be cited as the `Foreign Language Assistance Act of 1993'. ``SEC. 2102. FINDINGS. ``The Congress finds that-- ``(1) foreign language proficiency is key to our Nation's international economic competitiveness, security interests and diplomatic effectiveness; ``(2) the United States lags behind other developed countries in the opportunities the United States offers elementary and secondary school students to study and become proficient in foreign languages; ``(3) more teachers must be trained for foreign language instruction in our Nation's elementary and secondary schools, and those teachers must have expanded opportunities for continued improvement of their skills; ``(4) students with proficiency in languages other than English should be viewed as valuable second language resources for other students; and ``(5) a strong Federal commitment to the purpose of this part is necessary. ``SEC. 2103. PURPOSE. ``It is the purpose of this part to improve the quantity and quality of foreign language instruction offered in our Nation's elementary and secondary schools. ``SEC. 2104. PROGRAM AUTHORIZED. ``(a) Authority.-- ``(1) Grants from the secretary.--In any fiscal year in which the appropriations for this part equal or exceed $50,000,000, the Secretary is authorized, in accordance with the provisions of this part, to award grants to States from allocations under section 2105 to pay the Federal share of the costs of the activities described in section 2107. ``(2) State grant program.--In any fiscal year in which the appropriations for this part do not equal or exceed $50,000,000, the Secretary is authorized to make grants, in accordance with the provisions of this part, to State educational agencies, local educational agencies, consortia of local educational agencies, or consortia of local educational agencies and institutions of higher education, to pay the Federal share of the cost of activities described in section 2107. ``(b) Supplement Not Supplant.--Funds provided under this part shall be used to supplement and not supplant non-Federal funds made available for the activities described in section 2107. ``(c) Duration.--Grants or contracts awarded under this part shall be awarded for a period of not longer than 5 years. ``SEC. 2105. ALLOCATION OF FUNDS. ``(a) Allocation.--From the amount appropriated under section 2113 for any fiscal year, the Secretary shall reserve-- ``(1) not more than \1/2\ of 1 percent for allocation among Guam, American Samoa, the Virgin Islands, the Northern Mariana Islands, and the Republic of Palau (until such time as the Compact of Free Association is ratified) according to their respective needs for assistance under this part; ``(2) not more than \1/2\ of 1 percent for programs for Native American students served by schools funded by the Secretary of the Interior if such programs are consistent with the purpose of this part; ``(3) 10 percent for national programs described in section 2108(a); ``(4) 5 percent for evaluation and research described in section 2108(b); and ``(5) in the case of a fiscal year in which appropriations for this part equal or exceed $50,000,000, 10 percent for bonus grants described in section 2108(c). 
``(b) Formula.--In any fiscal year in which the appropriations for this part equal or exceed $50,000,000, the remainder of the amount so appropriated (after meeting the requirements of subsection (a)) shall be allocated among the States as follows: ``(1) \1/2\ of such remainder shall be allocated among the States by allocating to each State an amount which bears the same ratio to \1/2\ of such remainder as the number of children aged 5 to 17, inclusive, in the State bears to the number of such children in all States; and ``(2) \1/2\ of such remainder shall be allocated among the States according to each State's share of allocations under chapter 1 of title I for the preceding fiscal year, except that no State shall receive less than \1/4\ of 1 percent of such remainder. ``(c) Special Rule.--The provisions of Public Law 95-134 shall not apply to assistance provided pursuant to paragraph (1) of subsection (a). ``SEC. 2106. IN-STATE APPORTIONMENT. ``(a) Funding Above $50,000,000.--In any fiscal year in which appropriations for this part equal or exceed $50,000,000, each State receiving a grant under this part shall distribute not less than 95 percent of such grant funds so that-- ``(1) 50 percent of such funds are distributed to local educational agencies within the State for instructional programs described in paragraph (1) of section 2107; and ``(2) 50 percent of such funds are distributed to local educational agencies within the State for teacher development and recruitment activities described in paragraph (2) of section 2107. ``(b) Funding Below $50,000,000.--In any fiscal year in which appropriations for this part do not equal or exceed $50,000,000, the Secretary shall award grants to State educational agencies, local educational agencies, consortia of local educational agencies, or consortia of local educational agencies and institutions of higher education, so that-- ``(1) 50 percent of the funds all such entities in a State receive shall be used for instructional programs described in paragraph (1) of section 2107; and ``(2) 50 percent of the funds all such entities in a State receive shall be used for teacher development and recruitment activities described in paragraph (2) of section 2107. ``SEC. 2107. AUTHORIZED ACTIVITIES. ``A State, State educational agency, local educational agency, consortium of local educational agencies, or consortium of a local educational agency and an institution of higher education may use payments received under this part for the following activities: ``(1) Instructional programs.--Activities which establish, improve or expand elementary or secondary school foreign language programs, including-- ``(A) elementary school immersion programs with articulation at the secondary school level; ``(B) content-based foreign language instruction; and ``(C) intensive summer foreign language programs for students. 
``(2) Teacher development and recruitment.--Activities which-- ``(A) expand or improve preservice training, inservice training and retraining of teachers of foreign languages, which training or retraining shall emphasize-- ``(i) intensive summer foreign language programs for teachers; and ``(ii) teacher training programs for elementary school teachers; ``(B) recruit qualified individuals with a demonstrated proficiency in a foreign language to teach foreign languages in elementary and secondary schools, which individuals may include-- ``(i) a retired or returning Federal Government employee who served abroad or a Federal Government employee whose position required proficiency in one or more foreign languages; ``(ii) a retired or returning Peace Corps volunteer; ``(iii) a retired or returning business person or professional who served abroad or whose position required proficiency in one or more foreign languages; ``(iv) a foreign-born national with the equivalent of a bachelor's degree from a domestic or overseas institution of higher education; ``(v) an individual with a bachelor's degree whose major or minor was in a foreign language or international studies; and ``(vi) a graduate of a fellowship or scholarship program assisted under the David L. Boren National Security Education Act of 1991 (20 U.S.C. 1901 et seq.); ``(C) develop programs of alternative teacher preparation and alternative certification to qualify such individuals to teach foreign languages in elementary and secondary schools; and ``(D) establish programs for individual foreign language teachers within a local educational agency in order to improve such teachers' teaching ability or the instructional materials used in such teachers' classrooms. ``SEC. 2108. FEDERAL ACTIVITIES. ``(a) National Programs.--From amounts reserved pursuant to section 2105(a)(3) in each fiscal year, the Secretary is authorized to make grants to State educational agencies, local educational agencies or consortia of local educational agencies to pay the Federal share of the cost of model demonstration programs that represent a variety of alternative and innovative approaches to foreign language instruction for elementary or secondary school students, including-- ``(1) two-way language programs; and ``(2) programs that integrate educational technology into curricula. ``(b) Evaluation and Research.--From amounts reserved pursuant to section 2105(a)(4) in each fiscal year, the Secretary-- ``(1) shall evaluate programs assisted under this part; and ``(2) through the Office of Educational Research and Improvement, shall award grants or enter into contracts for research, regarding-- ``(A) effective methods of foreign language learning and teaching; ``(B) assessments of elementary school foreign language programs and student skills; and ``(C) the efficacy of secondary school foreign language programs. ``(c) Bonus Grants.-- ``(1) In general.--From amounts reserved pursuant to section 2105(a)(5) in any fiscal year, the Secretary is authorized to award bonus grants to States which-- ``(A) require at least 3 years of foreign language study for all students graduating from secondary school in the State; ``(B) require at least 2 years of foreign language study prior to entrance into grade 9 in the State; ``(C) have at least 40 percent of the elementary school students in the State enrolled in foreign language instruction programs; or ``(D) have at least 70 percent of the secondary school students in the State enrolled in foreign language instruction programs. 
``(2) Amount.--Each State eligible to receive a grant under paragraph (1) in a fiscal year shall receive a grant in such fiscal year in an amount determined as follows: ``(A) 50 percent of such amount shall be determined on the basis of the number of children aged 5 to 17, inclusive, in such State compared to the number of such children in all such States. ``(B) 50 percent of such amount shall be determined on the basis of such State's share of allocations under chapter 1 of title I compared to all such States' share of such allocations. ``SEC. 2109. APPLICATIONS. ``Each State, State educational agency, local educational agency, consortium of local educational agencies, or consortium of a local educational agency and an institution of higher education, desiring assistance under this part shall submit an application to the Secretary at such time, in such form, and containing or accompanied by such information and assurances as the Secretary may reasonably require. ``SEC. 2110. PAYMENTS; FEDERAL SHARE; NON-FEDERAL SHARE; WAIVER. ``(a) Payments.--The Secretary shall pay to each eligible entity having an application approved under section 2109 the Federal share of the cost of the activities described in the application. ``(b) Federal Share.-- ``(1) In general.--The Federal share-- ``(A) for the first year for which an eligible entity receives assistance under this part shall be not more than 90 percent; ``(B) for the second such year shall be not more than 80 percent; ``(C) for the third such year shall be not more than 60 percent; and ``(D) for the fourth and any subsequent year shall be not more than 40 percent. ``(c) Non-Federal Share.--The non-Federal share of payments under this part may be in cash or in kind, fairly evaluated, including equipment or services. ``(d) Waiver.--The Secretary may waive, in whole or in part, the requirement to provide the non-Federal share of payments for any State, State educational agency, local educational agency, consortium of local educational agencies, or consortium of a local educational agency and an institution of higher education, which the Secretary determines does not have adequate resources to pay the non-Federal share of the program or activity. ``SEC. 2111. PARTICIPATION OF CHILDREN AND TEACHERS FROM PRIVATE SCHOOLS. ``(a) Participation of Private School Students.--To the extent consistent with the number of children in the State or in the school district of each local educational agency receiving assistance under this part who are enrolled in private nonprofit elementary and secondary schools, such State or agency shall, after consultation with appropriate private school representatives, make provision for including services and arrangements for the benefit of such children as will assure the equitable participation of such children in the purposes and benefits of this part. ``(b) Participation of Private School Teachers.--To the extent consistent with the number of children in the State or in the school district of a local educational agency receiving assistance under this part who are enrolled in private nonprofit elementary and secondary schools, such State or agency shall, after consultation with appropriate private school representatives, make provision, for the benefit of such teachers in such schools, for such training and retraining as will assure equitable participation of such teachers in the purposes and benefits of this part. 
``(c) Waiver.--If by reason of any provision of law a State or local educational agency is prohibited from providing for the participation of children or teachers from private nonprofit schools as required by subsections (a) and (b), or if the Secretary determines that a State or local educational agency has substantially failed or is unwilling to provide for such participation on an equitable basis, the Secretary shall waive such requirements and shall arrange for the provision of services to such children or teachers, subject to the requirements of this section. Such waivers shall be subject to consultation, withholding, notice, and judicial review requirements in accordance with section 1017 of this Act. ``SEC. 2112. DEFINITIONS. ``For the purpose of this part-- ``(1) the term `articulation' means the continuity of expectations and instruction from year to year and level to level within foreign language study; ``(2) the term `content-based foreign language instruction' means instruction in which portions of subject content from the regular school curriculum are taught or reinforced through the medium of a foreign language; ``(3) the term `foreign language instruction' means instruction in any foreign language, with emphasis on languages not frequently taught in elementary and secondary schools; ``(4) the term `immersion' means an approach to foreign language instruction in which students spend one-half or more of their school day receiving instruction in the regular school curriculum through the medium of a foreign language; ``(5) the term `intensive summer foreign language program' means a program in which participants are immersed in the foreign language for the duration of the activity; ``(6) the term `State' means each of the 50 States, the District of Columbia and the Commonwealth of Puerto Rico; and ``(7) the term `two-way language program' means a foreign language program in which native speakers of English are brought together with approximately equal numbers of speakers of another language and in which content instruction, reading and language arts are taught in both English and the non- English language, with the goal of producing students who have high levels of proficiency in English and the non-English language, appreciation for other cultures, and academic achievement at grade level expectation or above. ``SEC. 2113. AUTHORIZATION OF APPROPRIATIONS. ``There are authorized to be appropriated $75,000,000 for fiscal year 1994, and such sums as may be necessary for each of the 4 succeeding years, to carry out this part.''. S 1525 IS----2
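To make the section 2105(b) split above concrete, the sketch below allocates a hypothetical post-reservation remainder: half in proportion to each State's children aged 5 to 17, half in proportion to its prior-year chapter 1 share, with a floor of 1/4 of 1 percent of the remainder applied to the second half. The State names and figures are invented for illustration, and the bill text does not spell out how other shares are ratably reduced when the floor binds, so that step is deliberately omitted.

```python
# Illustrative sketch of the Sec. 2105(b) allocation formula (hypothetical data).
remainder = 60_000_000  # dollars remaining after the Sec. 2105(a) reservations

children_5_to_17 = {"State A": 4_000_000, "State B": 1_500_000, "State C": 100_000}
chapter1_share   = {"State A": 0.650, "State B": 0.348, "State C": 0.002}  # prior-year shares

total_children = sum(children_5_to_17.values())
floor = 0.0025 * remainder  # no State receives less than 1/4 of 1 percent of the remainder

allocations = {}
for state, kids in children_5_to_17.items():
    by_children = (remainder / 2) * kids / total_children              # Sec. 2105(b)(1)
    by_chapter1 = max((remainder / 2) * chapter1_share[state], floor)  # Sec. 2105(b)(2)
    allocations[state] = by_children + by_chapter1

for state, amount in allocations.items():
    print(f"{state}: ${amount:,.0f}")
```

In this example the floor binds only for State C, whose chapter 1 portion would otherwise fall below $150,000.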
Foreign Language Assistance Act of 1993 - Amends the Elementary and Secondary Education Act of 1965 to establish a foreign language assistance program. Authorizes the Secretary of Education to make grants: (1) as allocations to States in any fiscal year in which appropriations equal or exceed a specified amount; or (2) when appropriations are below such amount, to State educational agencies, local educational agencies (LEAs), consortia of LEAs, or consortia of LEAs and institutions of higher education. Requires that half of such funds be used for foreign language instructional programs at elementary and secondary schools and half for foreign language teacher development and recruitment. Authorizes as Federal activities: (1) grants for model demonstration programs of foreign language instruction for elementary or secondary school students; (2) evaluation and research; and (3) bonus grants to States for having specified levels of foreign language requirements or enrollments. Provides for Federal share and for participation of children and teachers from private schools. Authorizes appropriations.
SECTION 1. SHORT TITLE. This Act may be cited as the ``Forty Percent Funding of IDEA in Four Years Act'' or the ``Forty-in-Four Act''. SEC. 2. FINDINGS; PURPOSE. (a) Findings.--Congress finds the following: (1) The Federal Government appropriately requires States that accept funds under the Individuals with Disabilities Education Act (20 U.S.C. 1400 et seq.) to make available a free appropriate public education to all children with disabilities. (2) While Congress committed to contribute up to 40 percent of the national average per pupil expenditure to assist States and local educational agencies with the excess costs of educating children with disabilities, the Federal Government has never contributed more than 14.9 percent of the national average per pupil expenditure under the Individuals with Disabilities Education Act. (3) If Congress fully funded the Federal Government's obligation under the Individuals with Disabilities Education Act, States and local educational agencies would have significantly greater resources to reduce class size, improve school facilities, provide local tax relief, and otherwise redirect resources to areas based on local need. (b) Purpose.--The purpose of this Act is to provide by fiscal year 2005 40 percent of the national current average per pupil expenditure to assist States and local educational agencies with the excess costs of educating children with disabilities under part B of the Individuals with Disabilities Education Act (20 U.S.C. 1411 et seq.) SEC. 3. AMOUNT OF GRANT FOR STATES UNDER PART B OF THE INDIVIDUALS WITH DISABILITIES EDUCATION ACT. (a) In General.--Section 611(a) of the Individuals with Disabilities Education Act (20 U.S.C. 1411(a)) is amended-- (1) by striking paragraph (2); and (2) by inserting after paragraph (1) the following: ``(2) Minimum amounts.--The minimum amount of the grant a State is entitled to receive under this section is-- ``(A) the number of children with disabilities in the State who are receiving special education and related services-- ``(i) aged 3 through 5 if the State is eligible for a grant under section 619; and ``(ii) aged 6 through 21; multiplied by ``(B) the following percentages of the average current per-pupil expenditure in public elementary and secondary schools in the United States for the following fiscal years: ``(i) 20 percent for fiscal year 2002. ``(ii) 25 percent for fiscal year 2003. ``(iii) 30 percent for fiscal year 2004. ``(iv) 40 percent for fiscal year 2005 and each subsequent fiscal year. ``(3) No individual entitlement.--Paragraph (2) shall not be interpreted to entitle any individual to assistance under any State program, project, or activity funded under this part.''. (b) Conforming Amendments.--(1) Section 611 of the Individuals with Disabilities Education Act (20 U.S.C. 1411) is amended by striking subsection (j). (2) Section 611 of the Individuals with Disabilities Education Act (20 U.S.C. 
1411), as amended by paragraph (1), is further amended-- (A) in subsection (b)(1), by striking ``From the amount appropriated for any fiscal year under subsection (j), the Secretary shall reserve not more than one percent, which shall be used'' and inserting ``From the amount available for any fiscal year to carry out this part (other than section 619), the Secretary shall use not more than one percent''; (B) in subsection (c), by striking ``From the amount appropriated for any fiscal year under subsection (j), the Secretary shall reserve'' and inserting ``From the amount available for any fiscal year to carry out this part (other than section 619), the Secretary shall use''; (C) in subsection (d)-- (i) in paragraph (1)-- (I) by striking ``(1) In general.--''; and (II) by striking ``paragraph (2) or subsection (e), as the case may be'' and inserting ``subsection (e)''; and (ii) by striking paragraph (2); (D) in subsection (e)-- (i) in the heading, by striking ``Permanent''; (ii) in paragraph (1)-- (I) by striking ``subsection (d)(1)'' and inserting ``subsection (d)''; and (II) by inserting after ``subsection (j)'' the following: ``(as such subsection was in effect on the day before the date of the enactment of the Forty Percent Funding of IDEA in Four Years Act)''; and (iii) in paragraph (3)(B)-- (I) in clause (ii)-- (aa) in subclause (I)(bb), by striking ``amount appropriated under subsection (j)'' and inserting ``amount available to carry out this part (other than section 619)''; (bb) in subclause (II)(bb), by striking ``appropriated'' and inserting ``available''; and (cc) in subclause (III)(bb), by striking ``appropriated'' and inserting ``available''; and (II) in clause (iii)(II), by striking ``appropriated'' and inserting ``available''; (E) in subsection (g)-- (i) in paragraph (2)-- (I) by striking subparagraph (A); (II) by striking ``(B) Permanent procedure.--''; (III) by redesignating clauses (i) and (ii) and subclauses (I) and (II) as subparagraphs (A) and (B) and clauses (i) and (ii), respectively; and (IV) in subparagraph (B) (as redesignated), by striking ``clause (i)'' and inserting ``subparagraph (A)''; and (ii) in paragraph (3)(A)-- (I) in clause (i)(I), by striking ``appropriated'' and inserting ``available''; (II) in clause (ii), by striking ``appropriated'' and inserting ``available''; and (F) in subsection (i)(3)(A), by striking ``appropriated under subsection (j)'' and inserting ``available to carry out this part (other than section 619)''. (c) Effective Date.--The amendments made by this section shall take effect on October 1, 2001.
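The minimum-grant formula inserted by section 3(a) is the product of the number of children with disabilities receiving special education and related services (ages 3 through 5 where the State is eligible under section 619, plus ages 6 through 21), a phase-in percentage (20 percent in fiscal year 2002 rising to 40 percent from fiscal year 2005 onward), and the national average current per-pupil expenditure. The sketch below reproduces that calculation; the child counts, the $7,500 expenditure figure, and the function name are hypothetical assumptions, not figures from the bill.

```python
# Illustrative sketch of the minimum State grant under the amended Sec. 611(a)(2).
# All input figures are hypothetical.

PHASE_IN = {2002: 0.20, 2003: 0.25, 2004: 0.30}  # FY 2005 and later: 0.40

def minimum_idea_grant(children_3_to_5: int, children_6_to_21: int,
                       avg_per_pupil_expenditure: float, fiscal_year: int,
                       eligible_under_619: bool = True) -> float:
    """Minimum grant = (children served) x (phase-in %) x (average per-pupil expenditure)."""
    counted = children_6_to_21 + (children_3_to_5 if eligible_under_619 else 0)
    pct = PHASE_IN.get(fiscal_year, 0.40)  # the formula applies from FY 2002 onward
    return counted * pct * avg_per_pupil_expenditure

# Example: 10,000 children aged 3-5 plus 90,000 aged 6-21 served, against a
# hypothetical $7,500 national average current per-pupil expenditure.
print(f"FY 2002 minimum: ${minimum_idea_grant(10_000, 90_000, 7_500, 2002):,.0f}")  # $150,000,000
print(f"FY 2005 minimum: ${minimum_idea_grant(10_000, 90_000, 7_500, 2005):,.0f}")  # $300,000,000
```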
Forty Percent Funding of IDEA in Four Years Act - Forty-in-Four Act - Amends the Individuals with Disabilities Education Act (IDEA) to require specified minimum levels of Federal grant payments to States for assistance for the education of all children with disabilities, increasing funding under the Act in annual increments from 20 percent of the national average current per-pupil expenditure in FY 2002 to 40 percent in FY 2005 and afterwards.
SECTION 1. SHORT TITLE. This Act may be cited as the ``Ice Age Floods National Geologic Route Designation Act of 2006''. SEC. 2. PURPOSE. The purpose of this Act is to designate the Ice Age Floods National Geologic Route in the States of Montana, Idaho, Washington, and Oregon, enabling the public to view, experience, and learn about the Ice Age Floods' features and story through the collaborative efforts of public and private entities. SEC. 3. DEFINITIONS. As used in this Act: (1) Route.--The term ``Route'' means the Ice Age Floods National Geologic Route designated in section 4. (2) Secretary.--The term ``Secretary'' means the Secretary of the Interior. (3) Floods.--The term ``Ice Age Floods'' or ``floods'' means the cataclysmic floods that occurred in what is now the northwestern United States during the last Ice Age primarily from massive, rapid and recurring drainage of Glacial Lake Missoula. SEC. 4. DESIGNATION OF THE ICE AGE FLOODS NATIONAL GEOLOGIC ROUTE. (a) Designation.--In order to provide for the public appreciation, education, understanding, and enjoyment, through a coordinated interpretive program of certain nationally significant natural and cultural sites associated with Ice Age Floods that are accessible generally by public roads, the Secretary, acting through the Director of the National Park Service, with the concurrence of the agency having jurisdiction over such roads, is authorized to designate, by publication of a map or other description thereof in the Federal Register, a vehicular tour route along existing public roads linking such natural and cultural sites. Such route shall be known as the ``Ice Age Floods National Geologic Route''. (b) Location.--The location of the Route shall generally follow public roads and highways from the vicinity of Missoula in western Montana, across northern Idaho, through eastern and southern sections of Washington, and across northern Oregon in the vicinity of the Willamette Valley and the Columbia River to the Pacific Ocean, as generally depicted on the map titled ``Ice Age Floods National Geologic Trail'', numbered P43/80,000, and dated June 2004. (c) Maps.-- (1) Revisions.--The Secretary may revise the map by publication in the Federal Register of a notice of availability of a new map, as needed, in cooperation with Federal, State, local, or tribal governments, and other public or private entities. (2) Availability.--Any map referred to in paragraph (1) shall be on file and available for public inspection in the appropriate offices of the National Park Service. (d) Description of Sites; Plan; Interpretive Program.-- (1) Description of sites; plan.--Not later than 3 years after the date that funds become available for this Act, the Secretary shall prepare a description of sites along the Route and general plan which shall include the location and description of each of the following: (A) Unique geographic or geologic features and significant landforms. (B) Important cultural resources. (2) Interpretive program.--The general plan shall include proposals for a comprehensive interpretive program of the Route. (3) Transmission to Congress.--The Secretary shall transmit the description of sites and general plan to the Committee on Resources of the United States House of Representatives and the Committee on Energy and Natural Resources of the United States Senate. 
(4) Consultation.--The description of sites and plan shall be prepared in consultation with other Federal agencies, the State of Montana, the State of Idaho, the State of Washington, and the State of Oregon, units of local government, tribal governments, interested private citizens, nonprofit organizations, and the Ice Age Floods Institute. SEC. 5. ADMINISTRATION. (a) In General.--The Secretary, acting through the Director of the National Park Service, shall administer a program to interpret the Route in accordance with this Act. (b) Public Education.--With respect to sites linked by segments of the Route which are administered by other Federal, State, tribal, and local nonprofit or private entities, the Secretary is authorized to provide technical assistance in the development of interpretive devices and materials pursuant to cooperative agreements with such entities. The Secretary, in cooperation with Federal, State, tribal, or local governments or nonprofit or private entities, shall prepare and distribute information for the public appreciation of sites along the Route. (c) Markers.--The Secretary shall ensure that the Route is marked with appropriate markers to guide the public. With the concurrence and assistance of the State, tribal, or local entity having jurisdiction over the roads designated as part of the Route, the Secretary may erect thereon signs and other informational devices displaying the Ice Age Floods National Geologic Route marker. The Secretary is authorized to accept the donation of suitable signs and other informational devices for placement at appropriate locations. (d) Private Property Rights.--Nothing in this Act shall be construed to require any private property owner to allow public access (including Federal, State or local government access) to such private property or to modify any provision of Federal, State or local law with regard to public access to or use of private lands. SEC. 6. AUTHORIZATION OF APPROPRIATIONS. There is authorized to be appropriated to the Secretary $250,000 for each fiscal year to carry out this Act. Passed the House of Representatives September 25, 2006. Attest: KAREN L. HAAS, Clerk.
Ice Age Floods National Geologic Route Designation Act of 2006 - Authorizes the Secretary of the Interior, acting through the Director of the National Park Service (NPS), and with the concurrence of the agency having jurisdiction over such roads, to designate a vehicular tour route from Missoula, Montana, to the Pacific Ocean along existing public roads linking certain nationally significant natural and cultural sites associated with the Ice Age Floods, which shall be known as the "Ice Age Floods National Geologic Route." Requires the Secretary to prepare and transmit to specified congressional committees a description of sites along the Route and a general plan, which shall include the location and description of unique geographic or geologic features and significant landforms and important cultural resources. Requires the general plan to include proposals for a comprehensive interpretive program of the Route. Requires the Secretary of the Interior, acting through the NPS Director, to administer a program for interpretation of the Route. Authorizes the Secretary to provide other federal, state, tribal, and local nonprofit or private entities with technical assistance in developing interpretive devices and materials with respect to sites linked by segments of the Route administered by such entities. Instructs the Secretary to ensure that the Route is marked with appropriate signs and other markers to guide the public. Declares that nothing in this Act shall be construed to require any private property owner to allow public access (including federal, state, or local government access) to such private property, or to modify any provision of federal, state, or local law with regard to public access to or use of private lands. Authorizes appropriations.
Salivary gland tumors are uncommon, comprising only 1-4% of tumors of the head, face, and neck. The majority of salivary gland tumors affect the parotid gland, which accounts for more than 70% of cases. Several studies have been conducted on tumors of the parotid and minor salivary glands, but very few reports in the literature have focused on submandibular gland tumors, as they are rare and are usually grouped with those of the other salivary glands. The submandibular gland is affected in 5-10% of cases, with pleomorphic adenoma (PA) being the most common tumor. This paper describes a case of PA involving the submandibular gland and reviews benign tumors, especially PA, affecting the submandibular gland. A 42-year-old male patient presented to us with an 8-month history of swelling in the left submandibular region. Extraoral examination revealed a diffuse swelling in the left submandibular region, oval in shape and measuring approximately 7 cm × 5 cm [Figure 1: facial profile showing the swelling in the left submandibular region]. On palpation, the swelling was firm, non-tender, and mobile, with well-defined borders. A provisional diagnosis of a tumor of the left submandibular gland was made based on the history and examination findings. The patient was advised a computed tomography (CT) scan to determine the extent of the lesion. Coronal and axial CT sections revealed a well-defined heterogeneous mass involving the left submandibular gland with areas of calcification [Figures 2 and 3: coronal and axial computed tomography sections showing the well-defined heterogeneous mass involving the left submandibular gland]. The mass measured 7.2 cm × 5.5 cm and caused pressure effects on the adjacent structures. A 3D reconstructed image also showed the mass in the left submandibular region [Figure 4: 3D reconstructed image showing the mass in the left submandibular region]. Aspiration revealed features of PA. The patient underwent complete excision of the left submandibular gland [Figure 5: postoperative picture after excision of the tumor along with the gland], and the excised mass was sent for histopathology. Histopathologic sections revealed darkly stained tumor cells lying in a chondromyxoid mesenchyme [Figure 6: photomicrograph (H and E, ×10) showing darkly stained tumor cells in a predominantly mesenchymal background]. The patient was followed up for a period of 1 year, during which there was no recurrence of the tumor. Salivary gland tumors comprise only 1-4% of head and neck tumors and most commonly affect the parotid gland. Very few reports in the literature have focused on submandibular gland neoplasms, as they are rare and are usually grouped with those of the other salivary glands. The most frequent neoplasms in the submandibular gland are PA (36%), adenoid cystic carcinoma (25%), mucoepidermoid carcinoma (12%), and malignant mixed tumor (10%). Clinical reports indicate that benign neoplasms are characterized by a painless enlargement of the submandibular triangle. Becerril-Ramírez et al., in their 10-year study, found a total of 22 cases of submandibular gland neoplasms, of which 19 (86%) were benign and 3 (14%) were malignant. The most common benign neoplasm was PA, which accounted for 18 of the 19 cases. The mean age of occurrence of PA was 39.8 years, with a female-to-male ratio of 3.5:1. 
Munir and Bradley reviewed a series of pleomorphic adenomas affecting the submandibular gland over a period of 16 years, from 1988 to 2004. A total of 32 cases of submandibular gland PA were treated during this period, of which 22 (69%) were in women, and the mean age of occurrence was 54 years. All patients presented with a clinically visible and palpable mass in the submandibular fossa; 84% of cases were asymptomatic and 16% presented with pain. Another group analyzed the clinicopathologic features of 23 patients with submandibular gland tumors, of which nine were benign and 14 were malignant. They found that PA was the most frequent benign tumor and that it manifested a mild course of disease. In another series, a total of 36 patients with submandibular gland tumors were reviewed, of which 17 were benign and 19 were malignant. PA (36.1%) was the most frequent tumor, followed by adenoid cystic carcinoma (11.1%), anaplastic carcinoma (11.1%), and malignant lymphoma (11.1%). Progressive painless swelling (80.6%) was the most common mode of presentation, and the cases that presented with a painful mass (11.1%) or ulceration (8.3%) were malignant. In a Brazilian population, de Oliveira et al. found that salivary gland tumors affect females more often, with a male-to-female ratio of 1:1.5. The mean age for benign tumors was 43 years and for malignant tumors was 55 years. Another Brazilian study reviewed the clinicopathological and immunohistochemical features of 60 cases of PA and found that PA occurred most commonly between the third and fifth decades of life, with 37 of the 60 patients (62%) being women. Fine needle aspiration findings provide evidence for a preoperative diagnosis that is 70-80% accurate and also help to differentiate a tumor from inflammatory conditions or enlarged lymph nodes. The treatment of choice for submandibular gland PA is total excision of the submandibular gland along with the tumor. The recurrence rate of submandibular gland tumors is lower than that of parotid gland tumors, since the entire gland is excised. Injury to the marginal mandibular nerve is the most common complication, leading to temporary or permanent paralysis due to stretching or compression of the nerve. Although few studies have been conducted exclusively on the submandibular gland, the clinical findings in the present case are in agreement with those of the existing studies, with PA being the most common benign tumor affecting the submandibular gland, occurring commonly between the third and fifth decades of life, and presenting as a slow-growing asymptomatic swelling. However, in the present case, PA affected a male patient, although it is most common in females. Further studies exclusively involving the submandibular gland have to be carried out to better characterize the nature of the tumors affecting it.
Neoplasms that arise in the salivary glands are relatively rare, yet they represent a wide variety of both benign and malignant histologic subtypes. Approximately 70% of salivary gland tumors affect the parotid gland, with the submandibular gland being affected in 5-10% of cases, the sublingual gland in 1%, and the minor glands in 5-15% of cases. Submandibular gland tumors are relatively rare, and very few studies reported in the literature have focused exclusively on tumors affecting the submandibular gland. In this paper, we describe a case of pleomorphic adenoma affecting the submandibular gland, with a brief review of the current literature on submandibular gland tumors.
I do not know Steve Knight or the precise details of his voting record in Congress. If I did, I might not like his voting record. However, all Californians have the right to be free from offensive touching (battery), the right to be free from the threat of offensive touching (assault), and the right of self-defense. If some person was touching Congressman Knight in a way which offended him, or even threatening to touch him, the Congressman had the right to defend himself, and his use of aggressive words rather than a fist to the face needs to be commended. As a member of Congress, Congressman Knight also has the protection of Federal law making it a Federal crime to assault, batter or otherwise harm a Member of Congress. Remembering what happened to Rep. Gabby Giffords within the last several years, being shot in the head by a nutball constituent, the Capitol Hill Police, who provide protection to Members of Congress, are particularly persuasive in asking U.S. Attorneys to indict those who threaten, assault or batter Members of Congress, regardless of the seniority or party affiliation of the MOC. Unfortunately, leaders of the extreme right in California have a very long history of using physical intimidation against other white people who do not agree with their views. Way back in 1986 my spouse was in the Simi Valley City Hall, to pick up a pile of voter registration cards to be used by actual Simi Valley residents who were circulating a local ballot measure for signature and placement on the City of Simi Valley's ballot. A now well known rightie named Steve (not Knight) accosted my spouse in the lobby of the Simi Valley City Hall and demanded to know why my spouse had a handful of voter registration cards. When my spouse explained, the now well known rightie, fearing that a few people who didn't share his views might be registered to vote, shoved my spouse and tried to grab the blank voter registration cards away. Thankfully, a Simi Valley PD officer happened to be in the lobby, and fended off the rightie nutball, telling him he would be arrested if he ever interfered with someone with business in City Hall again. My spouse, the clueless innocent, had no idea who the registration card grabber was, but the cop told him the man's name and that "He's constantly a problem." In my last conversation with the late Dr. Keith Richman, former Santa Clarita Republican Assemblyman, Dr. Keith regaled me with stories of how the same right wing nutball made his life miserable over the course of many years. Despite his lighthearted mood, Dr. Keith seemed saddened by the way the right wing nutballs acted against other Republicans. Here we are 29 years after the blank voter registration card grabbing incident in Simi Valley City Hall, and the registration card grabber and harasser of Dr. Richman is now a bigwig in the right wing of the California Republican coterie. I'm sure the guy who offensively got into Steve Knight's face (committing an assault) is not the nutball from Simi Valley, but the memory of the tactics of California's extreme right unpleasantly lingers, and the tactic is apparently being taught to new generations of right wing political activists. Rep. Knight defended himself in the same way most tough guys would, and his street cred in the halls of the House side of the Capitol has been inadvertently enhanced for the benefit of his constituents. 
It's an environment where Republican Congressmen who "don't take sh*t" are admired by their peers, the press be darned, as long as the Congressman has no skeletons in his closet like Congressman Grimm. It was those skeletons, and not Rep. Grimm's physical aggressiveness, that brought him down, despite what the Democrat bloggers are saying today. It's my best advice to Rep. Knight that he avail himself of the services of the Capitol Hill Police and the U.S. Attorney to get people like the aggressive man in the cell phone video, and those like him, under control. I'm told that most of these right wing wacko types turn into little *****cats when the Feds come knocking at the front door of their doublewides. ||||| Freshman Rep. Steve Knight (R-Calif.) threatened one of a group of protesters outside his Simi Valley office last week, telling him, "If you touch me again, I'll drop your ass." The exchange was captured on video, which even a freshman member of Congress should realize was inevitable. (We saw it via SantaClarita.com.) "Mike," the man Knight confronts, has a firm grip on the congressman's hand as he says, "You told me you didn't vote for amnesty, and you did. I looked it up on the Internet. You lied to me." Then Mike forcefully pats him on the shoulder. Knight approaches him. "Mike, if you touch me again," he says, "I'll drop your ass." "I shook your hand!" Mike protests, somewhat disingenuously. The protesters were angry about their perception that Knight had voted in favor of "amnesty" for illegal immigrants. In this case, the bill at issue was H.R. 240, the legislation that ping-ponged across Capitol Hill earlier this year and that would, in its initial iteration, have revoked President Obama's executive orders on immigration as part of approving funding for the Department of Homeland Security. As originally passed by the Republican House, the bill would have met the protesters' concerns; it received a "yes" recommendation from the conservative Heritage Action. Knight voted yes on that bill. Knight's point to the protesters is that the bill needed to be amended for passage in the Senate, or DHS would shut down. "How many votes do I need in the Senate?" he asks. "Sixty," someone replies. Since the Senate needed Democratic support to pass the measure, and since Democrats opposed blocking the executive action, the Senate took out the immigration measure and sent the so-called "clean" funding bill back to the House. Then Knight's version goes sideways. "So the Senate stripped out the amnesty," he says -- with protesters interrupting to say it stripped out the "defunding of amnesty." They're correct, if that's how you choose to frame it. The bill that came back to the House (and which got a "no" recommendation from Heritage) did indeed get Knight's vote. He was one of 75 Republicans to support the measure. Knight's argument might be that he opposed "amnesty" in the sense that the protesters mean. But that's not the argument he makes. He's debating how he voted on the funding bill which, in the end, was contrary to what the protesters wanted.
You should not smack a member of Congress on the shoulder while backed up by a large angry group of people. Members of Congress, however, should not physically threaten a protester, constituent or not. Or, for that matter, misrepresent key votes.
– A freshman congressman in California is getting some unwanted attention after briefly losing his temper with a protester. Rep. Steve Knight is seen on the video posted at SantaClarita.com leaning in close to the protester and saying, "If you touch me again, I'll drop your ass." That came after the man identified only as Mike shook Knight's hand in a not-so-friendly manner and then slapped him on the shoulder a few times—all the while accusing Knight of being a liar and voting in favor of "amnesty" for illegal immigrants. After the exchange, Knight quickly regains his cool and tries to explain to the group the legislative complications surrounding the vote, which was tied into funding of Homeland Security. Philip Bump of the Washington Post explains the details for those interested, but he also finds a lesson in the encounter for both: "You should not smack a member of Congress on the shoulder while backed up by a large angry group of people. Members of Congress, however, should not physically threaten a protester, constituent or not."
In any hard process the initial interaction takes place between partons, which then turn into the final hadrons by means of the hadronization process. The space-time evolution of the hadronization process, despite its importance, is known relatively poorly. In particular, in refs. @xcite the average formation lengths of high-energy hadrons were studied based on the Lund model of hadronization @xcite. The ambiguity in the concept of formation length for composite particles was pointed out. Two different formation lengths were defined and their distributions calculated. The results were compared with the data, which allowed a suitable form for the average formation length to be chosen.

In ref. @xcite the investigation of the space-time scales of the hadronization process was continued for the concrete case of pseudoscalar mesons produced in semi-inclusive deep inelastic scattering (DIS). It was shown that the average formation lengths of these hadrons depend on their electrical charges. In particular, the average formation lengths of positively charged mesons are larger than those of negatively charged ones. This statement was verified for @xmath11 (the fraction of the virtual photon energy transferred to the detected hadron) in the current fragmentation region, for different scaling functions, for all nuclear targets, and for any value of the Bjorken scaling variable @xmath10. In all cases, the main mechanism was the direct production of pseudoscalar mesons. Including the additional mechanism of pseudoscalar meson production through the decay of resonances led to a decrease of the average formation lengths. It was shown that the average formation lengths of positively (negatively) charged mesons were slowly rising (decreasing) functions of @xmath10.

The investigation of the average formation lengths of baryons and antibaryons is the next step in the study of the space-time structure of the hadronization process. The mechanism for meson production follows rather naturally from the simple picture of a meson as a short piece of string between @xmath12 and @xmath13 endpoints. There is no unique recipe to generalize this picture to baryons. In the framework of the Lund model @xcite, a baryon in DIS can be produced in three scenarios: (i) the diquark scenario; (ii) the simple popcorn scenario; (iii) the advanced popcorn scenario. (i) Diquark picture. Baryon production may, in its simplest form, be obtained by assuming that any flavor @xmath14 produced from the color field of the string could represent either a quark or an antidiquark in a color triplet state. Then the same basic formalism can be used as in the case of meson production, supplemented with the probabilities to produce the various diquark pairs. In this simple picture the baryon and the antibaryon are produced as neighbours in rank in a string breakup.

The experimental data indicate that occasionally one or a few mesons may be produced between the baryon and the antibaryon (@xmath15) along the string. This fact was used in developing the so-called popcorn model. The popcorn model is a more general framework for baryon production, in which diquarks as such are never produced; rather, baryons appear from the successive production of several @xmath16 pairs. It is evidently the density and the size of the color fluctuations which determine the properties of the @xmath15 production process.
The density determines the rate of baryon production, but if the fluctuations are large on the scale of the meson masses it is possible that one or more mesons are produced between the @xmath15 pair. Taking into account the uncertainty principle, we can estimate the size of the color fluctuations. It turns out that there is a fast fall-off with the size of the space-time regions inside which color fluctuations may occur. Therefore, in a model of this kind, the @xmath15 produced in a pair are basically either nearest neighbours or next-nearest neighbours in rank. (ii) Simple popcorn. In this model it is assumed that at most one meson can be produced between the baryon and the antibaryon, and that the @xmath15 and @xmath17 configurations (with an additional meson between @xmath18 and @xmath19) occur with equal probability. (iii) Advanced popcorn. It is assumed that several mesons can be produced between the baryon and the antibaryon. This model has a more complicated set of parameters. In this work, for the sake of simplicity, the diquark picture will be used.

In section 2 the theoretical framework is briefly presented. In section 3 the results are presented and discussed. Section 4 contains the conclusions.

In refs. @xcite it was shown that the ratio of multiplicities for a nucleus and deuterium can be presented as a function of a single variable which has the physical meaning of the formation length (time) of the hadron. This scaling was verified, for the case of charged pions, by the HERMES experiment @xcite. The HERMES experiment is now preparing a two-dimensional analysis of nuclear attenuation data.

In the string model, for the construction of fragmentation functions, the scaling function @xmath20 is introduced (see, for instance, refs. @xcite). It is defined by the condition that @xmath21 is the probability that the first-hierarchy (rank 1) primary hadron carries away the fraction of energy @xmath11 of the initial string. We use the symmetric Lund scaling function @xcite for the calculations: @xmath22 where @xmath23 and @xmath24 are parameters of the model, @xmath25 is the transverse mass of the final hadron, and @xmath26 is a normalization factor.

In what follows we will use the average value of the formation length, defined as @xmath27.

It is convenient to begin with @xmath28 for direct production, @xmath29, which takes into account the direct production of hadrons: @xmath30 where @xmath31 is the full hadronization length, @xmath32 is the string tension (string constant), and @xmath33 is the distribution of the constituent formation length @xmath34 of hadrons carrying fractional energy @xmath11. @xmath35 @xmath36 The functions @xmath37 and @xmath38 are the probabilities that, in the electroproduction process on a proton target, the valence quark compositions for the leading (rank 1) and subleading (rank 2) hadrons will be obtained. Similar functions were obtained in @xcite for the more general case of nuclear targets. In eq. (3) the @xmath39- and @xmath40-functions arise as a consequence of the energy conservation law. The functions @xmath41 are the distributions of the constituent formation length @xmath34 of the rank-@xmath42 hadrons carrying fractional energy @xmath11. For the calculation of the distribution functions we used the recursion equation from ref. @xcite.

The simple form of @xmath20 for the standard Lund model allows one to sum the sequence of produced hadrons over all ranks (@xmath43).
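The symmetric Lund scaling function of eq. (1) is commonly written as f(z) proportional to (1/z)(1 - z)^a exp(-b m_T^2 / z). Because the paper's explicit formula and fitted parameter values sit behind placeholders, the short Python sketch below assumes that standard form, with illustrative values a = 0.3, b = 0.58 GeV^-2 and m_T = 1 GeV; it only normalizes f and evaluates the mean energy fraction, the kind of elementary building block from which the formation-length distributions are assembled.

import numpy as np
from scipy.integrate import quad

# Assumed symmetric Lund form: f(z) ~ (1/z) * (1 - z)**a * exp(-b * mT**2 / z).
# The parameter values below are illustrative guesses, not values taken from the paper.
A, B, MT = 0.3, 0.58, 1.0        # a (dimensionless), b (GeV^-2), transverse mass (GeV)

def lund_f(z, a=A, b=B, mT=MT):
    return (1.0 / z) * (1.0 - z) ** a * np.exp(-b * mT ** 2 / z)

norm, _ = quad(lund_f, 1e-6, 1.0)                       # normalization integral
mean_z, _ = quad(lambda z: z * lund_f(z), 1e-6, 1.0)
print("normalization =", 1.0 / norm, "  <z> =", mean_z / norm)

With f normalized in this way, the rank-by-rank recursion described above can be iterated numerically even when a closed-form summation over all ranks is not available.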
The analytic expression for the distribution function in this case was presented in @xcite.

Unfortunately, in the case of the more complicated scaling function presented in eq. (1), the analytic summation of the sequence of produced hadrons over all ranks is impossible. Therefore, we limited ourselves to @xmath44 in eq. (3).

In ref. @xcite sizable contributions to the spectra of pseudoscalar mesons from the decays of vector mesons were obtained. Now, using the same formalism, we will calculate the contribution of baryonic resonances to the average formation lengths of baryons.

The distribution function of the constituent formation length @xmath34 of the daughter hadron @xmath45, which arises from the decay of the parent resonance @xmath46 and carries away the fractional energy @xmath11, is denoted @xmath47. It can be computed from the convolution integral: @xmath48 @xmath49 where @xmath50 and @xmath51, and @xmath52 (@xmath53) is the maximal (minimal) fraction of the energy of the parent resonance which can be carried away by the daughter baryon.

Let us consider the two-body isotropic decay of the resonance @xmath46, @xmath54, and denote the energy and momentum of the daughter hadron @xmath45 (@xmath55 or @xmath56) in the rest system of the resonance by @xmath57 and @xmath58, respectively. In the coordinate system where the resonance has energy and momentum @xmath59 and @xmath60, @xmath61 @xmath62 where @xmath63 is the mass of the resonance @xmath46. In the laboratory (fixed-target) system the resonance usually moves fast, i.e. @xmath64.

The constants @xmath65 can be found from the branching ratios of the decay process @xmath66. We will present their values for the cases of interest below.

The distributions @xmath67 are determined from the decay of the resonance R with momentum @xmath0 into the hadron @xmath45 with momentum @xmath68. We assume that the momentum @xmath0 is much larger than the masses and transverse momenta involved.

In analogy with eq. (2), we can write the expression for the average formation length @xmath69 of the daughter baryon @xmath45 produced in the decay of the parent resonance @xmath46 in the form: @xmath70

Some explanation is needed here. We can formally consider @xmath69 as the formation length of the daughter baryon @xmath45 for two reasons: (i) the parent resonance and the daughter hadron are hadrons of the same rank, which share a common constituent quark; (ii) beginning from this distance the chain consisting of prehadron, resonance and final baryon @xmath45 interacts (in the nuclear medium) with hadronic cross sections.

The general formula for @xmath28 in the case when a few resonances contribute can be written in the form: @xmath71 @xmath72 where @xmath73 (@xmath74) is the probability that the @xmath75 system turns into a baryon (baryonic resonance). For the @xmath4 and @xmath8 resonances, taking into account the decuplet/octet suppression and the extra @xmath76 suppression following from the mass differences @xcite, the condition @xmath74 = @xmath73 = 0.5 is used.

Let us now discuss the details of the model which are necessary for the calculations. We will consider several widely used species of baryons (antibaryons), namely @xmath0 (@xmath1), @xmath2 (@xmath3), @xmath4 (@xmath5), @xmath6 (@xmath7) and @xmath8 (@xmath9), electroproduced on proton, neutron and nuclear targets. The scaling function @xmath20 in eq. (1) has two free parameters @xcite, @xmath77 and @xmath78.
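Returning to the two-body decay kinematics of eqs. (5) and (6) before listing the remaining parameters: for a parent resonance moving much faster than the masses involved, the energy fraction of the daughter in an isotropic decay ranges over z = (E* + p* cos theta*)/M_R, so its bounds are (E* - p*)/M_R and (E* + p*)/M_R. The sketch below evaluates these bounds; the specific decay assignments in the comments are my own guesses, chosen because they reproduce the kinds of values quoted in the results section.

import math

def z_bounds(M_R, m_h, m_other):
    # Energy-fraction range of daughter h in the two-body decay R -> h + other,
    # for a parent moving much faster than the masses involved.
    E_star = (M_R**2 + m_h**2 - m_other**2) / (2.0 * M_R)   # daughter energy in the R rest frame
    p_star = math.sqrt(max(E_star**2 - m_h**2, 0.0))        # daughter momentum in the R rest frame
    return (E_star - p_star) / M_R, (E_star + p_star) / M_R

print(z_bounds(1.232, 0.938, 0.140))   # Delta(1232) -> p pi : roughly (0.60, 0.97)
print(z_bounds(0.775, 0.140, 0.140))   # rho(770) -> pi pi   : roughly (0.035, 0.965)

The narrow window for a heavy daughter is what later lets the text argue that baryonic resonances barely distort the formation-length distributions, in contrast to the much wider window for pions from light-meson decays.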
The next parameter necessary for calculations in the framework of the string model is the string tension. It was fixed at the static value determined by the Regge trajectory slope @xcite: @xmath79 Now let us turn to the functions @xmath37 and @xmath38, which have the physical meaning of the probabilities to produce, on a proton target, a hadron h of first and second rank, respectively. For pseudoscalar mesons they were presented in ref. @xcite. For baryons these functions have a more complicated structure; therefore it is convenient to present here the final expressions, which were obtained after short calculations. For protons and antiprotons of first and second rank they have the form @xmath80 @xmath81 @xmath82 The coefficients for the first-rank neutron (antineutron), @xmath83 (@xmath84), can be obtained from the corresponding expressions for the proton (antiproton) by exchanging the distribution functions in the numerators, (@xmath85) @xmath86 (@xmath87) ((@xmath88, @xmath89) @xmath86 (@xmath89, @xmath88)); the second-rank coefficients are equal to those for the proton, @xmath90 = @xmath91 = @xmath92. For @xmath6 and @xmath7 they are: @xmath93 @xmath94 @xmath95 For the @xmath4 resonances these coefficients are: @xmath96 @xmath97 @xmath98 @xmath99 @xmath100 where @xmath101 is the Bjorken scaling variable; @xmath102, where @xmath12 is the 4-momentum of the virtual photon; @xmath103 is the proton mass; and @xmath104, where @xmath105 are the quark (antiquark) distribution functions of the proton. It is easy to see that the functions @xmath106 for hadrons of higher rank (@xmath107) coincide with those for the second-rank hadron, @xmath108. This fact was already used in the construction of eq. (3). For neutron and nuclear targets the more general functions @xmath109 from @xcite can be used; similar functions can be obtained for the @xmath5, @xmath8 and @xmath9 resonances. All calculations were performed at a fixed value of @xmath110 equal to @xmath111. Calculations of the @xmath11-dependence were performed at a fixed value of @xmath112 equal to @xmath113, which corresponds to @xmath114. The leading-order parameterizations of the quark (antiquark) distributions in the proton were taken from @xcite. We assume that the new @xmath115 pairs are @xmath116 with probability @xmath117, @xmath118 with probability @xmath119 and @xmath120 with probability @xmath121. It follows from isospin symmetry that @xmath122. In the diquark scenario we also need the probabilities for producing, in the color field of the string, diquark-antidiquark pairs of different content: (i) diquark-antidiquark pairs of light quarks with spin (S) and isospin (I) S = I = 0, @xmath123 = @xmath124; (ii) diquark-antidiquark pairs of light quarks with S = I = 1, @xmath125 = @xmath126 = @xmath127 = @xmath128; (iii) diquark-antidiquark pairs containing a strange quark (antiquark), @xmath129 = @xmath130 = @xmath131. We use the relations between the different quantities and the set of values for @xmath132 from the Lund model @xcite: @xmath128 = 0.15 @xmath124; @xmath131 = 0.12 @xmath124; @xmath124 = 0.1 @xmath133.

We take into account that part of the baryons can be produced from the decay of baryonic resonances. As possible sources of the @xmath0, @xmath2 (@xmath1, @xmath3) and @xmath6 (@xmath7) baryons we consider the @xmath4 (@xmath5) and @xmath8 (@xmath9) baryonic resonances, respectively. The contributions of other resonances are neglected.
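Before turning to the decay distributions, a quick orientation on the scales involved. With the string tension fixed from the Regge slope in the usual way, kappa = 1/(2 pi alpha'), and the full hadronization length taken to grow as nu/kappa (an assumption consistent with the string picture described here; the paper's fixed value of the energy transfer sits behind a placeholder, so the sample values below are my own choices):

import math

HBARC = 0.19733          # GeV*fm
ALPHA_PRIME = 0.9        # Regge trajectory slope in GeV^-2 (typical value, my choice)

kappa = 1.0 / (2.0 * math.pi * ALPHA_PRIME)     # string tension, ~0.18 GeV^2
kappa_per_fm = kappa / HBARC                    # ~0.9 GeV/fm

for nu in (2.5, 10.0, 100.0):                   # sample virtual-photon energies in GeV
    print(f"nu = {nu:6.1f} GeV  ->  full hadronization length ~ {nu / kappa_per_fm:5.1f} fm")

This is the quantitative content behind the remark in the conclusions that, at fixed Bjorken x, the average formation length grows with the energy transfer and can become much larger than nuclear sizes.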
The decay distributions @xmath135 are determined from the decay of the resonance R with momentum p into the hadron h with momentum zp.

In refs. @xcite they were presented for the case of pseudoscalar mesons; here we will use similar functions for baryons. The common expression @xmath136 will be used for all the baryonic resonances considered. The values of @xmath52 and @xmath53 are easy to obtain from eqs. (5) and (6).

For protons we have @xmath137, @xmath138, @xmath139; for neutrons @xmath140, @xmath141, @xmath142; for @xmath6 from the @xmath143 decay we have @xmath144.

In fig. 1 the average formation lengths for electroproduction of baryons and antibaryons on a proton target, normalized to @xmath145, are presented as functions of @xmath11. Panel a shows the average formation lengths for protons and antiprotons: the contributions of direct protons (dashed curves) as well as of the sum of direct protons and protons produced from the decay of @xmath4 resonances (solid curves) are shown; the upper curves represent the average formation lengths for protons and the lower curves those for antiprotons. Panel b shows the results for the neutron and antineutron in the same approach. The other panels (c, d, e, f) show the results for the baryonic resonances in the direct-production approach: panel c for the @xmath4 resonances, panel d for the @xmath5 resonances, panel e for the @xmath8 resonances, and panel f for the @xmath9 resonances. All results were obtained in the framework of the symmetric Lund model. In fig. 2 the average formation lengths for electroproduction of @xmath6 and @xmath7 on a proton target, normalized to @xmath145, are presented as functions of @xmath11. The contributions of directly produced hadrons as well as of the sum of direct hadrons and hadrons produced from the decay of @xmath146 (@xmath147) resonances are shown; the upper curves represent the formation lengths of @xmath6 and the lower curves those of @xmath7. The first observation that can be made from figs. 1 and 2 is that the contribution of the resonances is small, so small that it practically does not change the result. The second observation is that for baryons having several charge states (@xmath4 and @xmath8) the following rule holds: a larger charge corresponds to a larger average formation length. The third observation is that all antibaryons, independent of charge and other quantum numbers, have practically the same average formation lengths.

We already discussed in @xcite why the average formation lengths of positively charged hadrons are larger than those of negatively charged ones. This happens due to the large probability of knocking out a @xmath148 quark in DIS (even in the case of a neutron target). The knocked-out quark enters the composition of the leading hadron, which has the maximal formation length. From figs. 1 and 2 it is easy to see that all antibaryons, independent of electrical charge and other quantum numbers, have approximately the same average formation lengths. The cause of this is that all antibaryons consist of "sea" partons, antiquarks, and with large probability they are hadrons of second or higher rank.
We can compare the antibaryons with the @xmath149 meson, which has an average formation length smaller than the other pseudoscalar mesons, because it is constructed from "sea" quarks only and practically cannot be a leading hadron, whereas the other pseudoscalar mesons can be leading hadrons due to the @xmath148 or @xmath150 quarks entering their composition. We would like to point out one interesting feature of the baryonic resonances. Let us calculate from eqs. (5) and (6) the quantities @xmath53 and @xmath52 for the decays @xmath151 and @xmath152. We obtain for the first case @xmath53 = 0.6, @xmath52 = 0.97 and for the second case @xmath53 = 0.88, @xmath52 = 1, which means that in the case of baryons the integration over @xmath153 in eq. (4) is performed over a narrow enough region, i.e. the contribution of the resonances does not distort the distribution of the formation lengths of the hadrons. For comparison, we recall @xcite that for the decay @xmath154 one has @xmath53 = 0.035, @xmath52 = 0.965, which leads to a distortion of the pion spectra.

It is worth noting that the results for the deuteron coincide, in our approach, with the results for any nucleus with @xmath155, where @xmath156 (@xmath26) is the number of protons (neutrons). The average formation lengths of hadrons on the krypton nucleus, which has a substantial excess of neutrons, do not differ considerably from those on nuclei with @xmath155. In fig. 3 the average formation lengths for electroproduction of protons and antiprotons on different targets, normalized to @xmath145, are presented as functions of @xmath11. In fig. 4 the same is shown for neutrons and antineutrons. From figs. 3 and 4 we learn that the average formation length of protons (neutrons) reaches its maximal value on a proton (neutron) target, which is easy to explain: when the target hadron and the final hadron are of the same kind, the final hadron has the maximal chance of being leading. Another observation, not as obvious as the previous one, is that the antiproton has its minimal average formation length on a proton target (this may be connected with the large denominator in the function @xmath157). In fig. 5 the average formation lengths for electroproduction of protons and antiprotons on a proton target, normalized to @xmath145, are presented as functions of @xmath10. Fig. 6 shows the same for neutrons and antineutrons. The average formation lengths of protons (antiprotons) are slowly rising (decreasing) functions of @xmath10. They differ significantly at intermediate @xmath11, i.e. at intermediate @xmath11 antiprotons will attenuate in nuclei considerably more strongly than protons. The average formation lengths of neutrons and antineutrons are both slowly decreasing functions of @xmath10, and their values are close for all values of @xmath11 and over the whole region of @xmath10.

We have obtained, for the first time, the average formation lengths for different baryons (antibaryons) in the electroproduction process on proton, neutron, deuteron and krypton targets, in the framework of the symmetric Lund model. It is worth noting that the results for the deuteron coincide, in our approach, with the results for any nucleus with @xmath155, where @xmath156 (@xmath26) is the number of protons (neutrons).
The average formation lengths of baryons on the krypton nucleus, which has a substantial excess of neutrons, do not differ considerably from those on nuclei with @xmath155. The main conclusions are:
(i) the average formation lengths of several widely used species of baryons (antibaryons), namely @xmath0 (@xmath1), @xmath2 (@xmath3), @xmath4 (@xmath5), @xmath6 (@xmath7) and @xmath8 (@xmath9), are obtained for the first time for the case of the symmetric Lund model;
(ii) the average formation lengths of baryons and antibaryons produced in semi-inclusive deep inelastic scattering of leptons on different targets depend on their electrical charges or, more precisely, on their quark contents;
(iii) the contributions of the @xmath4 (@xmath5) resonances in the case of protons (antiprotons) and neutrons (antineutrons), and of the @xmath8 (@xmath9) resonances in the case of @xmath6 (@xmath7), are considered; it is found that their contributions are small, i.e. in the case of baryons the production from resonances is essentially weaker than in the case of mesons;
(iv) the average formation lengths of protons (antiprotons) are slowly rising (decreasing) functions of @xmath10, while the average formation lengths of neutrons and antineutrons are slowly decreasing functions of @xmath10;
(v) the shape and behavior of the average formation lengths for baryons qualitatively coincide with those for pseudoscalar mesons obtained earlier @xcite.

It is worth noting that in the string model the formation length of the leading (rank 1) hadron @xmath158 does not depend on the type of process or on the kinds of hadron and target. Therefore, the dependence of the obtained results on the type of process and on the kinds of targets and observed hadrons is mainly due to the presence of higher-rank hadrons.

How large can the average formation length become? At fixed @xmath10 it is proportional to @xmath110. Consequently it rises with @xmath110 and can reach sizes much larger than nuclear sizes at very high energies.

At present, hadronization in the nuclear medium is widely studied both experimentally and theoretically. It is well known that there is nuclear attenuation of final hadrons. Unfortunately, it is not clear which is the true mechanism of this attenuation: final-state interactions of prehadrons and hadrons in the nucleus (absorption mechanism), or gluon bremsstrahlung of the partons (produced in DIS) in the nuclear medium, with hadronization taking place far beyond the nucleus (energy-loss mechanism). We hope that the results obtained in the previous works @xcite and in this work can be useful for the understanding of this problem.

References:
T. Chmaj, Acta Phys. Polon. B18 (1987) 1131
A. Bialas, M. Gyulassy, Nucl. Phys. B291 (1987) 793
B. Andersson et al., Phys. Rep. 97 (1983) 31
L. Grigoryan, Phys. Rev. C81 (2010) 045207
P. Eden, G. Gustafson, Z. Phys. C75 (1997) 41
T. Sjöstrand et al., Comput. Phys. Commun. 135 (2001) 238
N. Akopov, L. Grigoryan, Z. Akopov, Phys. Rev. C76 (2007) 065203
N. Akopov, L. Grigoryan, Z. Akopov, arXiv:0810.4841 [hep-ph]
L. Grigoryan, Phys. Rev. C80 (2009) 055209
A. Airapetian et al., Nucl. Phys. B780 (2007) 1
B. Andersson, G. Gustafson and C. Peterson, Nucl. Phys. B135 (1978) 273
N. Akopov, G. Elbakian, L. Grigoryan, hep-ph/0205123
P.V. Chliapnikov, Phys. Lett. B462 (1999) 341
B. Kopeliovich, J. Nemchik, preprint JINR E2-91-150 (1991); preprint INFN-ISS 91/3 (1991), Roma
M. Glück, E. Reya, A. Vogt, Z. Phys. C67 (1995) 433
A. Accardi et al., Nucl. Phys. A761 (2005) 67
In this work the investigation of the space-time scales of the hadronization process is continued in the framework of the string model. The average formation lengths of several widely used species of baryons (antibaryons), namely @xmath0 (@xmath1), @xmath2 (@xmath3), @xmath4 (@xmath5), @xmath6 (@xmath7) and @xmath8 (@xmath9), are studied. It is shown that they depend on the electrical charges or, more precisely, on the quark contents of the hadrons. In particular, the average formation lengths of positively charged hadrons, for example protons, are considerably larger than those of their negatively charged antiparticles, antiprotons. This statement holds for all nuclear targets and any value of the Bjorken scaling variable @xmath10. The main mechanism is direct production. The additional production mechanism through the decay of resonances gives a small contribution. It is shown that the average formation lengths of protons (antiprotons) are slowly rising (decreasing) functions of @xmath10, while those of neutrons and antineutrons are slowly decreasing functions of @xmath10. The shape and behavior of the average formation lengths for baryons qualitatively coincide with those obtained earlier for pseudoscalar mesons.
A link map is a (continuous) map @xmath3 from a union of spheres into another sphere such that @xmath4 for @xmath5. Two link maps are said to be link homotopic if they are connected by a homotopy through link maps, and the set of link homotopy classes of link maps as above is denoted @xmath6. It is a familiar result that @xmath7 is classified by the linking number, and in his foundational work Milnor @xcite described invariants of @xmath8 which classified @xmath9. These invariants (the @xmath10-invariants) were refined much later by Habegger and Lin @xcite to achieve an algorithmic classification of @xmath8. Higher-dimensional link homotopy began with a study of @xmath11 when @xmath12, first by Scott @xcite and later by Massey and Rolfsen @xcite. Both papers made particular use of a generalization of the linking number, defined as follows. Given a link map @xmath13, choose a point @xmath14 \smallsetminus f(s^p \cup s^q) and identify @xmath15 \smallsetminus \infty with @xmath16. When @xmath17, the map @xmath18 is nullhomotopic on the subspace @xmath19 and so determines an element @xmath20. When @xmath21, the link homotopy invariant @xmath22 is the integer-valued linking number. In a certain dimension range, @xmath22 was shown in @xcite to classify _embedded_ link maps @xmath23 up to link homotopy. Indeed, historically, link homotopy roughly separated into settling two problems.

1. Decide when an embedded link map is link nullhomotopic.
2. Decide when a link map is link homotopic to an embedding.

In a large metastable range this approach culminated in a long exact sequence which reduced the problem of classifying @xmath11 to standard homotopy-theory questions (see @xcite). On the other hand, four-dimensional topology presents unique difficulties, and link homotopy of 2-spheres in the 4-sphere requires different techniques. In this setting, the first problem listed above was solved by Bartels and Teichner @xcite, who showed that an embedded link @xmath24 is link nullhomotopic. In this paper we are interested in invariants of @xmath25 which have been introduced to address the second problem. Fenn and Rolfsen @xcite showed that @xmath22 defines a surjection @xmath26 and in doing so constructed the first example of a link map @xmath1 which is not link nullhomotopic. Kirk @xcite generalized this result, introducing an invariant @xmath0 of @xmath25 which further obstructs embedding and surjects onto an infinitely generated group. To a link map @xmath27, where we use signs to distinguish the component 2-spheres, Kirk defined a pair of integer polynomials @xmath28 such that each component is invariant under link homotopy of @xmath29, determines @xmath30, and vanishes if @xmath29 is link homotopic to a link map that embeds _either_ component. It is an open problem whether @xmath0 is the complete obstruction; that is, whether @xmath31 implies that @xmath29 is link homotopic to an embedding. By (theorem 5 of @xcite), this is equivalent to asking whether @xmath0 is injective on @xmath25. Seeking to answer in the negative, Li proposed an invariant @xmath32 to detect link maps in the kernel of @xmath0.
When @xmath33, after a link homotopy the restricted map @xmath34 \smallsetminus f(s^2_\mp) may be equipped with a collection of Whitney disks, and @xmath35 obstructs embedding by counting weighted intersections between @xmath36 and the interiors of these disks. Precise definitions of these invariants will be given in section [sec:prelims]. By @xcite (and @xcite), @xmath2 is an invariant of link homotopy, but the example produced in @xcite of a link map @xmath29 with @xmath31 and @xmath37 was found to be in error by Pilz @xcite. The purpose of this paper is to prove that @xmath2 cannot detect such examples; indeed, it is a weaker invariant than @xmath0.

[thm:mainresult] Let @xmath29 be a link map with @xmath38 and let @xmath39 be integers so that @xmath40. Then @xmath41 where the sum is over all @xmath42 equal to @xmath43 modulo @xmath44.

Consequently, there are infinitely many distinct classes @xmath45 with @xmath46, @xmath47 but @xmath48 (see proposition [eq:pk-image]). In particular, the following corollary answers question 6.2 of @xcite.

[coro:maincoro] If a link map @xmath29 has @xmath31, then @xmath49.

By (@xcite, theorem 1.3) and (@xcite, theorem 2), theorem [thm:mainresult] may be interpreted geometrically as follows. Let @xmath29 be a link map such that @xmath46. Then, after a link homotopy, the self-intersections of @xmath50 may be paired up with framed, immersed Whitney disks in @xmath51 \smallsetminus f(s^2_-) whose interiors are disjoint from @xmath50. This paper is organized as follows. In section [sec:prelims] we first review Wall intersection theory in the four-dimensional setting. The geometric principles thus established underlie the link homotopy invariants @xmath0 and @xmath2, which we subsequently define. In section [sec:proof] we exploit the fact that, up to link homotopy, one component of a link map is unknotted, immersed, to equip this component with a convenient collection of Whitney disks which enable us to relate the invariants @xmath0 and @xmath2. A more detailed outline of the proof may be found at the beginning of that section.

Let us first fix some notation. For an oriented path or loop @xmath22, let @xmath52 denote its reverse path; if @xmath22 is a based loop, let @xmath53 denote its based homotopy class. Let @xmath54 denote composition of paths, and denote the interval @xmath55 by @xmath56. Let @xmath57 denote equivalence modulo @xmath43. In what follows we assume all manifolds are oriented and equipped with basepoints; specific orientations and basepoints will usually be suppressed. The link homotopy invariants investigated in this paper are closely related to the algebraic "intersection numbers" @xmath58 and @xmath59 introduced by Wall @xcite. For a more thorough exposition of the latter invariants in the four-dimensional setting, see chapter 1 of @xcite, on which our definitions are based. Suppose @xmath60 and @xmath61 are properly immersed, self-transverse 2-spheres or 2-disks in a connected 4-manifold @xmath62. (By self-transverse we mean that self-intersections arise precisely as transverse double points.) Suppose further that @xmath60 and @xmath61 are transverse and that each is equipped with a path (a _whisker_) connecting it to the basepoint of @xmath62.
For an intersection point @xmath63, let @xmath64 \in \pi_1(y) denote the homotopy class of a loop that runs from the basepoint of @xmath62 to @xmath60 along its whisker, then along @xmath60 to @xmath65, then back to the basepoint of @xmath62 along @xmath61 and its whisker. Define @xmath66 to be @xmath67 or @xmath68 depending on whether or not, respectively, the orientations of @xmath60 and @xmath61 induce the orientation of @xmath62 at @xmath65. The Wall intersection number @xmath69 is defined by the sum @xmath70 \lambda(a,b)[x] in the group ring @xmath71, and is invariant under homotopy rel boundary of @xmath60 or @xmath61 (@xcite, proposition 1.7), but depends on the choice of basepoint of @xmath62 and the choices of whiskers and orientations. For an element @xmath72 in @xmath71 (@xmath73, @xmath74), define @xmath75 by @xmath76. From the definition it is readily verified that @xmath77, and that the following observations, which we record for later reference, hold.

[prop:lambda-product] If @xmath78, then the product of @xmath79-elements @xmath64 \overline{(\lambda(a,b)[y])} is represented by a loop that runs from the basepoint to @xmath60 along its whisker, along @xmath60 to @xmath65, then along @xmath61 to @xmath80, and back to the basepoint along @xmath60 and its whisker. Moreover, if @xmath62 has abelian fundamental group, then this group element does not depend on the choice of whiskers and basepoint.

[prop:lambda-restricted] If @xmath81 is an immersed 2-disk that is equipped with the same whisker and oriented consistently with @xmath60, then for each @xmath82 we have @xmath64 = \lambda(d_a,b)[x] and @xmath66 = {\operatorname{sign}}_{d_a,b}[x].

The intersection numbers respect sums in the following sense. Suppose that @xmath60 and @xmath61 as above are 2-spheres, and suppose there is an embedded arc @xmath83 from @xmath60 to @xmath61, with interior disjoint from both. Let @xmath84 be a path that runs along the whisker for @xmath60, then along @xmath60 to the initial point of @xmath83, and let @xmath85 be a path that runs from the endpoint of @xmath83 along @xmath61 and its whisker to the basepoint of @xmath62. Form the connected sum @xmath86 of @xmath60 and @xmath61 along @xmath83 in such a way that the orientations of each piece agree with the result. Equipped with the same whisker as @xmath61, the 2-sphere @xmath86 represents the element @xmath88 in the @xmath71-module @xmath87, where @xmath89 \in \pi_1(y) (since both are whiskered, we permit ourselves to confuse them with their respective homotopy classes in @xmath87). If @xmath90 is an immersed 2-disk or 2-sphere in @xmath62 transverse to @xmath60 and @xmath61, then @xmath91. The additive inverse @xmath92 is represented by reversing the orientation of @xmath60. Allowing @xmath60 again to be a self-transverse 2-disk or 2-sphere, the Wall self-intersection number @xmath93 is defined as follows. Let @xmath94 be a map with image @xmath60, where @xmath95 or @xmath96. Let @xmath65 be a double point of @xmath60, and let @xmath97, @xmath98 denote its two preimage points in @xmath99. If @xmath100, @xmath101 are disjoint neighborhoods of @xmath97, @xmath98 in @xmath102, respectively, that do not contain any other double-point preimages, then the embedded 2-disks @xmath103 and @xmath104 in @xmath60 are said to be two different _branches_ (or sheets) intersecting at @xmath65.
Let @xmath105 \in \pi_1(y) denote the homotopy class of a loop that runs from the basepoint of @xmath62 to @xmath60 along its whisker, then along @xmath60 to @xmath65 through one branch @xmath103, then along the other branch @xmath104 and back to the basepoint of @xmath62 along the whisker of @xmath60. (Such a loop is said to _change branches_ at @xmath65.) Define @xmath106 to be @xmath67 or @xmath68 depending on whether or not, respectively, the orientations of the two branches of @xmath60 intersecting at @xmath65 induce the orientation of @xmath62 at @xmath65. In the group ring @xmath71, let @xmath107 \mu(a)[x], where the sum is over all such self-intersection points. (Note that it may sometimes be more convenient to write @xmath108.) For a fixed whisker of @xmath60, changing the order of the branches in the above definition replaces @xmath105 by its @xmath79-inverse, so @xmath93 is only well-defined in the quotient @xmath109 of @xmath71, viewed as an abelian group, by the subgroup @xmath110. The equivalence class of @xmath93 in this quotient group is invariant under regular homotopy rel boundary of @xmath60. Note also that if the 4-manifold @xmath62 has abelian fundamental group, then @xmath93 does not depend on the choice of whisker. Let @xmath111 denote the signed self-intersection number of @xmath60. The _reduced_ Wall self-intersection number @xmath112 may be defined by @xmath113 It is an invariant of _homotopy_ rel boundary (@xcite, proposition 1.7); this observation derives from the fact that non-regular homotopy takes the form of local "cusp" homotopies, which may each change @xmath93 by @xmath114 (see @xcite, section 1.6).

We now recall the definitions of the link homotopy invariants @xmath0 of Kirk @xcite and @xmath2 of Li @xcite. Let @xmath27 be a link map. After a link homotopy (in the form of a perturbation) of @xmath29 we may assume the restriction @xmath115 \smallsetminus f(s^2_{\mp}) to each component is a self-transverse immersion. Let @xmath116 \smallsetminus f(s^2_-) and choose a generator @xmath117 for @xmath118, which we write multiplicatively. For each double point @xmath119 of @xmath50, let @xmath120 be a simple circle on @xmath50 that changes branches at @xmath119 and does not pass through any other double points. We call @xmath120 an _accessory circle_ for @xmath119. Letting @xmath121, one defines @xmath122 in the ring @xmath123 of integer polynomials, where the sum is over all double points of @xmath50, and to simplify notation we write @xmath124. Reversing the roles of @xmath125 and @xmath126, we similarly define @xmath127 and write @xmath128 \oplus \z[s]. Kirk showed in @xcite that @xmath0 is a link homotopy invariant, and in @xcite that if @xmath29 is link homotopic to a link map for which one component is embedded, then @xmath31. Let @xmath129 denote the Hurewicz map. Referring to the definition of @xmath59 in the preceding section as applied to the map @xmath130, observe that @xmath131 carries @xmath132 to the ring of integer polynomials @xmath123, and Kirk's invariant @xmath133 is given by @xmath134. As in @xcite, we say that @xmath29 is @xmath135-_good_ if @xmath136 and the restricted map @xmath137 is a self-transverse immersion with @xmath138. We say that @xmath29 is _good_ if it is both @xmath139- and @xmath140-good.
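As bookkeeping for the definitions just recalled, the sketch below assembles an integer Laurent polynomial from double-point data. It assumes the commonly cited form of Kirk's invariant, sigma_+(f) = sum over double points p of sign(p) (t^{n_p} - 1), with n_p the linking number of the accessory circle of p with f(S^2_-); since the displayed formulas here sit behind placeholders, this exact expression, and every name in the code, is an assumption of mine rather than a quotation of the paper.

from collections import defaultdict

def sigma_plus(double_points):
    # Assemble sum_p sign_p * (t**n_p - 1) as a dict {exponent: coefficient}.
    # double_points: iterable of (sign, n) pairs, one per double point of f(S^2_+),
    # where n is the linking number of its accessory circle with f(S^2_-).
    coeffs = defaultdict(int)
    for sign, n in double_points:
        coeffs[n] += sign          # the t**n term
        coeffs[0] -= sign          # the "- 1" term
    return {e: c for e, c in coeffs.items() if c != 0}

# Example: two positive double points with n = 2 and one negative double point with n = 1.
print(sigma_plus([(+1, 2), (+1, 2), (-1, 1)]))   # {2: 2, 0: -1, 1: -1}

By construction each summand vanishes at t = 1, so the coefficients always sum to zero, and double points whose accessory circles do not link the other component contribute nothing; that is the qualitative behaviour one expects of an obstruction which vanishes when a component can be embedded.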
The displayed equation for @xmath133 has the following consequence.

[prop:sigma-mu] If @xmath29 is a @xmath135-good link map, then @xmath141.

The invariant @xmath142 obstructs, up to link homotopy, pairing up the double points of @xmath36 with Whitney disks in @xmath143. While the essential purpose of Whitney disks is to embed (or separate) surfaces (see @xcite, section 1.4), our focus will be on their _construction_ for the purposes of defining certain invariants. In the setting of link maps, the following standard result (phrased in the context of link maps) is the key geometric insight behind all the invariants we discuss in this paper and will find later application.

[lem:linking-number-whitney-circle] Let @xmath29 be a link map such that @xmath126 is a self-transverse immersion, and suppose @xmath144 are a pair of oppositely signed double points of @xmath145. Let @xmath146 and @xmath147 each be an embedded 2-disk neighborhood of @xmath144 on @xmath145 such that @xmath146 and @xmath147 intersect precisely at these two points. On @xmath145, let @xmath148 be loops based at @xmath149 (respectively) that leave along @xmath146 and return along @xmath147. Let @xmath150, @xmath151 be oriented paths in @xmath152 (respectively) that run from @xmath153 to @xmath154. Then the oriented loop @xmath155 satisfies @xmath156

Wishing to obtain a "secondary" obstruction, in @xcite Li proposed the following @xmath157-valued invariant to measure intersections between @xmath50 and the interiors of these disks. Suppose @xmath29 is a @xmath140-good link map with @xmath158. The double points of @xmath159 may then be labeled @xmath160 so that @xmath161 and @xmath162; consequently, by lemma [lem:linking-number-whitney-circle] we may let @xmath163 and @xmath164 so that if @xmath165 is an arc on @xmath166 connecting @xmath167 to @xmath168 (and missing all other double-point preimages) and @xmath169 is an arc on @xmath166 connecting @xmath170 to @xmath171 (and missing @xmath165 and all other double-point preimages), then the loop @xmath172 is nullhomologous, hence nullhomotopic, in @xmath173. Let @xmath174 (respectively, @xmath175) be a neighborhood of @xmath165 (respectively, @xmath169) in @xmath166. The arcs @xmath176 and neighborhoods @xmath177 may be chosen so that the collection @xmath178 is mutually disjoint, and so that the resulting _Whitney circles_ @xmath179 are mutually disjoint, simple circles in @xmath173 such that each bounds an immersed _Whitney disk_ @xmath180 in @xmath173 whose interior is transverse to @xmath145. Since the two branches @xmath181 and @xmath182 of @xmath145 meet transversely at @xmath183, there is a pair of smooth vector fields @xmath184 on @xmath185 such that @xmath186 is tangent to @xmath145 along @xmath165 and normal to @xmath145 along @xmath169, while @xmath187 is normal to @xmath145 along @xmath165 and tangent to @xmath145 along @xmath169. Such a pair defines a normal framing of @xmath180 on the boundary. We say that @xmath188 is a _correct framing_ of @xmath180, and that @xmath180 is _framed_, if the pair extends to a normal framing of @xmath180. By boundary twisting @xmath180 (see page 5 of @xcite) if necessary, at the cost of introducing more interior intersection points with @xmath145, we can choose the collection of Whitney disks @xmath189 such that each is correctly framed. Let @xmath190.
To each point of intersection @xmath191, let @xmath192 be a loop that first goes along @xmath145 from its basepoint to @xmath65, then along @xmath180 to @xmath193, then back along @xmath145 to the basepoint of @xmath145, and let @xmath194. Let @xmath195 Summing over all such points of intersection, let @xmath196 Then Li's @xmath197-invariant applied to @xmath29 is defined by @xmath198 We record two observations about this definition for later use. The latter is a special case of proposition [prop:lambda-product].

[rem:omega-compute] If @xmath199 is odd then @xmath200, while if @xmath199 is even then @xmath201

[rem:omega-product] Suppose @xmath202, and let @xmath203 be a loop that runs from @xmath65 to @xmath80 along @xmath145, then back to @xmath65 along @xmath204. We have @xmath205

Now suppose @xmath29 is an arbitrary link map with @xmath38. By standard arguments we may choose a @xmath140-good link homotopy representative @xmath206 of @xmath29, and @xmath207 is defined by setting @xmath208. By a result in @xcite, in @xcite it was shown that this defines an invariant of _link homotopy_; theorem [thm:mainresult] gives a new proof. By interchanging the roles of @xmath125 and @xmath126 (and instead assuming @xmath46), we obtain @xmath209 similarly, and write @xmath210. Based on similar geometric principles, Teichner and Schneiderman @xcite defined a secondary obstruction with respect to the homotopy invariant @xmath59. When adapted to the context of link homotopy, however, their invariant reduces to @xmath2 @xcite. A situation that arises commonly in this paper is the following. Suppose we have a torus (respectively, punctured torus) @xmath211 in the 4-manifold @xmath62, on which we wish to perform surgery along a curve so as to turn it into a 2-sphere (respectively, 2-disk) whose self-intersection and intersection numbers may be calculated. Our device for doing so is the following lemma, which is similar to (@xcite, lemma 4.1) and so its proof is omitted. A similar construction may be found at the bottom of page 86 of @xcite. If @xmath81 is an immersed 2-disk, let @xmath212 denote the contribution to @xmath93 due to self-intersection points on @xmath213.

[lem:surgery] Suppose that @xmath62 is a codimension-@xmath214 submanifold of @xmath215 and @xmath79 is abelian. Let @xmath61 be a properly immersed 2-disk or 2-sphere in @xmath62, and suppose @xmath211 is an embedded torus (or punctured torus) in @xmath216 \smallsetminus \operatorname{int} b. Let @xmath217 be a simple, non-separating curve on @xmath211, let @xmath218 be a normal pushoff of @xmath217 on @xmath211, and let @xmath219 denote the annulus on @xmath211 bounded by @xmath220. Suppose there is a map @xmath221 such that @xmath222 and @xmath223 for @xmath224. Then, after a small perturbation, @xmath225 \smallsetminus \operatorname{int} \hat t) \cup_{\delta_0 \cup \delta_1} j(d^2 \times \{0,1\}) is a properly immersed, self-transverse 2-sphere (or 2-disk, respectively) in @xmath62 such that

1. @xmath226) \lambda(d,b) in @xmath71, and
2. @xmath227 in @xmath228,

where @xmath229 is oriented consistently with and shares a whisker with @xmath230, @xmath231, and @xmath232 is a dual curve to each of @xmath217 and @xmath218 on @xmath211 such that @xmath233 is a simple arc running from @xmath217 to @xmath218.
The hypotheses of this lemma will frequently be encountered in the following form. We have @xmath211 and the nullhomotopic curve @xmath217; we then choose an immersed 2-disk @xmath234 bounded by @xmath217 and let @xmath235 be its "thickening" along a section (which is not necessarily non-vanishing) obtained by extending over the 2-disk a normal section to @xmath217 that is tangential to @xmath211.

Let us first outline the steps of our proof. Up to link homotopy, one component @xmath145 of a link map @xmath29 is unknotted, immersed, and in section [sec:unknotted] we exploit this to construct a collection of mutually disjoint, embedded, framed Whitney disks @xmath236 for @xmath145 with interiors in @xmath51 \smallsetminus f(s^2_-), such that each has nullhomotopic boundary in the complement of @xmath50. We show how these disks (along with disks bounded by accessory circles for @xmath126) can be used to construct 2-sphere generators of @xmath237. The algebraic intersections between @xmath50 and these 2-spheres are then computed in terms of the intersections between @xmath50 and the aforementioned disks. In section [sec:omega] we surger the disks @xmath236 so as to exchange their intersections with @xmath50 for intersections with @xmath145. In this way we obtain _immersed_, framed Whitney disks @xmath238 for @xmath126 in @xmath51 \smallsetminus f(s^2_+), such that the algebraic intersections between @xmath145 and @xmath180, measured by @xmath239, are related to the algebraic intersections between @xmath50 and @xmath240. In section [sec:main3] we complete the proof by combining the results of these two sections: the intersections between @xmath50 and the generators of @xmath237 from the former section are related to @xmath241, which by the latter section can be related to @xmath239.

A notion of unknottedness for surfaces in 4-space was introduced by Hosokawa and Kawauchi in @xcite. A connected, closed, orientable surface in @xmath243 is said to be unknotted if it bounds an embedded 3-manifold in @xmath243 obtained by attaching (3-dimensional) @xmath67-handles to a @xmath244-ball. They showed that by attaching (2-dimensional) 1-handles only, any embedded surface in @xmath243 can be made unknotted. Kamada @xcite extended their definition to _immersed_ surfaces in @xmath243 and gave a notion of equivalence for such immersions. In that paper it was shown that an immersed 2-sphere in @xmath243 can be made equivalent, in this sense, to an unknotted, immersed one by performing (only) _finger moves_. It was noted in @xcite that we may perform a link homotopy to "unknot" one immersed component of a link map (see lemma [lem:good]). The algebraic topology of the complement of this unknotted (immersed) component is greatly simplified, making the computation of the invariants defined in the previous section more tractable. Let us begin with a precise definition of an unknotted, immersed 2-sphere. To do so we construct cusp regions which have certain symmetry properties; our justification for these specifications is to follow. Using the moving-picture method, figures [fig:fig-cusps](a), (b) (respectively) illustrate properly immersed, oriented 2-disks @xmath245, @xmath246 (respectively) in @xmath247, each with precisely one double point @xmath248, @xmath249 (respectively), of opposite sign.
In those figures we have indicated coordinates @xmath250 of @xmath247; our choice of the @xmath98-ordinate to represent "time" is a compromise between ease of illustration and ease of subsequent notation. As suggested by these figures, we construct @xmath251 so that it has boundary @xmath252 and so that it intersects @xmath253 in an arc lying in the plane @xmath254. Further, letting @xmath255 denote the loop on @xmath251 in this plane that is based at @xmath256 and oriented as indicated in those figures, the reverse loop @xmath257 can be expressed using the paths @xmath258, @xmath259, so that for @xmath260 we have @xmath261 and hence @xmath262 @xmath263, where @xmath264 is the orientation-preserving self-diffeomorphism of @xmath247 given by @xmath265. Lastly, we may suppose that @xmath266 After orienting @xmath247 appropriately, the immersed 2-disk @xmath245 (@xmath246, respectively) has a single, positively (negatively, respectively) signed double point @xmath248 (@xmath249, respectively), and @xmath255 is an oriented loop on @xmath251 based at @xmath256 which changes branches there. (Note also that in this construction we may suppose that @xmath246 is the image of @xmath245 under the orientation-reversing self-diffeomorphism of @xmath247 given by @xmath267.) We call @xmath245 and @xmath246 _cusps_. Roughly speaking, an unknotted immersion is obtained from an unknotted embedding by "grafting on" cusps of this form. Formally, let @xmath268, let @xmath269 be a map which associates a @xmath139 sign or a @xmath140 sign to each @xmath270, and write @xmath271. Let @xmath272 denote the image of an oriented, unknotted embedding @xmath273; that is, an embedding that extends to the 3-ball (which is unique up to ambient isotopy). Suppose there is a collection of mutually disjoint, equi-oriented embeddings @xmath274, @xmath275, such that @xmath276 for each @xmath277. By removing the interiors of the 2-disks @xmath278 from @xmath272 and attaching, for each @xmath277, the cusp @xmath279 along @xmath280, we obtain an _unknotted, immersed_ 2-sphere in @xmath215: @xmath281 \smallsetminus \cup_{i=1}^{d} \operatorname{int} b_i(d^4)] \cup \cup_{i=1}^{d} b_i(d^{\varepsilon_i}). Note that we use the function @xmath282 only for convenience of notation in the proofs that follow. Since the embeddings @xmath283 can always be relabeled, one sees that the definition of @xmath284 depends precisely on the choice of unknotted 2-sphere @xmath272, the embeddings @xmath283, and two non-negative integers @xmath285 and @xmath286, where @xmath287 is the number of @xmath277 such that @xmath288. The following lemma will allow us to perform an ambient isotopy of @xmath215 which carries the model @xmath284 back to itself such that accessory circles are permuted (and perhaps reversed in orientation) in a prescribed manner.

[prop:exchange-all-b-i] Let @xmath131 be a permutation of @xmath289. For each @xmath277, let @xmath290.
there is an ambient isotopy @xmath291 such that @xmath292 fixes @xmath293{-20}{$\relax\smallsetminus$ } } \mspace{-4mu } } } { \overset{d}{\underset{{i=1}}{\cup } } } { \operatorname{int}}b_i(d^4)\ ] ] set - wise and , for each @xmath277 , @xmath294 for all @xmath295 . in particular , if @xmath296 , then @xmath297 and @xmath298 b_{\rho(i)}\circ \overline{\theta^{{\varepsilon}_i } } & \text { if $ \mu_i=-1$. } \end{cases}\end{aligned}\ ] ] the proof consists of using the disk theorem ( * ? ? ? * corollary 3.3.7 ) to transport 4-ball neighborhoods of the cusps around the 2-sphere , and is deferred to appendix [ app : prop : fm - constructvis - withu ] . we proceed instead to apply the lemma to equip one component of a link map , viewed as an immersion into the 4-sphere , with a particularly convenient collection of mutually disjoint , embedded , framed whitney disks . for this purpose it will be useful to give a particular construction of an unknotted immersion of a 2-sphere in @xmath215 with @xmath268 pairs of opposite - signed double points . for @xmath299 and real numbers @xmath300 , write @xmath301=a\times [ a , b]\subset \r^3\times \r$ ] , and @xmath302 = a\times a$ ] . choose an increasing sequence @xmath303 and for each @xmath277 , write @xmath304 $ ] and let @xmath305 . on the unit circle , oriented clockwise , let @xmath306 and @xmath307 be distinct , consecutive points and let @xmath308 , @xmath309 be disjoint neighborhoods of @xmath310 , @xmath311 , respectively . let @xmath312 be a path in @xmath313 running from @xmath314 to @xmath315 ; let @xmath316 and @xmath317 be simple paths on @xmath318 running @xmath319 to @xmath320 and from @xmath321 to @xmath307 , respectively . let @xmath322 and @xmath323 be disjoint neighborhoods of @xmath316 and @xmath317 in @xmath318 , respectively . see figure [ fig : circle ] . for each @xmath277 , let @xmath324 be a linear map such that @xmath325 and @xmath326 , and let @xmath327 be the map @xmath328 . let @xmath329 be an oriented , self - transverse immersion with image as shown in figure [ fig : fig - standard - annulus - e - alpha - p - v ] ( ignoring the shadings ) , with two double - points @xmath330 , such that @xmath331 $ ] for each @xmath332 and @xmath333 . then @xmath334 is an oriented loop on @xmath335 based at @xmath336 which changes branches there . note that @xmath337 is the trace of a regular homotopy from the circle in @xmath338 to itself that figure [ fig : fig - standard - annulus - e - alpha - p - v ] illustrates . for each @xmath277 , define a map @xmath339 by @xmath340 for @xmath341 . write the 2-sphere as the capped off cylinder @xmath342 in @xmath343 , and define a map @xmath344 by the identity on @xmath345 and by @xmath346 on @xmath347 . after smoothing corners , @xmath348 is an immersed 2-sphere in @xmath215 . let @xmath349 be the oriented loop on @xmath350 given by @xmath351 for @xmath352 . observe that @xmath349 is based at the @xmath135-signed double point @xmath353 and changes branches there . referring to figure [ fig : fig - standard - annulus - e - alpha - p - v ] , let @xmath354\subset d^3[0]$ ] ( @xmath355 ) be the obvious , embedded whitney disk for the immersed annulus @xmath335 in @xmath343 , bounded by @xmath356 . 
for arbitrarily small @xmath357 , by pushing a neighborhood of @xmath358 in @xmath359 into @xmath360 ( as in figure [ fig : fig - fatten ] ) , we may assume that the whitney disk is framed : the constant vector field @xmath361 that points out of the page in each hyperplane @xmath362 $ ] and the constant vector field @xmath363 are a correct framing . thus the maps @xmath364 carry @xmath354 $ ] to a complete collection of mutually disjoint , embedded , framed whitney disks @xmath365 $ ] , @xmath275 , for @xmath350 in @xmath215 . in particular , the boundary of @xmath240 ( equipped with an orientation ) is given by @xmath366 by lemma 4.2 of @xcite , we may assume after a link homotopy that one component of a link map is of the form of this model of an unknotted immersion . [ lem : good ] a link map @xmath29 is link homotopic to a good link map @xmath367 such that @xmath368 for some non - negative integer @xmath369 . we proceed to generalize the proof of ( * ? ? ? * lemma 4.4 ) to construct representatives of a basis of @xmath370 and compute their algebraic intersections with @xmath50 in terms of the algebraic intersections between @xmath50 and @xmath236 . while parts ( i ) and ( iii ) of proposition [ lem:2-spheres ] may be deduced directly from that paper , we include the complete proof for clarity . [ lem:2-spheres ] let @xmath29 be a good link map such that @xmath371 . equip @xmath50 with a whisker in @xmath242 and fix an identification of @xmath372 with @xmath373 so as to write @xmath374 = \z[s , s^{-1}]$ ] . then @xmath375{}$ ] and there is a @xmath376$]-basis represented by mutually disjoint , self - transverse , immersed , whiskered 2-spheres @xmath377 in @xmath242 with the following properties . for each @xmath277 , there is an integer laurent polynomial @xmath378 $ ] such that 1 . @xmath379 , 2 . @xmath380 , 3 . @xmath381 , and 4 . @xmath382 , where @xmath383 is a 2-disk in @xmath242 obtained from @xmath240 by removing a collar in @xmath173 , and @xmath384 is the image of a section of the normal bundle of @xmath385 . moreover , if for any @xmath386 and @xmath387 the loop @xmath388 bounds a 2-disk in @xmath215 that intersects @xmath50 exactly once , then we may choose @xmath389 so that @xmath390 for each @xmath277 , the double points @xmath391 and @xmath392 of @xmath145 lie in @xmath393 $ ] and @xmath145 intersects the 4-ball @xmath394\subset d^3\times d^1 $ ] precisely along @xmath395 . in what follows we shall denote @xmath396 ; note that for @xmath333 , @xmath397)=g(s^1\times t)$ ] and @xmath398)$ ] . observe that by the construction of the annulus @xmath337 we may assume there are integers @xmath399 such that @xmath400)=g_0[a^\pm , b^\pm]$ ] for some ( embedded circle ) @xmath401 . let @xmath402 denote the 2-disk bounded by @xmath403 in @xmath404 $ ] , and choose a 3-ball @xmath405 so that @xmath406 $ ] is a 4-ball neighborhood of @xmath336 and @xmath407)$ ] is disjoint from @xmath50 . there is an embedded torus @xmath408 in @xmath406{\mathbin{\mathpalette{\mspace{-4mu } \raisebox { { \ifx\relax\displaystyle .8\else \ifx\relax\textstyle .8\else \ifx\relax\scriptstyle .6\else .45 \fi \fi \fi}\depth}{\rotatebox[origin = c]{-20}{$\relax\smallsetminus$ } } \mspace{-4mu } } } \hat g$ ] that intersects @xmath402 exactly once ; see figure [ fig : fig - standard - annulus - e - t - intersection ] for an illustration of @xmath409 and @xmath410 in @xmath343 . 
the torus @xmath408 appears as a cylinder in each of 3-balls @xmath411 $ ] and @xmath412 $ ] , and as a pair of circles in @xmath413 $ ] for @xmath414 . for each @xmath277 , let @xmath415{\mathbin{\mathpalette{\mspace{-4mu } \raisebox { { \ifx\relax\displaystyle .8\else \ifx\relax\textstyle .8\else \ifx\relax\scriptstyle .6\else .45 \fi \fi \fi}\depth}{\rotatebox[origin = c]{-20}{$\relax\smallsetminus$ } } \mspace{-4mu } } } f(s^2_-)$ ] . by alexander duality the linking pairing @xmath416 defined by @xmath417 , where @xmath418 , is nondegenerate thus , as the loops @xmath419 represent a basis for @xmath420 , we have that @xmath421 and ( after orienting ) the so - called _ linking tori _ @xmath422 represent a basis . we proceed to apply the construction of section [ surger ] ( twice , successively ) to turn these tori into 2-spheres . let @xmath423 be the embedded 2-disk so that @xmath424 $ ] appears in @xmath425 $ ] as in figure [ fig : fig - delta - hat - delta - gamma - plus - minus - gamma - tub - nbd - no - as ] , and let @xmath426 . the disk @xmath424 $ ] intersects @xmath427 at two points , which are the endpoints of an arc of the form @xmath428 $ ] ( @xmath429 ) in @xmath430 , also illustrated . in figure [ fig : fig - delta - hat - delta - gamma - plus - minus - gamma - tub - nbd - no - as ] we have also illustrated in @xmath431 $ ] the restriction of a tubular neighborhood of @xmath427 to @xmath428 $ ] , which we may write in the form @xmath432 $ ] and choose so that @xmath433 carries @xmath432 $ ] to a tubular neighborhood of @xmath145 restricted to @xmath434)$ ] in @xmath51{-20}{$\relax\smallsetminus$ } } \mspace{-4mu } } } f(s^2_+)$ ] . let @xmath435 be the embedded punctured torus in @xmath338 obtained from @xmath436 by attaching a 1-handle along @xmath437 ; that is , let @xmath438{-20}{$\relax\smallsetminus$ } } \mspace{-4mu } } } ( { \partial}\gamma\times { \operatorname{int}}d^2)\bigr ] { \overset{}{\underset{{{\partial}\gamma^\pm\times s^1}}{\cup } } } ( \gamma^\pm\times s^1).\ ] ] note that @xmath435 has boundary @xmath439 . let @xmath440 be a loop on @xmath435 formed by connecting the endpoints of a path on @xmath441 by an arc on @xmath442{-20}{$\relax\smallsetminus$ } } \mspace{-4mu } } } ( { \partial}\gamma^\pm\times { \operatorname{int}}d^2)$ ] so that @xmath443 $ ] links @xmath444 zero times ; see figure [ fig : fig - delta - hat - delta - beta - plus - minus ] . let @xmath445 be a pushoff of @xmath440 along a normal vector field tangent to @xmath435 . we see that @xmath445 and @xmath445 bound embedded 2-disks @xmath446 and @xmath447 in @xmath338 , respectively , which are disjoint ( that is to say , the aforementioned normal vector field extends to a normal vector field of @xmath446 ) . then @xmath443 $ ] and @xmath448 $ ] bound the disjoint , embedded 2-disks @xmath449 $ ] and @xmath450 $ ] , respectively , which lie in @xmath431{\mathbin{\mathpalette{\mspace{-4mu } \raisebox { { \ifx\relax\displaystyle .8\else \ifx\relax\textstyle .8\else \ifx\relax\scriptstyle .6\else .45 \fi \fi \fi}\depth}{\rotatebox[origin = c]{-20}{$\relax\smallsetminus$ } } \mspace{-4mu } } } g(s^1\times a^\pm)$ ] . since @xmath50 is disjoint from @xmath451)$ ] and ( we may assume ) from a collar of the 2-disk @xmath452)$ ] , observe that @xmath453 may be constructed from a normal pushoff @xmath454 of @xmath455 in @xmath338 by attaching a collar so that the intersections between @xmath452)$ ] and @xmath50 occur entirely on @xmath456)$ ] . 
similarly , since @xmath50 is disjoint from @xmath451)$ ] and @xmath457)$ ] , and ( we may assume ) from a collar of the 2-disk @xmath458)$ ] , the 2-disk @xmath459 may be chosen to contain a normal pushoff @xmath460 of @xmath147 in @xmath338 and a normal pushoff @xmath461 of @xmath455 in @xmath338 so that the intersections between @xmath462)$ ] and @xmath50 occur entirely on @xmath463)$ ] and @xmath464)$ ] . from these observations , by proposition [ prop : lambda - restricted ] we have ( by an appropriate choice of whiskers and orientations in @xmath242 ) @xmath465)\bigr ) = \lambda\bigl(f(s^2_+ ) , \theta_i(\hat e^+[a^+])\bigr)\end{aligned}\ ] ] and @xmath466)\bigr ) = \lambda\bigl(f(s^2_+ ) , \theta_i(\check e^+[a^-])\bigr ) + \lambda\bigl(f(s^2_+ ) , \theta_i(\hat v[a^-])\bigr)\end{aligned}\ ] ] in @xmath376 $ ] . but , as @xmath456)$ ] and @xmath467)$ ] are each normal pushoffs in @xmath242 of @xmath468)$ ] , the 2-disk bounded by @xmath469 , we deduce from equation that @xmath470)\bigr ) = q^{(1)}_i(s)\end{aligned}\ ] ] for some integer laurent polynomial @xmath471 $ ] such that @xmath472 moreover , if @xmath473 for some @xmath474 , then ( as we are free to choose the orientation and whisker of @xmath452)$ ] ) we may take @xmath475 . similarly , as @xmath464)$ ] is a normal pushoff of @xmath240 , from equation we have ( by an appropriate choice of orientations and whiskers in @xmath242 ) @xmath476)\bigr ) = q^{(1)}_i(s ) + \lambda\bigl(f(s^2_+ ) , v_i^c\bigr),\end{aligned}\ ] ] where @xmath477 is obtained from @xmath240 ( which has boundary on @xmath145 ) by removing a collar in @xmath51{-20}{$\relax\smallsetminus$ } } \mspace{-4mu } } } f(s^2_+)$ ] . from the ( embedded ) punctured torus @xmath435 , we remove the interior of the annulus bounded by @xmath478 and attach the 2-disks @xmath479 . we thus obtain an embedded 2-disk @xmath480 which has boundary @xmath439 and is such that @xmath481 $ ] lies in @xmath431{\mathbin{\mathpalette{\mspace{-4mu } \raisebox { { \ifx\relax\displaystyle .8\else \ifx\relax\textstyle .8\else \ifx\relax\scriptstyle .6\else .45 \fi \fi \fi}\depth}{\rotatebox[origin = c]{-20}{$\relax\smallsetminus$ } } \mspace{-4mu } } } g(s^1\times a^\pm)$ ] . consequently , @xmath482)$ ] is an embedded 2-disk in @xmath242 with boundary @xmath483)$ ] , obtained from the embedded punctured torus @xmath484)\subset x_-{\mathbin{\mathpalette{\mspace{-4mu } \raisebox { { \ifx\relax\displaystyle .8\else \ifx\relax\textstyle .8\else \ifx\relax\scriptstyle .6\else .45 \fi \fi \fi}\depth}{\rotatebox[origin = c]{-20}{$\relax\smallsetminus$ } } \mspace{-4mu } } } f(s^2_+)$ ] by applying the construction of lemma [ lem : surgery ] with the 2-disk @xmath485)$ ] and its normal pushoff @xmath486)$ ] . now , for an interior point @xmath487 on @xmath437 , the loop @xmath488 is dual to @xmath440 and @xmath445 on @xmath435 , and is meridinal to @xmath427 in @xmath343 . thus the loop @xmath489 is dual to @xmath490)$ ] and @xmath491)$ ] on @xmath484)$ ] , and represents @xmath117 or @xmath492 in @xmath493 . by lemma [ lem : surgery](i ) and equations - , then , we have ( by an appropriate choice of orientation and whisker in @xmath242 ) @xmath494)\bigr ) = ( 1-s)q^{(2)}_i(s)\end{aligned}\ ] ] and @xmath495)\bigr ) = ( 1-s)q^{(2)}_i(s ) + ( 1-s)\lambda\bigl(f(s^2_+ ) , v_i^c\bigr)\end{aligned}\ ] ] for some integer laurent polynomial @xmath496 $ ] such that @xmath497 . 
we proceed to attach an annulus to each of the 2-disks @xmath481 $ ] and @xmath498 $ ] so to obtain 2-disks with which to surger the linking torus @xmath408 using the construction of lemma [ lem : surgery ] . in figure [ fig : fig - t - delta - a - b - and - iso - copies ] we have illustrated an oriented circle @xmath499 on @xmath500 $ ] which intersects @xmath501 $ ] and @xmath502 $ ] each in an arc , and appears as a pair of points in @xmath503 $ ] for @xmath414 . we have also illustrated a normal pushoff @xmath504 of @xmath499 on @xmath408 . let @xmath505 denote the annulus on @xmath408 bounded by @xmath506 . notice that the pair @xmath506 is isotopic in @xmath406{\mathbin{\mathpalette{\mspace{-4mu } \raisebox { { \ifx\relax\displaystyle .8\else \ifx\relax\textstyle .8\else \ifx\relax\scriptstyle .6\else .45 \fi \fi \fi}\depth}{\rotatebox[origin = c]{-20}{$\relax\smallsetminus$ } } \mspace{-4mu } } } \hat g$ ] to the link @xmath507 in @xmath406{\mathbin{\mathpalette{\mspace{-4mu } \raisebox { { \ifx\relax\displaystyle .8\else \ifx\relax\textstyle .8\else \ifx\relax\scriptstyle .6\else .45 \fi \fi \fi}\depth}{\rotatebox[origin = c]{-20}{$\relax\smallsetminus$ } } \mspace{-4mu } } } \hat g$ ] which we have also illustrated in figure [ fig : fig - t - delta - a - b - and - iso - copies ] . we then see that the annulus @xmath505 is homotopic in @xmath406{\mathbin{\mathpalette{\mspace{-4mu } \raisebox { { \ifx\relax\displaystyle .8\else \ifx\relax\textstyle .8\else \ifx\relax\scriptstyle .6\else .45 \fi \fi \fi}\depth}{\rotatebox[origin = c]{-20}{$\relax\smallsetminus$ } } \mspace{-4mu } } } \hat g$ ] to the annulus @xmath508 $ ] by a homotopy of @xmath509 whose restriction to @xmath510 is a homotopy from @xmath506 to the link @xmath511\cup \hat \delta^\pm[b^\pm]$ ] through a sequence of links , except for one singular link where the two components pass through each other . that is , we can find a regular homotopy @xmath512{\mathbin{\mathpalette{\mspace{-4mu } \raisebox { { \ifx\relax\displaystyle .8\else \ifx\relax\textstyle .8\else \ifx\relax\scriptstyle .6\else .45 \fi \fi \fi}\depth}{\rotatebox[origin = c]{-20}{$\relax\smallsetminus$ } } \mspace{-4mu } } } \hat g$ ] such that 1 . @xmath513 , @xmath514 , @xmath515 ; 2 . @xmath516 $ ] , @xmath517 $ ] , @xmath518 $ ] ; and 3 . @xmath519 is a link for all @xmath260 except at one value @xmath520 , where @xmath521 has precisely with one transverse self - intersection point , arising as an intersection point between @xmath522 and @xmath523 . denote this intersection point by @xmath524 . we may further suppose that the image of the homotopy @xmath525 lies in a tubular neighborhood ( @xmath526 ) of @xmath499 in @xmath406{\mathbin{\mathpalette{\mspace{-4mu } \raisebox { { \ifx\relax\displaystyle .8\else \ifx\relax\textstyle .8\else \ifx\relax\scriptstyle .6\else .45 \fi \fi \fi}\depth}{\rotatebox[origin = c]{-20}{$\relax\smallsetminus$ } } \mspace{-4mu } } } \hat g$ ] . hence , in particular , we may assume that for each @xmath527 the image of @xmath528 is disjoint from each of @xmath481 $ ] and @xmath498 $ ] . 
0.5 cm attaching the annuli @xmath529 and @xmath530 to the 2-disks @xmath481 $ ] and @xmath498 $ ] along @xmath511=k^\pm(s^1\times 0\times 1)$ ] and @xmath531=k^\pm(s^1\times 1\times 1)$ ] , respectively , we obtain embedded 2-disks @xmath532 { \overset{}{\underset{{\hat \delta^\pm[a^\pm]}}{\cup } } } k^\pm((s^1\times 0)\times i)\end{aligned}\ ] ] and @xmath533 { \overset{}{\underset{{\hat \delta^\pm[b^\pm]}}{\cup } } } k^\pm((s^1\times 1)\times i)\end{aligned}\ ] ] in @xmath534{-20}{$\relax\smallsetminus$ } } \mspace{-4mu } } } \hat g$ ] . observe that their respective boundaries @xmath535 and @xmath536 lie on @xmath408 . by construction , @xmath537 and @xmath538 intersect precisely once , transversely , at the intersection point @xmath524 between @xmath529 and @xmath530 . also , since the image of @xmath525 lies in @xmath406 $ ] , the intersections between @xmath539 and @xmath50 lie precisely on @xmath482)$ ] . consequently , by proposition [ prop : lambda - restricted ] we have ( after an appropriate choice of orientations and whiskers in @xmath242 ) @xmath540)\bigr).\end{aligned}\ ] ] now , let @xmath541 be the 2-sphere in @xmath534{-20}{$\relax\smallsetminus$ } } \mspace{-4mu } } } \hat g$ ] obtained by removing from the ( embedded ) torus @xmath408 the interior of the annulus @xmath542 ( bounded by @xmath506 ) and attaching the embedded 2-disks @xmath543 . then @xmath541 is immersed and self - transverse in @xmath247 , with a single double point : the point @xmath524 . wishing to apply lemma [ lem : surgery ] , define a homotopy from @xmath544 to @xmath545 as follows . let @xmath546 be a collar with @xmath547 , and let @xmath548 be an embedding with image @xmath549 . since @xmath550 and @xmath551 $ ] each lie in @xmath534{-20}{$\relax\smallsetminus$ } } \mspace{-4mu } } } \hat g$ ] , from equations - it is readily seen that @xmath552 & & \text { for $ y\in d^2$,}\end{aligned}\ ] ] defines a homotopy @xmath553{-20}{$\relax\smallsetminus$ } } \mspace{-4mu } } } \hat g$ ] from @xmath544 to @xmath545 . then @xmath554{-20}{$\relax\smallsetminus$ } } \mspace{-4mu } } } { \operatorname{int}}\hat t^\pm ) { \overset{}{\underset{{\delta_a^\pm\cup \delta_b^\pm}}{\cup } } } j^\pm(d^2\times \{0,1\}).\end{aligned}\ ] ] now , let @xmath555 ; then @xmath385 is an immersed , self - transverse 2-sphere in @xmath556{\mathbin{\mathpalette{\mspace{-4mu } \raisebox { { \ifx\relax\displaystyle .8\else \ifx\relax\textstyle .8\else \ifx\relax\scriptstyle .6\else .45 \fi \fi \fi}\depth}{\rotatebox[origin = c]{-20}{$\relax\smallsetminus$ } } \mspace{-4mu } } } f(s^2_-)$ ] constructed by surgering the embedded torus @xmath557{-20}{$\relax\smallsetminus$ } } \mspace{-4mu } } } f(s^2_+)$ ] using the 2-disks @xmath558 . furthermore , observe that a dual curve to @xmath499 and @xmath504 on the torus @xmath408 is meridinal to @xmath427 in @xmath247 , so a dual curve to @xmath559 and @xmath559 on the linking torus @xmath560 represents @xmath117 or @xmath492 in @xmath493 . thus , by equation and lemma [ lem : surgery](i ) we have ( after an appropriate choice of whiskers and orientations in @xmath242 ) @xmath561 from equations and we therefore have @xmath562 and @xmath563 for some @xmath378 $ ] such that @xmath564 . as observed earlier , if @xmath473 for some @xmath474 , then we may take @xmath565 . moreover , since @xmath566 and @xmath567 are embedded and intersect precisely once , we have from lemma [ lem : surgery](ii ) that @xmath568 in @xmath569 $ ] . hence @xmath570 . 
finally , by construction , @xmath385 is homologous to @xmath571 for each @xmath474 , so by ( @xcite , lemma 4.3 ) the immersed 2-spheres @xmath377 represent a @xmath376$]-basis for @xmath237 . the rest of this section will be devoted to applying lemma [ prop : exchange - all - b - i ] to prove the following proposition , which will allow us to surger out the intersections between each @xmath240 and @xmath50 ( in exchange for intersections with @xmath572 ) . [ prop : fm - constructvis - withu ] let @xmath29 be a good link map such that @xmath38 and @xmath371 for some @xmath268 . then , perhaps after an ambient isotopy , we may assume that @xmath371 and the embedded whitney disks @xmath573 in @xmath215 are framed and satisfy @xmath574 for each @xmath277 . the remainder of this section is devoted to the proof of this result . recall from the beginning of the present section that on the immersed circle @xmath575 in @xmath404 $ ] , the arc @xmath576 contains the loop @xmath577 in its interior . [ lem : permute - dps - second - u ] let @xmath268 . for each @xmath578 let @xmath579 , and let @xmath580 be a permutation of @xmath289 . there are 4-ball neighborhoods @xmath581 of @xmath582 in @xmath215 , @xmath275 , and an ambient isotopy @xmath583 such that @xmath584 , and for each @xmath277 , 1 . [ item : permute1 ] @xmath585 restricts to the identity on @xmath586 ( so @xmath587 ) , 2 . [ item : permute2 ] @xmath585 carries @xmath588 to @xmath589 and 3 . [ item : permute3 ] @xmath590 \overline{\alpha_{\varsigma(i)}^- } & \text { if $ \mu_i=-1$. } \end{cases}\end{aligned}\ ] ] by the construction of @xmath337 , there are disjoint 3-balls @xmath591 and @xmath592 in @xmath338 such that @xmath593 $ ] is a neighborhood of @xmath594 $ ] and such that there is an orientation - preserving diffeomorphism @xmath595 carrying the cusp @xmath251 of section [ sec : unknotted ] to @xmath596 , the double point @xmath256 to @xmath336 , and the oriented loop @xmath255 to @xmath403 . for each @xmath277 , let @xmath597 denote the sign of @xmath598 , let @xmath599 be the orientation - preserving diffeomorphism given by @xmath600 and let @xmath601 , where @xmath264 is the orientation - preserving diffeomorphism of @xmath602 defined in section [ sec : unknotted ] by @xmath603 . recalling equations - , observe that @xmath604 , @xmath605 furthermore , by construction , @xmath350 is obtained from the unknotted , embedded 2-sphere @xmath606 by removing its intersections with the 4-balls @xmath607 , and attaching the cusps @xmath608 , for @xmath609 . for each @xmath277 , let @xmath610 , @xmath611 , and define a permutation @xmath131 on @xmath612 by @xmath613 and @xmath614 . 
then lemma [ prop : exchange - all - b - i ] yields an ambient isotopy @xmath583 such that @xmath585 fixes @xmath615{-20}{$\relax\smallsetminus$ } } \mspace{-4mu } } } { \operatorname{int}}{\overset{d}{\underset{{i=1}}{\cup } } } ( b_{2i}(d^4)\cup b_{2i-1}(d^4 ) ) = \hat \u_d{\mathbin{\mathpalette{\mspace{-4mu } \raisebox { { \ifx\relax\displaystyle .8\else \ifx\relax\textstyle .8\else \ifx\relax\scriptstyle .6\else .45 \fi \fi \fi}\depth}{\rotatebox[origin = c]{-20}{$\relax\smallsetminus$ } } \mspace{-4mu } } } { \operatorname{int}}{\overset{d}{\underset{{i=1}}{\cup } } } ( ( b^3_+\cup b^3_-)\times i_i)\end{aligned}\ ] ] set - wise , satisfies @xmath616 now , putting @xmath617 , equation gives part [ item : permute1 ] of the lemma ; equation gives part [ item : permute2 ] , and part [ item : permute3 ] follows from equation and by noting that equation implies @xmath618 if @xmath619 and @xmath620 if @xmath621 . since @xmath622 , @xmath585 sends @xmath623 to @xmath624 so @xmath625 by equation . we may now perform an ambient isotopy which carries @xmath371 back to itself in such a way that the accessory circles @xmath626 are rearranged into canceling pairs with respect to their linking numbers with @xmath50 . [ lem : linking - same - in - abs - value ] let @xmath29 be a link map such that @xmath38 and @xmath371 for some @xmath268 . then @xmath29 is link homotopic ( in fact , ambient isotopic ) to a link map @xmath367 such that @xmath627 and , for each @xmath277 , @xmath628 since @xmath38 , there is a function @xmath629 and a permutation @xmath580 on @xmath289 such that @xmath630 for each @xmath277 . by lemma [ lem : permute - dps - second - u ] , there is an ambient isotopy @xmath583 such that @xmath584 and , for each @xmath277 , @xmath631 , @xmath632 if @xmath619 , and @xmath633 if @xmath621 . then , for each @xmath277 , @xmath634 and hence @xmath635 thus , taking @xmath636 , we have @xmath637 having established a means to permute the accessory circles of @xmath350 in a prescribed way , we may now complete the proof of proposition [ prop : fm - constructvis - withu ] . by lemma [ lem : linking - same - in - abs - value ] we may assume , after an ambient isotopy , that @xmath371 and the accessory circles @xmath626 on @xmath145 satisfy @xmath638 for each @xmath277 . recall the notation of figure [ fig : circle ] and that we let @xmath322 and @xmath323 denote disjoint neighborhoods of @xmath316 and @xmath317 , respectively , on the circle @xmath318 . for each @xmath277 , @xmath639)$ ] and @xmath640)$ ] are embedded 2-disk neighborhoods of @xmath641 on @xmath145 which intersect precisely at these two points , and the accessory circles @xmath642 leave along @xmath639)$ ] and return along @xmath640)$ ] . thus , as the arc @xmath643)$ ] runs from @xmath153 to @xmath154 , and the arc @xmath644)$ ] runs from @xmath153 to @xmath154 , by lemma [ lem : linking - number - whitney - circle ] and equation we have @xmath645 referring to the notation of proposition [ prop : fm - constructvis - withu ] and proposition [ lem:2-spheres ] , we next show that by altering the interiors of the 2-disks @xmath236 so to exchange their intersections with @xmath50 for intersections with @xmath145 , we are able to compute @xmath197 as follows . [ prop : relate - omega - vi ] for each @xmath277 , the pair @xmath183 of double points of @xmath646 may be equipped with a framed , immersed whitney disk @xmath180 in @xmath173 such that @xmath647 . 
furthermore , there are integer laurent polynomials @xmath648 such that @xmath649 and for each @xmath277 , @xmath650 define a ring homomorphism @xmath651 \to \z_2 $ ] by @xmath652\xrightarrow{{\partial } } \z[s , s^{-1}]\xrightarrow{s\,\mapsto 1}\z \xrightarrow{\text{mod } 2 } \z_2,\ ] ] where @xmath653 is the formal derivative defined by setting @xmath654 ( for @xmath655 ) and extending by linearity . recall from section [ sec : prelims ] that we use @xmath57 to denote equivalence modulo @xmath43 , and for an integer laurent polynomial @xmath656 $ ] we write @xmath657 . the following properties of @xmath658 are readily verified . [ lem : varphi - props ] if @xmath656 $ ] , then 1 . @xmath659 , 2 . @xmath660 , and 3 . @xmath661 . let @xmath277 . since @xmath662 the intersections between @xmath50 and @xmath663 ( which may be assumed transverse after a small homotopy of @xmath125 ) may be decomposed into pairs of opposite sign @xmath664 for some @xmath665 ( for any choice of orientation of @xmath666 and @xmath383 ) . for each @xmath667 , choose a simple path @xmath668 on @xmath50 from @xmath669 to @xmath670 whose interior is disjoint from @xmath671 , let @xmath672 be a simple path in @xmath663 from @xmath670 to @xmath669 whose interior misses @xmath50 , and let @xmath673 . the resulting collection of loops @xmath674 in @xmath242 may be chosen to be mutually disjoint . for each @xmath667 define the @xmath675-integer @xmath676 note that @xmath677 is well - defined because @xmath145 and @xmath383 are simply - connected ( c.f . proposition [ prop : lambda - product ] ) . [ lem : m - i - j - new ] there are integer laurent polynomials @xmath648 in @xmath376 $ ] such that for each @xmath277 we have @xmath678 and @xmath679 . choose whiskers connecting @xmath50 and @xmath680 to the basepoint of @xmath242 . let @xmath277 and @xmath667 . since @xmath681 is a loop in @xmath242 that runs from @xmath669 to @xmath670 along @xmath50 , and back to @xmath669 along @xmath383 , by proposition [ prop : lambda - product ] we have @xmath682\cdot ( \lambda(f(s^2_+ ) , v_i^c)[y_i^j])^{-1 } = s^{\hat m_i^j } \in \pi_1(x_-),\ ] ] where @xmath683 is an integer such that @xmath684 thus @xmath682 = s^{\hat m_i^j}\cdot \lambda(f(s^2_+ ) , v_i^c)[y_i^j],\ ] ] so the mod @xmath43 contribution to @xmath685 due to the pair of intersections @xmath686 is @xmath682 + \lambda(f(s^2_+ ) , v_i^c)[y_i^j ] \equiv ( 1+s^{\hat m_i^j})s^{l_i^j},\ ] ] for some @xmath687 . choose @xmath688 $ ] such that @xmath689 applying @xmath658 to both sides ( c.f . lemma [ lem : varphi - props ] ) yields @xmath690 summing over all such pairs @xmath664 we have @xmath691 where @xmath692 satisfies @xmath679 by equation . let @xmath277 . we now remove the intersections between @xmath240 and @xmath50 by surgering @xmath240 along the paths @xmath693 , obtaining an embedded @xmath694-genus , once - punctured surface @xmath695 in @xmath173 which has interior in @xmath242 and coincides with @xmath240 near the boundary . since @xmath50 is transverse to @xmath240 , for each @xmath667 the restriction of a tubular neighborhood of @xmath50 to the arc @xmath668 may be identified with a 3-ball @xmath696 such that @xmath697 and @xmath698 intersects @xmath240 in two embedded 2-disks @xmath699 and @xmath700 neighborhoods of @xmath669 and @xmath670 in @xmath240 , respectively . 
attaching handles to @xmath240 along the arcs @xmath668 , @xmath701 , yields the surface @xmath702{-20}{$\relax\smallsetminus$ } } \mspace{-4mu } } } { \overset{j_i}{\underset{{j=1}}{\cup } } } { \operatorname{int}}h_i^j({\partial}d^1\times d^2 ) \bigr ] { \overset{j_i}{\underset{{j=1}}{\cup } } } { \overset{}{\underset{{h_i^j({\partial}d^1\times { \partial}d^2)}}{\cup } } } h_i^j(d^1\times { \partial}d^2),\]]which is disjoint from both @xmath50 and @xmath145 . see figure [ fig : wd - handle ] . now , for each @xmath667 we may assume that @xmath672 intersects @xmath703 and @xmath704 exactly once , at points @xmath705 and @xmath706 , respectively . let @xmath707 be the subarc of @xmath672 on @xmath695 running from @xmath706 to @xmath705 , let @xmath708 be a path on @xmath709 connecting @xmath705 to @xmath706 , and put @xmath710 . by band - summing @xmath711 with meridinal circles of @xmath50 of the form @xmath712 ( for a point @xmath713 in @xmath714 ) if necessary , we may assume that @xmath715 ( see figure [ fig : wd - handle ] ) . hence , as @xmath716 is abelian , there is an immersed 2-disk @xmath717 in @xmath173 bound by @xmath718 . we may further assume that @xmath717 misses a collar of @xmath719 and is transverse to @xmath145 . by boundary twisting @xmath717 along @xmath707 ( and so introducing intersections between the interior of @xmath717 and @xmath695 ) if necessary we may further assume that a normal section of @xmath718 that is tangential to @xmath695 extends to a normal section of @xmath717 in @xmath173 . hence there is a normal pushoff @xmath720 of @xmath717 and an annulus @xmath721 on @xmath695 with boundary @xmath722 ( see figure [ fig : wd - handle - surg ] ) . iterating the construction of lemma [ lem : surgery ] we may then surger @xmath695 along @xmath718 , using @xmath717 and its pushoff @xmath720 , for all @xmath667 , to obtain an _ immersed _ 2-disk @xmath180 in @xmath173 such that the framing of @xmath240 ( which agrees with @xmath180 near the boundary ) along its boundary extends over @xmath180 . but @xmath240 is a framed whitney disk for @xmath145 in @xmath215 , so @xmath723 is a framed whitney disk for @xmath145 in @xmath724 . that is , @xmath725{-20}{$\relax\smallsetminus$ } } \mspace{-4mu } } } { \operatorname{int}}{\overset{j_i}{\underset{{j=1}}{\cup } } } \,\varrho_i^j ) { \overset{j_i}{\underset{{j=1}}{\cup } } } { \overset{}{\underset{{{\partial}\varrho_i^j}}{\cup } } } ( { \hat}{q}_i^j\cup { \acute}{q}_i^j)\ ] ] is a framed , immersed whitney disk for the immersion @xmath726 . let @xmath727 denote the complement in @xmath180 of a half - open collar it shares with @xmath240 so that @xmath728 and @xmath145 intersects @xmath727 in its interior . the first step in relating @xmath239 to the intersections between @xmath50 and the @xmath240 s is the following lemma . [ lem : w - i - n - i - even - new ] the contribution to @xmath239 due to intersections between @xmath145 and the interior of @xmath180 is @xmath729 referring to the constructions preceding the lemma statement , since @xmath730 , the only intersections between @xmath204 and @xmath145 lie on the immersed 2-disks @xmath731 . indeed , since @xmath717 is the pushoff of @xmath720 along a section of its normal bundle that is tangent to the annulus @xmath721 on @xmath695 , there is an immersion of a 3-ball @xmath732 such that @xmath733 , @xmath734 and @xmath735 . 
furthermore , since @xmath717 is transverse to @xmath50 we may assume that if we let @xmath736 then there are distinct points @xmath737 , @xmath738 , such that @xmath145 intersects @xmath739 precisely along the arcs @xmath740 . whence the intersections between @xmath145 and @xmath204 consist precisely of pairs @xmath741 for @xmath738 . thus , in particular , if @xmath199 is odd then from remark [ rem : omega - compute ] we have @xmath742 . suppose now that @xmath199 is even . note that since the loop @xmath718 on @xmath695 is freely homotopic in @xmath242 to @xmath681 ( to see this , collapse @xmath743 onto its core @xmath744 in @xmath242 ) we have @xmath745 , so @xmath746 we may arrange that there are points @xmath747 and @xmath748 so that the meridinal circle @xmath749 of @xmath50 on @xmath695 intersects @xmath721 along the arc @xmath750 . let @xmath751 denote the interval @xmath55 $ ] oriented from @xmath214 to @xmath67 , and let @xmath752 be the path on @xmath753{-20}{$\relax\smallsetminus$ } } \mspace{-4mu } } } { \operatorname{int}}\varrho_i^j$ ] that runs from @xmath754 to @xmath755 . then the loop based at @xmath755 and spanning @xmath749 given by @xmath756 is a meridinal loop of @xmath50 , so @xmath757 . [ claim : omega - contribution ] for fixed @xmath758 : if @xmath199 is even then for each @xmath738 the contribution to @xmath759 due to the pair @xmath760 is @xmath761 let @xmath762 be a path in @xmath763 connecting @xmath764 to @xmath369 . then @xmath765 is a loop that runs from @xmath766 to @xmath767 along @xmath768 and then back to @xmath766 along @xmath727 , so by remarks [ rem : omega - compute ] and [ rem : omega - product ] we have @xmath769 now , the loop @xmath770 is homotopic in @xmath173 to the loop @xmath771 but the loop @xmath772 bounds the 2-disk @xmath773 . thus @xmath770 is homotopic in @xmath173 to @xmath774 and so @xmath775 applying the claim to all such pairs of intersections @xmath776 between @xmath50 and @xmath777 , over all @xmath667 , yields the total contribution @xmath778 where the last equality is by equation . this completes the proof of lemma [ lem : w - i - n - i - even - new ] . applying the lemma , we have @xmath779 where @xmath277 . proposition [ prop : relate - omega - vi ] now follows from lemma [ lem : m - i - j - new ] . @xmath780 [ sec : main3 ] we now bring kirk s invariant @xmath241 into the picture by noting its relationship with the homotopy class of @xmath50 as an element of @xmath237 . referring to proposition [ lem:2-spheres ] , since @xmath237 is generated as a @xmath781$]-module by the 2-spheres @xmath377 , there are integer laurent polynomials @xmath782 such that , as a ( whiskered ) element of @xmath237 , @xmath50 is given by @xmath783 by the sesquilinearity of the intersection form @xmath784 we have from proposition [ lem:2-spheres ] that @xmath785.\label{eq : final - step-2}\end{aligned}\ ] ] in @xcite , kirk showed that @xmath0 has the following image . [ eq : pk - image ] if @xmath367 is a link map , then @xmath786 for some integer @xmath787 and integers @xmath788 . now , since @xmath38 and @xmath29 is a good link map , by proposition [ prop : sigma - mu ] we have @xmath789 \label{eq : lambda - n}\end{aligned}\ ] ] for some integers @xmath790 . the following observation about the terms in the right - hand side of this equation will be useful in performing some arithmetic in @xmath791 $ ] . [ lem : lambda - n ] let @xmath792 be an integer . 
then @xmath793 for some integer laurent polynomial @xmath794 $ ] such that @xmath795 if @xmath796 for some @xmath797 , then modulo @xmath43 we have @xmath798 on the other hand , if @xmath799 for some @xmath797 , by direct expansion one readily verifies that , modulo @xmath43 , @xmath800 for some @xmath801 $ ] . now , from equation , proposition [ lem:2-spheres](ii ) and the sesquilinearity of @xmath802 , we have @xmath803 comparing with proposition [ lem:2-spheres](iii),(iv ) and proposition [ prop : relate - omega - vi ] , we see that there are integer laurent polynomials @xmath804 such that @xmath805 and for each @xmath277 , @xmath806 where @xmath807 . thus equation becomes @xmath808 \\ \;\;\;\;\;\;&\equiv ( s+s^{-1}){\overset{d}{\underset{{i=1}}{\textstyle\sum}}}\bigl[(1+s)u_i(s)\overline{q_i(s ) } + ( 1+s^{-1})q_i(s)\overline{u_i(s ) } \\ & \quad\quad\quad + ( 1+s)(1+s^{-1})u_i(s)\overline{u_i(s)}\bigr].\end{aligned}\ ] ] comparing with equation and applying lemma [ lem : lambda - n ] we then have @xmath809\notag\\ & \equiv ( 1+s)^4 \sum_{n=2}^k a_n r_n(s ) \notag\\ & \equiv ( s+s^{-1})(1+s^{-1})^2 \sum_{n=2}^k a_n \hat r_n(s)\end{aligned}\ ] ] for some integer laurent polynomials @xmath810 ( here , @xmath811 ) such that @xmath812 if @xmath42 is even , and @xmath813 if @xmath42 is odd . since @xmath791 $ ] is an integral domain , we may divide both sides of equation by @xmath814 to obtain @xmath815 applying the homomorphism @xmath658 of lemma [ lem : varphi - props ] to both sides of equation then yields the following equality in @xmath675 : @xmath816\\ & \equiv { \overset{d}{\underset{{i=1}}{\textstyle\sum}}}\ , q_i(1)u_i(1 ) + u_i(1 ) \\ & \equiv { \overset{d}{\underset{{i=1}}{\textstyle\sum}}}\ , u_i(1)(n_i + 1)\\ & \equiv { \overset{}{\underset{{i : \text { $ n_i$ is even}}}{\textstyle\sum}}}\ , u_i(1).\end{aligned}\ ] ] thus , as @xmath817 if and only if @xmath42 is even and @xmath818 ( i.e. , @xmath819 mod @xmath44 ) , from equation we have @xmath820 completing the proof of theorem [ thm : mainresult ] . we break the proof into the following lemmas . [ lem : isotopy-2disk ] fix an orientation of @xmath96 . let @xmath821 be a pair of mutually disjoint , equi - oriented embeddings . let @xmath822 , @xmath823 be 2-disk neighborhoods of @xmath824 and @xmath825 in @xmath96 , respectively . 1 . there is an ambient isotopy @xmath826 with support on @xmath822 such that @xmath827 for @xmath828 . 2 . there is an ambient isotopy @xmath829 with support on @xmath823 such that @xmath830 and @xmath831 . let @xmath834 be a 2-disk neighborhood of @xmath825 in the interior of @xmath823 , and choose a collar @xmath835 of @xmath823 such that @xmath836 for @xmath837 and @xmath838 . since the embeddings @xmath839 and @xmath840 are equi - oriented , by the disk theorem ( * ? ? ? * corollary 3.3.7 ) and the isotopy extension theorem ( * ? ? ? * theorem 2.5.2 ) , there is an ambient isotopy @xmath841 such that @xmath842 and @xmath843 . choose a smooth function @xmath844 satisfying @xmath845 and @xmath846 , and define @xmath829 as follows . for each @xmath260 , let @xmath847 on @xmath834 , let @xmath848 for @xmath849 , and let @xmath850 elsewhere . it is readily verified that @xmath851 , and that for each @xmath260 , the map @xmath852 is well - defined on @xmath853 and constant on the complement of @xmath854{-20}{$\relax\smallsetminus$ } } \mspace{-4mu } } } { \operatorname{int}}n$ ] . 
[ lem : isotopy-4disk ] suppose that @xmath855 are a pair of equi - oriented embeddings with mutually disjoint images such that , if @xmath856 denotes the standard embedding , we have @xmath857 for @xmath858 . let @xmath822 , @xmath823 be 2-disk neighborhoods of @xmath859 and @xmath860 in @xmath96 , respectively . 1 . there is an ambient isotopy @xmath861 with support on an arbitrarily small 4-ball neighborhood of @xmath862 such that @xmath863 fixes @xmath96 set - wise , and @xmath864 for @xmath295 . there is an ambient isotopy @xmath865 with support on an arbitrarily small 4-ball neighborhood of @xmath866 such that @xmath867 fixes @xmath868{-20}{$\relax\smallsetminus$ } } \mspace{-4mu } } } { \operatorname{int}}\bigl(b_1(d^4)\cup b_2(d^4)\bigr ) \ ] ] set - wise , and @xmath869 and @xmath870 . we prove ( ii ) only ; ( i ) is an analogous application of part ( i ) of lemma [ lem : isotopy-2disk ] . denote the closed 2-disk in @xmath871 of radius @xmath872 by @xmath873 . since for @xmath858 , @xmath874 intersects the standard 2-sphere along @xmath875 , we may identify a tubular neighborhood of @xmath876 with @xmath877 so that there are equi - oriented , disjoint embeddings @xmath878 such that @xmath879 and @xmath880 is given by @xmath881 for @xmath295 . by lemma [ lem : isotopy-2disk](ii ) there is an ambient isotopy @xmath829 with support on @xmath823 such that @xmath882 and @xmath831 ; in particular , @xmath883 fixes @xmath884{-20}{$\relax\smallsetminus$ } } \mspace{-4mu } } } { \operatorname{int}}\bigl(\hat b_1(d^2)\cup \hat b_2(d^2)\bigr)$ ] set - wise . we construct an isotopy @xmath885 of @xmath886 such that , for each @xmath260 : choose a smooth function @xmath844 such that @xmath846 and @xmath891 for all @xmath892 $ ] . for each @xmath260 and @xmath893 , let @xmath894 , where @xmath895 denotes the euclidean norm on @xmath763 . note that on @xmath896 , @xmath897 has inverse given by @xmath898 . to verify ( 1 ) , observe that for @xmath899 and @xmath900 we have @xmath901 ; for @xmath889 and @xmath902 , we have @xmath903 . to verify ( 2 ) , observe that if @xmath890 then @xmath904 and so @xmath905 . regarding ( 3 ) , for @xmath295 we have @xmath906 and we have @xmath907 similarly . now , by ( 1 ) we may extend @xmath885 to an isotopy of @xmath215 that is constant on the complement of @xmath896 . since @xmath883 fixes @xmath884{-20}{$\relax\smallsetminus$ } } \mspace{-4mu } } } { \operatorname{int}}\bigl(\hat b_1(d^2)\cup \hat b_2(d^2)\bigr)=\bigl[n{\mathbin{\mathpalette{\mspace{-4mu } \raisebox { { \ifx\relax\displaystyle .8\else \ifx\relax\textstyle .8\else \ifx\relax\scriptstyle .6\else .45 \fi \fi \fi}\depth}{\rotatebox[origin = c]{-20}{$\relax\smallsetminus$ } } \mspace{-4mu } } } { \operatorname{int}}\bigl(\hat b_1(d^2)\cup \hat b_2(d^2)\bigr)\bigr]\times 0 $ ] set - wise , so does @xmath867 by property ( 2 ) ; hence @xmath867 fixes @xmath96 ( since @xmath867 is the identity outside @xmath908 ) . choose an orientation - preserving diffeomorphism @xmath909 that takes @xmath272 to the standard embedding @xmath856 ; then @xmath910 , @xmath275 , is a collection of mutually disjoint , equi - oriented embeddings @xmath911 whose images intersect @xmath96 precisely along @xmath912 , respectively . if @xmath131 is non - trivial , write it as a product of non - trivial transpositions @xmath913 for some @xmath914 . 
for each @xmath915 , write @xmath916 for some @xmath917 , and let @xmath918 be a 2-disk neighborhood of @xmath919 in @xmath854 \smallsetminus { \operatorname{int}}{\overset{}{\underset{{i\neq a_k , b_k}}{\cup } } } b_i'(d^4)$ ] . by lemma [ lem : isotopy-4disk](ii ) there is an ambient isotopy @xmath920 with support on @xmath921 in @xmath51 \smallsetminus { \operatorname{int}}{\overset{}{\underset{{i\neq a_k , b_k}}{\cup } } } b_i'(d^4)$ ] such that @xmath922 fixes @xmath868 \smallsetminus { \operatorname{int}}\bigl(b_{a_k}(d^4)\cup b_{b_k}(d^4)\bigr ) \ ] set - wise and is such that @xmath923 for @xmath924 . define @xmath865 by @xmath925 for @xmath926 and @xmath927 $ ] , where @xmath928 . then @xmath885 is an ambient isotopy which fixes @xmath929 \smallsetminus { \operatorname{int}}{\overset{d}{\underset{{i=1}}{\cup } } } b_i'(d^4)\ ] set - wise and satisfies @xmath930 for each @xmath277 . now , by lemma [ lem : isotopy-4disk](i ) , for each @xmath277 there is an ambient isotopy @xmath931 with support on a 4-ball neighborhood of @xmath932 in @xmath51 \smallsetminus { \operatorname{int}}{\overset{}{\underset{{k\neq \rho(i)}}{\cup } } } b_k'(d^4)$ ] such that @xmath933 fixes @xmath96 set - wise and is such that @xmath934 for @xmath295 . define @xmath861 by @xmath935 for @xmath926 and @xmath936 $ ] , where @xmath609 . then @xmath937 is an ambient isotopy which fixes @xmath96 set - wise and satisfies @xmath938 for @xmath295 . thus if @xmath939 is the ambient isotopy defined by @xmath940 for @xmath941 $ ] and @xmath942 for @xmath943 $ ] , then @xmath944 is the required isotopy .
a. bartels, higher dimensional links are singular slice, _math._ *320* (3) (2001) 547-576.
a. bartels and p. teichner, all two dimensional links are null homotopic, _geom._ *3* (1999) 235-252.
r. fenn and d. rolfsen, spheres may link in 4-space, _j. london math._ *34* (1986) 177-184.
m. freedman and r. kirby, a geometric proof of rochlin's theorem, _proc. ams_ *32* (2) (1978) 85-98.
m. freedman and f. quinn, the topology of 4-manifolds, _princeton math. series_, vol. 39 (princeton, nj, 1990).
n. habegger and x.-s. lin, the classification of links up to homotopy, _soc._ *3* (1990) 389-420.
f. hosokawa and a. kawauchi, proposals for unknotted surfaces in four-spaces, _osaka j. math._ *16* (1979) 233-248.
s. kamada, vanishing of a certain kind of vassiliev invariants of 2-knots, _proc. ams_ *127* (11) (1999) 3421-3426.
p. kirk, link maps in the four sphere, in _proc. 1987 siegen topology conf._, slnm 1350 (springer, berlin, 1988).
p. kirk, link homotopy with one codimension two component, *319* (1990) 663-688.
u. koschorke, on link maps and their homotopy classification, _math._ *286* (1990) 753-782.
a. kosinski, _differential manifolds_ (academic press, san diego, 1993).
li, an invariant of link homotopy in dimension four, _topology_ *36* (1997) 881-897.
a. lightfoot, on invariants of link maps in dimension four, _j. knot theory ramifications_, july 2016, doi: http://dx.doi.org/10.1142/s0218216516500607.
j. milnor, link groups, _ann. of math._ *59* (1954) 177-195.
w. s. massey and d. rolfsen, homotopy classification of higher-dimensional links, _indiana univ. j._ *34* (1985) 375-391.
r. schneiderman and p. teichner, higher order intersection numbers of 2-spheres in 4-manifolds, _algebraic and geometric topology_ *1* (2001) 1-29.
c. t. c. wall, _surgery on compact manifolds_ (academic press, new york, 1970).
it is an open problem whether kirk's @xmath0 invariant is the complete obstruction to a link map @xmath1 being link homotopically trivial . with the objective of constructing counterexamples , li proposed a link homotopy invariant @xmath2 that is defined on the kernel of @xmath0 and also obstructs link nullhomotopy . we show that @xmath2 is determined by @xmath0 , and is a strictly weaker invariant .
innovation and technological change have been described by many scholars as the main drivers of economic growth , as in @xcite and @xcite . @xcite advocated the use of patents as an economic indicator and as a good proxy for innovation . subsequently , the easier availability of comprehensive databases on patent details and the increasing number of studies allowing a more efficient use of these data ( e.g. @xcite ) have opened the way to a very wide range of analyses . most of the statistics derived from the patent databases relied on a few key features : the identity of the inventor , the type and identity of the rights owner , the citations made by the patent to prior art , and the technological classes assigned by the patent office after reviewing the patent's content . combining this information is particularly relevant when trying to capture the diffusion of knowledge and the interaction between technological fields , as studied in @xcite . with methods such as the citation dynamics modeling discussed in @xcite or the co - authorship network analysis in @xcite , a large body of the literature , such as @xcite or @xcite , has studied the patent citation network to understand the processes driving technological innovation , diffusion and the birth of technological clusters . finally , @xcite look at the dynamics of citations from different classes to show that the laser / ink - jet printer technology resulted from the recombination of two different existing technologies . consequently , technological classification combined with other features of patents can be a valuable tool for researchers interested in studying technologies throughout history and in predicting future innovations by looking at past knowledge and interactions across sectors and technologies . but it is also crucial for firms that face an ever changing demand structure and need to anticipate future technological trends and convergence ( see , e.g. , @xcite ) in order to adapt to the resulting increase in competition discussed in @xcite and to maintain market share . curiously , and in spite of the large number of studies that analyze interactions across technologies @xcite , little is known about the underlying `` innovation network '' ( e.g. @xcite ) . in this monograph , we propose an alternative classification based on semantic network analysis of patent abstracts and explore the new information emerging from it . in contrast with the regular technological classification , which results from the choice of the patent reviewer , the semantic classification is carried out automatically based on the content of the patent abstract . although patent officers are experts in their fields , the relevance of the existing classification is limited by the fact that it is based on the state of technology at the time the patent was granted and cannot anticipate the birth of new fields . in contrast , the semantic approach does not face this issue . the semantic links can be clues that one technology is taking inspiration from another , and good predictors of future technology convergence ( e.g. @xcite study semantic similarities from the whole text of 326 us patents on _ phytosterols _ and show that semantic analysis has good power to predict future technology convergence ) . one can for instance consider the case of the word _ optic_. until recently , this word was often associated with technologies such as photography or eye surgery , while it is now almost exclusively used in the context of semi - transistor design and electro - optics . this semantic shift did not happen by chance but contains information on the fact that modern electronics extensively uses technologies that were initially developed in optics . 
this semantic shift did not happen by chance but contains information on the fact that modern electronic extensively uses technologies that were initially developed in optic . previous research has already proposed to use semantic networks to study technological domains and detect novelty . @xcite was one of the first to enhance this approach with the idea of visualizing keywords network illustrated on a small technological domain . the same approach can be used to help companies identifying the state of the art in their field and avoid patent infringement as in @xcite and @xcite . more closely related to our methodology , @xcite develop a method based on patent semantic analysis of patent to vindicate the view that this approach outperform others in the monitoring of technology and in the identification of novelty innovation . semantic analysis has already proven its efficiency in various fields , such as in technology studies ( e.g. @xcite and @xcite ) and in political science ( e.g. @xcite ) . building on such previous research , we make several contributions by fulfilling some shortcomings of existing studies , such as for example the use of frequency - selected single keywords . first of all , we develop and implement a novel fully - automatized methodology to classify patents according to their semantic abstract content , which is to the best of our knowledge the first of its type . this includes the following refinements for which details can be found in section [ keywords ] : ( i ) use of multi - stems as potential keywords ; ( ii ) filtering of keywords based on a second - order ( co - occurrences ) relevance measure and on an external independent measure ( technological dispersion ) ; ( iii ) multi - objective optimization of semantic network modularity and size . the use of all this techniques in the context of semantic classification is new and essential from a practical perspective . furthermore , most of the existing studies rely on a subsample of patent data , whereas we implement it on the full us patent database from 1976 to 2013 . this way , a general structure of technological innovation can be studied . we draw from this application promising qualitative stylized facts , such as a qualitative regime shift around the end of the 1990s , and a significant improvement of citation modularity for the semantic classification when comparing to the technological classification . these thematic conclusions validate our method as a useful tool to extract endogenous information , in a complementary way to the technological classification . finally , the statistical model introduced in section [ statisticalmodel ] seems to indicate that patents tend to cite more similar patents in the semantic network when fitted to data . in particular , this propensity is shown to be significantly bigger than the corresponding propensity for technological classes , and this seems to be consistent over time . on the account of this information , we believe that patent officers could benefit very much from looking at the semantic network when considering potential citation candidates of a patent in review . the paper is organized as follows . section [ data ] presents the patent data , the existing classification and provide details about the data collection process . section [ keywords ] explains the construction of the semantic classes . section [ result ] tests their relevance by providing exploratory results . 
finally , section [ discussion ] discusses potential further developments and concludes . more details , including robustness checking , figures and technical derivations can be found in . in our analysis , we will consider all utility patents granted by the united states patent and trademark office ( uspto ) from 1976 to 2013 . a clearer definition of utility patent is given in . also , additional information on how to correctly exploit patent data can be found in @xcite and @xcite . each uspto patent is associated with a non - empty set of technological classes and subclasses . there are currently around 440 classes and over 150,000 subclasses constituting the united states patent classification ( uspc ) system . while a technological class corresponds to the technological field covered by the patent , a subclass stands for a specific technology or method used in the invention . a patent can have multiple technological classes ; on average , in our data a patent has 1.8 different classes and 3.9 pairs of class / subclass . at this stage , two features of this system are worth mentioning : ( i ) classes and subclasses are not chosen by the inventors of the patent but by the examiner during the granting process , based on the content of the patent ; ( ii ) the classification has evolved over time and continues to change in order to adapt to new technologies by creating or editing classes . when a change occurs , the uspto reviews all the previous patents so as to create a consistent classification . as with scientific publications , patents must give reference to all the previous patents which correspond to related prior art . they therefore indicate the past knowledge which relates to the patented invention . yet , contrary to scientific citations , they also have an important legal role , as they are used to delimit the scope of the property rights awarded by the patent . one can consult @xcite for more details about this . failing to refer to prior art can lead to the invalidation of the patent ( e.g. @xcite ) . another crucial difference is that the majority of the citations are actually chosen by the examiners and not by the inventors themselves . from the uspto , we gather information on all citations made by each patent ( backward citations ) and all citations received by each patent as of the end of 2013 ( forward citations ) . we can thus build a complete network of citations that we will use later on in the analysis . turning to the structure of the lag between the citing and the cited patent in terms of application date , we see that the mean of this lag is 8.5 years and the median is 7 years . this distribution is highly skewed : the @xmath0 percentile is 21 years . we also report 164,000 citations with a negative time lag . this is due to the fact that some citations can be added during the examination process and some patents require more time to be granted than others . in what follows , we choose to restrict attention to pairs of citations with a lag no larger than 5 years . we impose this restriction for two reasons . first , the number of citations received peaks 4 - 5 years after application . second , the structure of the citation lag is necessarily biased by the truncation of our sample : the more recent patents mechanically receive fewer citations than the older ones . as we are restricting to citations received no later than 5 years after the application date , this effect only affects patents with an application date after 2007 . 
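to make the lag restriction above concrete , the following minimal ` python ` sketch computes citation lags and applies the five - year window . it is only an illustration : the toy table and its column names are assumptions , not the actual schema of our citation data .

```python
import pandas as pd

# toy citation table: one row per (citing patent, cited patent) pair with the
# application year of each side; column names are hypothetical stand-ins.
citations = pd.DataFrame({
    "citing_id": [101, 102, 103, 104],
    "cited_id": [11, 12, 13, 14],
    "citing_app_year": [2001, 2003, 2005, 2000],
    "cited_app_year": [1995, 2001, 2004, 2002],
})

# citation lag in application years; negative lags can occur when citations
# are added during the examination process, as noted in the text.
citations["lag"] = citations["citing_app_year"] - citations["cited_app_year"]
print(citations["lag"].describe())

# keep only pairs with a lag no larger than 5 years, mirroring the restriction
# used in the analysis (the treatment of negative lags is left to the analyst).
restricted = citations[citations["lag"] <= 5]
print(len(restricted), "citation pairs retained")
```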
each patent contains an abstract and a core text which describe the invention . form at https://patents.google.com . ] although including the full core texts would be natural and probably very useful in a systematic text - mining approach as done in @xcite , they are too long to be included and thus we consider only the abstracts for the analysis . indeed , the semantic analysis counts more than 4 million patents , with corresponding abstracts with an average length of 120.8 words ( and a standard deviation of @xmath1 ) , a size that is already challenging in terms of computational burden and data size . in addition , abstracts are aimed at synthesizing purpose and content of patents and must therefore be a relevant object of study ( see @xcite ) . the uspto defines a guidance stating that an abstract should be `` a summary of the disclosure as contained in the description , the claims , and any drawings ; the summary shall indicate the technical field to which the invention pertains and shall be drafted in a way which allows the clear understanding of the technical problem , the gist of the solution of that problem through the invention , and the principal use or uses of the invention '' ( pct rule 8) . we construct from raw data a unified database . data is collected from uspto patent redbook bulk downloads , that provides as raw data ( specific ` dat ` or ` xml ` formats ) full patent information , starting from 1976 . detailed procedure of data collection , parsing and consolidation are available in . the latest dump of the database in ` mongodb ` format is available at http://dx.doi.org/10.7910/dvn/bw3ack collection and homogenization of the database into a directly usable database with basic information and abstracts was an important task as uspto raw data formats are involved and change frequently . we count 4,666,365 utility patents with an abstract granted from 1976 to 2013 . the number of patents granted each year increases from around 70,000 in 1976 to about 278,000 in 2013 . when distributed by the year of application , the picture is slightly different . the number of patents steadily increase from 1976 to 2000 and remains constant around 200,000 per year from 2000 to 2007 . restricting our sample to patent with application date ranging from 1976 to 2007 , we are left with 3,949,615 patents . these patents cite 38,756,292 other patents with the empirical lag distribution that has been extensively analyzed in @xcite . conditioned on being cited at least once , a patent receives on average 13.5 citations within a five - year window . 270,877 patents receive no citation during the next five years following application , 10% of patents receive only one citation and 1% of them receive more than 100 citations . a within class citation is defined as a citation between two patents sharing at least one common technological class . following this definition , 84% of the citations are within class citations . 14% of the citations are between two patents that share the exact same set of technological classes . potentialities of text - mining techniques as an alternative way to analyze and classify patents are documented in @xcite . the author s main argument , in support of an automatic classification tool for patent , is to reduce the considerable amount of human effort needed to classify all the applications . 
the work conducted in the field of natural language processing and/or text analysis has been developed in order to improve search performance in patent databases , build technology map or investigate the potential infringement risks prior to developing a new technology ( see @xcite for a review ) . text - mining of patent documents is also widely used as a tool to build networks which carry additional information to the simplistic bibliographic connections model as argued in @xcite . as far as the authors know , the use of text - mining as a way to build a global classification of patents remains however largely unexplored . one notable exception can be found in @xcite where semantic - based classification is shown to outperform the standard classification in predicting the convergence of technologies even in small samples . semantic analysis reveals itself to be more flexible and more quickly adaptable to the apparition of new clusters of technologies . indeed , as argued in @xcite , before two distinct technologies start to clearly converge , one should expect similar words to be used in patents from both technologies . finally , a semantic classification where patents are gathered based on the fact that they share similar significant keywords has the advantage of including a network feature that can not be found in the uspc case , namely that each patent is associated with a vector of probability to belong to each of the semantic classes ( more details on this feature can be found in section [ characteristics ] ) . using co - occurrence of keywords , it is then possible to construct a network of patents and to study the influence of some key topological features . in this section , we describe methods and empirical analysis leading to the construction of semantic network and the corresponding classification . let @xmath2 be the set of patents , we first assign to a patent @xmath3 a set of potentially significant keywords @xmath4 from its text @xmath5 ( that corresponds to the concatenation of its own title and abstract ) . @xmath4 are extracted through a similar procedure as the one detailed in @xcite : 1 . text parsing and tokenization : we transform raw texts into a set of words and sentences , reading it ( parsing ) and splitting it into elementary entities ( words organized in sentences ) . part - of - speech tagging : attribution of a grammatical function to each of the tokens defined previously . stem extraction : families of words are generally derived from a unique root called stem ( for example ` compute ` , ` computer ` , ` computation ` all yield the same stem ` comput ` ) that we extract from tokens . at this point the abstract text is reduced to a set of stems and their grammatical functions . multi - stems construction : these are the basic semantic units used in further analysis . they are constructed as groups of successive stems in a sentence which satisfies a simple grammatical function rule . the length of the group is between 1 and 3 and its elements are either nouns , attributive verbs or adjectives . we choose to extract the semantics from such nominal groups in view of the technical nature of texts , which is not likely to contain subtle nuances in combinations of verbs and nominal groups . text processing operations are implemented in ` python ` in order to use built - in functions ` nltk ` library @xcite for most of above operations . this library supports most of state - of - the - art natural language processing operations . 
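the following sketch illustrates the four steps above with the `nltk` library (which requires the `punkt` and `averaged_perceptron_tagger` resources listed in the appendix); the grammatical rule is simplified to nouns and adjectives only, so this is an approximation of the actual multi-stem construction rather than the exact implementation.

```python
# sketch of the keyword-extraction pipeline: sentence splitting, tokenization,
# part-of-speech tagging, stemming, and multi-stem construction with nltk.
import nltk
from nltk.stem import PorterStemmer

stemmer = PorterStemmer()

def multi_stems(text, max_len=3):
    keywords = set()
    for sentence in nltk.sent_tokenize(text.lower()):
        tagged = nltk.pos_tag(nltk.word_tokenize(sentence))
        # keep stems of nouns and adjectives only (simplified grammatical rule)
        stems = [stemmer.stem(w) if tag.startswith(('NN', 'JJ')) else None
                 for w, tag in tagged]
        # groups of 1 to max_len successive kept stems form multi-stems
        for i in range(len(stems)):
            for n in range(1, max_len + 1):
                group = stems[i:i + n]
                if len(group) == n and all(group):
                    keywords.add(' '.join(group))
    return keywords
```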
[ [ relevance - definition ] ] relevance definition + + + + + + + + + + + + + + + + + + + + following the heuristic in @xcite , we estimate relevance score in order to filter multi - stem . the choice of the total number of keywords to be extracted @xmath6 is important , too small a value would yield similar network structures but including less information whereas very large values tend to include too many irrelevant keywords . we choose to set this parameter to @xmath7 . we first consider the filtration of @xmath8 ( with @xmath9 ) to keep a large set of potential keywords but still have a reasonable number of co - occurrences to be computed . this is done on the _ unithood _ @xmath10 , defined for keyword @xmath11 as @xmath12 where @xmath13 is the multi - stem s number of apparitions over the whole corpus and @xmath14 its length in words . a second filtration of @xmath6 keywords is done on the _ termhood _ @xmath15 . the latter is computed as a chi - squared score on the distribution of the stem s co - occurrences and then compared to a uniform distribution within the whole corpus . intuitively , uniformly distributed terms will be identified as plain language and they are thus not relevant for the classification . more precisely , we compute the co - occurrence matrix @xmath16 , where @xmath17 is defined as the number of patents where stems @xmath11 and @xmath18 appear together . the _ termhood _ score @xmath15 is defined as @xmath19 [ [ moving - window - estimation ] ] moving window estimation + + + + + + + + + + + + + + + + + + + + + + + + the previous scores are estimated on a moving window with fixed time length following the idea that the present relevance is given by the most recent context and thus that the influence vanishes when going further into the past . consequently , the co - occurrence matrix is chosen to be constructed at year @xmath20 restricting to patent which applied during the time window @xmath21 $ ] . note that the causal property of the window is crucial as the future can not play any role in the current state of keywords and patents . this way , we will obtain semantic classes which are exploitable on a @xmath22 time span . for example , this enables us to compute the modularity of classes in the citation network as in section [ citationmodularity ] . in the following , we take @xmath23 ( which corresponds to a five year window ) consistently with the choice of maximum time lag for citations made in section [ sub : citation ] . accordingly , the sensitivity analysis for @xmath24 can be found in appendix . we keep the set of most relevant keywords @xmath25 and obtain their co - occurrence matrix as defined in section [ keywords_est ] . this matrix can be directly interpreted as the weighted adjacency matrix of the semantic network . at this stage , the topology of raw networks does not allow the extraction of clear communities . this is partly due to the presence of hubs that correspond to frequent terms common to many fields ( e.g. ` method ` , ` apparat ` ) which are wrongly filtered as relevant . we therefore introduce an additional measure to correct the network topology : the concentration of keywords across technological classes , defined as : @xmath26 where @xmath27 is the number of occurrences of the @xmath28th keyword in each of the @xmath18th technological class taken from one of the @xmath29 uspc classes . the higher @xmath30 , the more specific to a technological class the node is . 
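the exact unithood and termhood formulas appear above only through placeholders; as an illustration of the general idea, the sketch below computes document-level co-occurrences on a given time window and a chi-squared deviation from a uniform co-occurrence profile. this is only an approximation of the termhood score, not the exact definition used in the paper.

```python
# sketch: co-occurrence counts over the patents of a causal time window,
# and a chi-squared score against a uniform co-occurrence profile.
from collections import Counter
from itertools import combinations

def cooccurrences(docs_keywords):
    """docs_keywords: list of keyword sets, one per patent in the window."""
    cooc = Counter()
    for kws in docs_keywords:
        for a, b in combinations(sorted(kws), 2):
            cooc[(a, b)] += 1
            cooc[(b, a)] += 1
    return cooc

def chi2_vs_uniform(keyword, vocabulary, cooc):
    counts = [cooc.get((keyword, other), 0)
              for other in vocabulary if other != keyword]
    if not counts:
        return 0.0
    total = sum(counts)
    if total == 0:
        return 0.0
    expected = total / len(counts)   # uniform profile
    return sum((c - expected) ** 2 / expected for c in counts)
```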
for example, the term `semiconductor` is widely used in electronics and does not carry any significant information within this field. we use a threshold parameter and keep nodes with @xmath31. in a similar manner, edges with low weights correspond to rare co-occurrences and are considered as noise: we filter out edges with a weight lower than a threshold @xmath32, following the rationale that two keywords are not linked "by chance" if they appear together a minimal number of times. to control for size effects, we normalize by taking @xmath33, where @xmath34 is the number of patents in the corpus (@xmath35). communities are then extracted using a standard modularity maximization procedure as described in @xcite, to which we add the two constraints captured by @xmath32 and @xmath36, namely that edges must have a weight greater than @xmath32 and nodes a concentration greater than @xmath36. at this stage, both parameters @xmath36 and @xmath37 are unconstrained and their choice is not straightforward. indeed, many optimization objectives are possible, such as the modularity, the network size or the number of communities. we find that modularity is maximized at a roughly stable value of @xmath32 across different @xmath36 for each year, corresponding to a stable @xmath37 across years, which leads us to choose @xmath38. then, for the choice of @xmath36, different candidate points lie on a pareto front for the bi-objective optimization on number of communities and network size, among which we take @xmath39 (see fig. [ fig : networksensitivity ]).

[ fig. [ fig : networksensitivity ] — (left panel) modularity for different @xmath36 values; the maximum is roughly stable across @xmath36 (dashed red line). (right panel) to choose @xmath36, we perform a pareto optimization on the number of communities and the network size: the compromise point (red) on the pareto front (purple: possible choices after having fixed @xmath37; the blue level gives modularity) corresponds to @xmath39. ]

[ fig. [ fig : rawnetwork ] — raw semantic network obtained with @xmath39 and @xmath40; the corresponding file in vector format (`.svg`), which can be zoomed and explored, is available at `http://37.187.242.99/files/public/network.svg`. ]

for each year @xmath20, we define @xmath41 as the number of semantic classes obtained by clustering keywords from patents that appeared during the period @xmath42 $ ] (we recall that we have chosen @xmath43). each semantic class @xmath44 is characterized by a set of keywords @xmath45, which is a subset of @xmath25 selected as described in section [ keywordsextraction ] to section [ construction ]. the distribution of the cardinality of @xmath46 across semantic classes @xmath47 is highly skewed, with a few semantic classes containing over @xmath48 keywords, most of them of roughly the same size. in contrast, there are also many semantic classes with only two keywords. there are around 30 keywords per semantic class on average, and the median is 2 for any @xmath20. fig. [ fig : mean_k ] shows that the average number of keywords is relatively stable from 1976 to 1992, then peaks around 1996 before going down.

[ fig. [ fig : mean_k ] — average number of keywords per semantic class, from @xmath49 to @xmath50. ]
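the node/edge filtering and the clustering step described in this section can be sketched with `python-igraph` (one of the packages listed in the appendix); the thresholds, the input containers and the use of the multilevel (louvain) heuristic as the modularity-maximization procedure are assumptions of this illustration, not the exact implementation.

```python
# sketch: build the filtered semantic network and extract communities by
# modularity maximization with python-igraph.
import igraph as ig

def semantic_communities(keywords, cooc, concentration, e_th, c_th):
    """keywords: candidate keywords; cooc: dict (kw_a, kw_b) -> weight;
    concentration: dict keyword -> technological concentration."""
    nodes = [k for k in keywords if concentration[k] > c_th]   # node filter
    index = {k: i for i, k in enumerate(nodes)}
    edges, weights = [], []
    for (a, b), w in cooc.items():
        # keep each undirected pair once, between retained nodes, above e_th
        if a < b and a in index and b in index and w > e_th:
            edges.append((index[a], index[b]))
            weights.append(w)
    g = ig.Graph(n=len(nodes), edges=edges)
    g.vs['name'] = nodes
    g.es['weight'] = weights
    communities = g.community_multilevel(weights='weight')
    return g, communities   # communities.membership maps keyword -> class
```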
[ [ title - of - semantic - classes ] ] title of semantic classes
+ + + + + + + + + + + + + + + + + + + + + + + + +
uspc technological classes are defined by a title and a highly accurate definition, which helps retrieve patents easily. the title can be a single word (e.g. class 101: "printing") or more complex (e.g. class 218: "high-voltage switches with arc preventing or extinguishing devices"). as our goal is to release a comprehensive database in which each patent is associated with a set of semantic classes, it is necessary to give an insight into what these classes represent by associating a short description or a title with each of them, as in @xcite. in our case, such a description is taken as a subset of keywords from @xmath45. for the vast majority of semantic classes, which have fewer than 5 keywords, we keep all of these keywords as a description. for the remaining classes, which feature around 50 keywords on average, we rely on the topological properties of the semantic network. @xcite suggest retaining only the most frequently used terms in @xmath45. another possibility is to select 5 keywords based on their network centrality, with the idea that very central keywords are the best candidates to describe the overall idea captured by a community. for example, the largest semantic class in 2003-2007 is characterized by the keywords: ` support packet ; tree network ; network wide ; voic stream ; code symbol reader `.

[ [ size - of - technological - and - semantic - classes ] ] size of technological and semantic classes
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
we consider a specific window of observations (for example 2000-2004), and we define @xmath51 as the number of patents which appeared during that time window. to each patent @xmath52 we associate a vector of probabilities @xmath56, where each component @xmath53 $ ], with @xmath54 and where @xmath55. on average across all time windows, a patent is associated with 1.8 semantic classes with a positive probability. next, we define the size of a semantic class as @xmath57. correspondingly, we aim to provide a consistent definition for technological classes. for that purpose, we follow the so-called "fractional count" method, which was introduced by the uspto and consists in dividing each patent equally between all the classes it belongs to. formally, we define the number of technological classes as @xmath29 (which is not time dependent, contrary to the semantic case) and, for @xmath58, the corresponding matrix of probabilities is defined as @xmath59, where @xmath60 equals @xmath61 if the @xmath11th patent belongs to the @xmath18th technological class and @xmath62 if not. when there is no room for confusion, we will drop the exponent and write only @xmath63 when referring to either the technological or the semantic matrix. empirically, we find that both classifications exhibit a similar hierarchical structure, in the sense of a power-law type distribution of class sizes, as shown in fig. [ fig : class - sizes ]. this feature is important: it suggests that a classification based on the text content of patents has some separating power, in the sense that it does not lump all the patents into one or two communities.

[ fig. [ fig : class - sizes ] — for each year up to @xmath64, sizes of semantic classes (left) and technological classes (right) for the corresponding time window @xmath65 $ ], ranked from the biggest to the smallest (the formal definition of size can be found in section [ characteristics ]); each color corresponds to one specific year. yearly semantic and technological classes present a similar hierarchical structure, which confirms the comparability of the two classifications; this feature is crucial for the statistical analysis in section [ statisticalmodel ]. over time, curves are translated and the levels of hierarchy stay roughly constant. ]
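as a sketch of the two definitions of class size, the fractional-count matrix for technological classes and the column-sum sizes can be computed as follows; a `scipy` sparse matrix is assumed and all names are illustrative.

```python
# sketch: uspto "fractional count" matrix and class sizes as column sums
# of a patent x class probability matrix.
import numpy as np
from scipy import sparse

def fractional_count_matrix(patent_classes, class_index):
    """patent_classes: list of class sets, one per patent;
    class_index: dict class -> column index."""
    rows, cols, vals = [], [], []
    for i, classes in enumerate(patent_classes):
        if not classes:
            continue
        for c in classes:
            rows.append(i)
            cols.append(class_index[c])
            vals.append(1.0 / len(classes))   # equal split between classes
    shape = (len(patent_classes), len(class_index))
    return sparse.csr_matrix((vals, (rows, cols)), shape=shape)

def class_sizes(prob_matrix):
    return np.asarray(prob_matrix.sum(axis=0)).ravel()
```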
our semantic classification method could be refined by combining it with other techniques, such as latent dirichlet allocation, a widely used topic detection method (e.g. @xcite) that has already been applied to patent data, as in @xcite, where it provides a measure of idea novelty and the counter-intuitive stylized fact that breakthrough inventions are likely to come out of local search within a field rather than from distant technological recombination. using this approach should first help to further evaluate the robustness of our qualitative conclusions (external validation). also, depending on its level of orthogonality with our classification, it could bring an additional feature to characterize patents, in the spirit of multi-modeling techniques where neighboring models are combined to take advantage of each point of view on a system. our use of network analysis can also be extended using newly developed techniques of hyper-network analysis. indeed, patents and keywords can for example be the nodes of a bipartite network, or patents can be the links of a hyper-network, in the sense of multiple layers with different classification links and citation links. @xcite provide a method to compare the macroscopic structures of the different layers in a multilayer network, which could be applied as a refinement of the overlap, modularity and statistical modeling studied in this paper. furthermore, it has recently been shown that measures computed on multilayer network projections induce a significant loss of information compared to the corresponding generalized measures @xcite, which confirms the relevance of such developments, which we leave for further research.

in this section, we present some key features of the resulting semantic classification, showing both complementarities and differences with the technological classification. we first present several measures derived from the semantic classification at the patent level: diversity, originality, generality (section [ subsec : orig - gene ]) and overlap (section [ subsec : overlaps ]). we then show that the two classifications have highly different topological measures, and provide strong statistical evidence that they follow different models (sections [ citationmodularity ] and [ statisticalmodel ]).

given a classification system (technological or semantic classes) and the associated probabilities @xmath63 for each patent @xmath11 to belong to class @xmath18 (defined in section [ characteristics ]), one can define a patent-level diversity measure as one minus the herfindahl concentration index on @xmath63, @xmath66. we show in fig. [ fig : patent - level - orig ] the distribution over time of semantic and technological diversity, with the corresponding mean time-series.

[ fig. [ fig : patent - level - orig ] — distributions and mean time-series of semantic and technological diversity from @xmath49 to @xmath50 (with the corresponding time window @xmath80 $ ]); the first row includes all classified patents, whereas the second row includes only patents with more than one class (i.e. patents with diversity greater than 0). ]
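the diversity measure just defined is straightforward to compute from a patent's class-probability vector; a minimal sketch is given below. for instance, a patent split equally between two classes has diversity 0.5, while a single-class patent has diversity 0.

```python
# sketch: patent-level diversity as one minus the herfindahl index of the
# class-membership probabilities.
import numpy as np

def diversity(probas):
    p = np.asarray(probas, dtype=float)
    s = p.sum()
    if s == 0:
        return float('nan')
    p = p / s                       # make sure probabilities sum to one
    return 1.0 - np.sum(p ** 2)     # single-class patents get diversity 0
```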
this analysis is carried out with two different settings, namely including or not including patents with zero diversity (i.e. single-class patents). we call the other patents "complicated patents" in the following. first of all, the presence of mass at small probabilities for semantic but not for technological diversity confirms that the semantic classification spreads patents over a larger number of classes. more interestingly, a general decrease of diversity for complicated patents, both for the semantic and the technological classification systems, can be interpreted as an increase in invention specialization. this is a well-known stylized fact, as documented in @xcite. furthermore, a qualitative regime shift in the semantic classification occurs around 1996. this can be seen whether or not we include patents with zero diversity: the diversity of complicated patents stabilizes after a constant decrease, while the overall diversity begins to decrease strongly. this means that, on the one hand, the number of single-class patents begins to increase and, on the other hand, complicated patents do not change in diversity. it can be interpreted as a change in the regime of specialization, the new regime being driven by more single-class patents.

more commonly used in the literature are the measures of originality and generality. these measures follow the same idea as the diversity defined above, in that they quantify the diversity of the classes (whether technological or semantic) associated with a patent; but instead of looking at the patent's own classes, they consider the classes of the patents that are cited or citing. formally, the originality @xmath67 and the generality @xmath68 of a patent @xmath11 are defined as @xmath69, where @xmath70, @xmath71 denotes the set of patents that are cited by the @xmath11th patent within a five-year window (i.e. if the @xmath11th patent appears in year @xmath20, then we consider patents on @xmath72 $ ]) in the case of originality, and @xmath73 denotes the set of patents that cite patent @xmath11 within five years (i.e. we consider patents on @xmath74 $ ]) in the case of generality. note that the measure of generality is forward-looking, in the sense that @xmath75 uses information that only becomes available 5 years after the patent application. both measures are on average lower when based on the semantic classification than on the technological classification. fig. [ fig : orig - gene ] plots the mean values of @xmath76, @xmath77, @xmath78 and @xmath79.

[ fig. [ fig : orig - gene ] — mean values of @xmath76, @xmath77, @xmath78 and @xmath79 from @xmath49 to @xmath50 (with the corresponding time window @xmath80 $ ]), as defined in subsection [ subsec : orig - gene ]. ]
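the originality and generality formulas are only given above through placeholders; the sketch below computes an originality-type score as one minus the herfindahl index of the aggregated class shares of the cited patents, which is one natural reading of the definition (the aggregation by simple averaging of the cited patents' probability vectors is an assumption of this illustration). generality is obtained by applying the same function to the citing set.

```python
# hedged sketch of an originality-type measure based on the classes of the
# patents cited within the five-year window.
import numpy as np

def originality(cited_ids, prob_of):
    """cited_ids: patents cited within the window;
    prob_of: dict patent_id -> class-probability vector (numpy array)."""
    vectors = [prob_of[j] for j in cited_ids if j in prob_of]
    if not vectors:
        return float('nan')
    shares = np.mean(vectors, axis=0)   # aggregate class shares of cited patents
    shares = shares / shares.sum()
    return 1.0 - np.sum(shares ** 2)
```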
a proximity measure between two classes can be defined by their overlap in terms of patents. such a measure could for example be used to construct a metric between semantic classes. intuitively, highly overlapping classes are very close in terms of technological content, and one can use them to measure the distance between two firms in terms of technology, as done in @xcite. formally, recalling the definition of @xmath81 as the probability for the @xmath11th patent to belong to the @xmath18th class and of @xmath34 as the number of patents, it writes @xmath82. the overlap is normalized by patent count to account for the effect of corpus size: by convention, we assume the overlap to be maximal when there is only one class in the corpus. a corresponding relative overlap is computed as a set-similarity measure on the number of patents common to two classes a and b, given by @xmath83.

[ [ intra - classification - overlaps ] ] intra - classification overlaps
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
the study of the distributions of overlaps inside each classification, i.e. between technological classes and between semantic classes separately, reveals the structural difference between the two classification methods and suggests their complementary nature. their evolution in time can furthermore give insights into trends of specialization. we show in fig. [ fig : intra - classif - overlap ] the distributions and mean time-series of overlaps for the two classifications. the technological classification always follows a globally decreasing trend, corresponding to more and more isolated classes, i.e. specialized inventions, confirming the stylized fact obtained in the previous subsection. for semantic classes, the dynamic is somewhat more intriguing and supports the story of a qualitative regime shift suggested before. although globally decreasing like the technological overlap, the normalized (resp. relative) mean semantic overlap exhibits a peak (clearer for the normalized overlap) culminating in 1996. looking at normalized overlaps, the classification structure was somewhat stable until 1990, then strongly increased to peak in 1996, and has decreased at a similar pace up to now. technologies began to share more and more until a breakpoint, after which increasing isolation became the rule again. an evolutionary perspective on technological innovation @xcite could shed light on possible interpretations of this regime shift: as species evolve, the fitness landscape would first have been locally favorable to cross-insemination, until each fitness reached a threshold above which auto-specialization became the optimal path. it is very comparable to the establishment of an ecological niche @xcite, the strong interdependency originating here during the mutual insemination and resulting in a highly path-dependent final situation.

[ fig. [ fig : intra - classif - overlap ] — (left column) distributions of intra-classification overlaps for all @xmath84 (zero values are removed because of the log-scale); (right column) corresponding mean time-series; (first row) normalized overlaps; (second row) relative overlaps. ]
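since the overlap formulas above are elided, the following is only a hedged stand-in: the normalized overlap is approximated by a probabilistic co-membership count divided by corpus size, and the relative overlap by a jaccard index on the supports of the two classes; these are not the exact definitions used in the paper.

```python
# hedged sketch of class-overlap measures between classes j and k.
import numpy as np

def normalized_overlap(P, j, k):
    """P: patents x classes probability matrix (numpy array)."""
    return np.sum(P[:, j] * P[:, k]) / P.shape[0]

def relative_overlap(P, j, k):
    a = set(np.nonzero(P[:, j])[0])     # patents with positive probability in j
    b = set(np.nonzero(P[:, k])[0])
    union = a | b
    return len(a & b) / len(union) if union else 0.0
```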
[ [ inter - classification - overlaps ] ] inter - classification overlaps
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + +
overlaps _ between _ classifications are defined as in ( [ overlap ] ), but with @xmath18 standing for the @xmath18th technological class and @xmath47 for the @xmath47th semantic class: @xmath63 are technological probabilities and @xmath85 semantic probabilities. they describe the relative correspondence between the two classifications and are a good indicator to spot relative changes, as shown in fig. [ fig : inter - classif - overlap ]. the mean inter-classification overlap clearly exhibits two linear trends, the first one being constant from 1980 to 1996, followed by a constant decrease. although difficult to interpret directly, this stylized fact clearly unveils a change in the _ nature _ of inventions, or at least in the relation between the content of inventions and the technological classification. as this tipping point occurs at the same time as the ones observed in the previous section, and since the two statistics are different, it is unlikely that this is a mere coincidence. thus, these observations could be markers of hidden underlying structural changes in innovation processes.

an exogenous source of information on the relevance of the classifications is the citation network described in section [ sub : citation ]. the correspondence between citation links and classes should provide a measure of accuracy of the classifications, in the sense of an external validation, since it is well known that citation homophily is expected to be quite high (see, e.g., @xcite). this section studies empirically the modularities of the citation network with respect to the different classifications. to corroborate the obtained results, we propose a more rigorous framework in section [ statisticalmodel ]. modularity is a simple measure of how well the communities of a network are clustered (see @xcite for the precise definition). although initially designed for single-class classifications, this measure can be extended to the case where nodes can belong to several classes at the same time, in our case with different probabilities, as introduced in @xcite. the simple directed modularity is given in our case by @xmath86 \delta(c_i , c_j), with @xmath87 the citation adjacency matrix (i.e. @xmath88 if there is a citation from the @xmath11th patent to the @xmath18th patent, and @xmath89 if not), and @xmath90 (resp. @xmath91) the in-degree (resp. out-degree) of the patents, i.e. the number of citations received by the @xmath11th patent and the number of citations it makes, respectively. @xmath92 can be defined for each of the two classification systems: @xmath70.
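a sketch of the simple directed modularity just defined is given below, using the standard degree-based null model and a single class label per patent (the "main class" defined just below); the multi-class (overlapping) variant additionally weights each pair by the class-membership terms and is not reproduced here.

```python
# sketch: simple directed modularity of the citation network for a given
# assignment of one class label per patent.
import numpy as np

def directed_modularity(A, classes):
    """A: dense adjacency matrix (A[i, j] = 1 if patent i cites patent j);
    classes: numpy array of main-class labels, one per patent."""
    m = A.sum()
    if m == 0:
        return 0.0
    k_out = A.sum(axis=1)           # citations made by each patent
    k_in = A.sum(axis=0)            # citations received by each patent
    same = (classes[:, None] == classes[None, :])
    null = np.outer(k_out, k_in) / m
    return float(np.sum((A - null) * same) / m)
```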
if @xmath93 , @xmath94 is defined as the main patent class , which is taken as the first class whereas if @xmath95 , @xmath94 is the class with the largest probability . multi - class modularity in turns is given by @xmath96,\ ] ] where @xmath97 we take @xmath98 as suggested in @xcite . modularity is an aggregated measure of how the network deviates from a null model where links would be randomly made according to node degree . in other words it captures the propensity for links to be inside the classes . overlapping modularity naturally extends simple modularity by taking into account the fact that nodes can belong simultaneously to many classes . we document in fig . [ fig : modularities ] both simple and multi - class modularities over time . for simple modularity , @xmath99 is low and stable across the years whereas @xmath100 is slightly greater and increasing . these values are however low and suggest that single classes are not sufficient to capture citation homophily . multi - class modularities tell a different story . first of all , both classification modularities have a clear increasing trend , meaning that they become more and more adequate with citation network . the specializations revealed by both patent level diversities and classes overlap is a candidate explanation for this growing modularities . secondly , semantic modularity dominates technological modularity by an order of magnitude ( e.g. 0.0094 for technological against 0.0853 for semantic in 2007 ) at each time . this discrepancy has a strong qualitative significance . our semantic classification fits better the citation network when using multiple classes . as technologies can be seen as a combination of different components as shown by @xcite , this heterogeneous nature is most likely better taken into account by our multi - class semantic classification . in this section , we develop a statistical model aimed at quantifying performance of both technological and semantic classification systems . in particular , we aim at corroborating findings obtained in section [ citationmodularity ] . the mere difference between this approach and the citation modularity approach lies in the choice of the underlying model , and the according quantities of interest . in addition for the semantic approach , we want to see if when restricting to patents with higher probabilities to belong to a class , we obtain better results . to do that , we choose to look at within class citations proportion ( for both technological and semantic approaches ) . we provide two obvious reasons why we choose this . first , the citations are commonly used as a proxy for performance as mentioned in section [ citationmodularity ] . second , this choice is `` statistically fair '' in the sense that both approaches have focused on various goals and not on maximizing directly the within class proportion . nonetheless , the within class proportion is too sensitive to the distribution of the shape of classes . for example , a dataset where patents for each class account for 10% of the total number of patents will mechanically have a better within class proportion than if each class accounts for only 1% . consequently , an adequate statistical model , which treats datasets fairly regardless of their distribution in classes , is needed . this effort ressembles to the previous study of citation modularity , but is complementary since the model presented here can be understood as an elementary model of citation network growth . 
furthermore , the parameters fitted here can have a direct interpretation as a citation probability . we need to introduce and recall some notations . we consider a specific window of observations @xmath101 $ ] , and we define @xmath51 the number of patents which appeared during that time window . we let @xmath102 their corresponding appearance date by chronological order , which for simplicity are assumed to be such that @xmath103 . for each patent @xmath52 we consider @xmath104 the number of distinctive couples \{cited patent , cited patent s class } made by the @xmath11th patent ( for instance if the @xmath11th patent has only made one citation and that the cited patent is associated with three classes , then @xmath105 ) . let @xmath70 , we define @xmath106 the number of patents associated to at least one of the @xmath11th classes at time @xmath107 . for @xmath108 we consider the variables @xmath109 , which equal @xmath61 if the cited patent s class is also common to the @xmath11th patent . we assume that @xmath109 are independent of each other and conditioned on the past follow bernoulli variables @xmath110 where the parameter @xmath111 indicates the propensity for any patent to cite patents of its own technological or semantic class . when @xmath112 , the probability of citing patents from its own class is simply @xmath113 , which corresponds to the observed proportion of patents which belong to at least one of the @xmath11th patent s classes . thus this corresponds to the estimated probability of citing one patent if we assume that the probability of citing any patent @xmath114 is uniformly distributed , which could be a reasonable assumption if classes were assigned randomly and independently from patent abstract contents . conversely if @xmath115 , we are in the case of a model where there are 100% of within class citations . a reasonable choice of @xmath116 lies between those two extreme values . finally , we assume that the number of distinctive couples @xmath104 are a sequence of independent and identically distributed random variables following the discrete distribution @xmath117 , and also independent from the other quantities . we estimate @xmath116 via maximum likelihood , and obtain the corresponding maximum likelihood estimator ( mle ) @xmath118 . the likelihood function , along with the standard deviation expression and details about the test , can be found in . the fitted values , standard errors and p - values corresponding to the statistical test @xmath119 ( with corresponding alternative hypothesis @xmath120 ) on non - overlapping blocks from the period 1980 - 2007 are reported on table [ summary ] . semantic values are reported for four different chosen thresholds @xmath121 . it means that we restricted to the couples ( @xmath11th patent , @xmath18th class ) such that @xmath122 . the choice of considering non - overlapping blocks ( instead of overlapping blocks ) is merely statistical . ultimately , our interest is in the significance of the test over the whole period 1980 - 2007 . thus , we want to compute a global p - value . this can be done considering the local p - values ( by local , we mean for instance computed on the period 2001 - 2005 ) assuming independence between them . this assumption is reasonable only if the blocks are non - overlapping . all of this can be found in . finally , note that from a statistical perspective , including overlapping blocks would nt yield more information . 
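as a sketch of the estimation step, the log-likelihood can be maximized over a grid of candidate values of the propensity parameter; the parametrization of the bernoulli probability as the baseline within-class share plus the propensity is an assumption of this illustration, consistent with the variance term reported in the appendix but not a verbatim transcription of the model equation.

```python
# hedged sketch: grid-search maximum likelihood estimation of the citation
# propensity theta under a bernoulli model with probability (p_i + theta).
import numpy as np

def log_likelihood(theta, x, p):
    """x: 0/1 within-class indicators; p: baseline shares per observation."""
    q = np.clip(p + theta, 1e-12, 1 - 1e-12)
    return np.sum(x * np.log(q) + (1 - x) * np.log(1 - q))

def fit_theta(x, p, grid=np.linspace(0.0, 0.999, 1000)):
    x, p = np.asarray(x, float), np.asarray(p, float)
    # only grid points keeping p + theta a valid probability are considered
    values = [log_likelihood(t, x, p) if np.all(p + t < 1) else -np.inf
              for t in grid]
    return grid[int(np.argmax(values))]
```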
the values reported in table [ summary ] are overwhelmingly against the null hypothesis. the global estimates of @xmath123 are significantly bigger than the estimate of @xmath124 for all the considered thresholds. although the corresponding p-values (which are also very close to 0) are not reported, it is also quite clear that the bigger the threshold, the higher the corresponding estimate of @xmath123. this is seen consistently for every period, and is significant for the global period. this seems to indicate that, when restricting to the couples (patent, class) with high semantic probability, the propensity @xmath123 to cite patents from one's own class increases. we believe that this might provide extra information to patent officers when making their choice of citations: they could look first at patents which belong to the same semantic class, especially when patents have high semantic probability values. note that the introduced model can be seen as a simple model of citation network growth conditional on a classification, which can be expressed as a stochastic block model (e.g. @xcite, @xcite). the parameters are estimated by computing the corresponding mle. in view of @xcite, this can be thought of as equivalent to maximizing modularity measures.

table [ summary ] — fitted values @xmath118, standard errors and p-values of the test @xmath119 (against the alternative @xmath120), computed on non-overlapping blocks of the period 1980-2007; one panel per block, with semantic rows corresponding to the four thresholds @xmath121.

classification | @xmath118 | std. error | p-value
( block 1 )
technological | .664 | .008 | -
semantic @xmath125 | .741 | .047 | .053
semantic @xmath126 | .799 | .081 | .049
semantic @xmath127 | .828 | .126 | .097
semantic @xmath128 | .834 | .166 | .153
( block 2 )
technological | .634 | .007 | -
semantic @xmath125 | .703 | .022 | .001
semantic @xmath126 | .768 | .040 | .0004
semantic @xmath127 | .804 | .069 | .007
semantic @xmath128 | .832 | .114 | .041
( block 3 )
technological | .619 | .006 | -
semantic @xmath125 | .655 | .009 | .0004
semantic @xmath126 | .713 | .017 | 9e-08
semantic @xmath127 | .731 | .025 | 7e-06
semantic @xmath128 | .750 | .037 | 9e-06
( block 4 )
technological | .551 | .003 | -
semantic @xmath125 | .585 | .002 | @xmath129
semantic @xmath126 | .638 | .004 | @xmath129
semantic @xmath127 | .660 | .006 | @xmath129
semantic @xmath128 | .686 | .008 | @xmath129
( block 5 )
technological | .567 | .003 | -
semantic @xmath125 | .621 | .004 | @xmath129
semantic @xmath126 | .676 | .007 | @xmath129
semantic @xmath127 | .701 | .010 | @xmath129
semantic @xmath128 | .710 | .013 | @xmath129
( block 6 )
technological | .600 | .007 | -
semantic @xmath125 | .683 | .016 | 1e-06
semantic @xmath126 | .732 | .025 | 2e-07
semantic @xmath127 | .760 | .036 | 6e-06
semantic @xmath128 | .782 | .048 | 9e-05
( block 7 )
technological | .606 | .002 | -
semantic @xmath125 | .665 | .009 | 8e-11
semantic @xmath126 | .721 | .017 | 9e-12
semantic @xmath127 | .747 | .025 | 9e-09
semantic @xmath128 | .782 | .035 | 3e-07

the main contribution of this study is twofold. first, we have explained how we built a network of patents based on a classification that uses semantic information from abstracts. we have shown that this classification shares some similarities with the traditional technological classification, but also has distinct features. second, we provide researchers with materials resulting from our analysis, which include: (i) a database linking each patent with its set of semantic classes and the associated probabilities; (ii) a list of these semantic classes with a description based on the most relevant keywords; (iii) a list of patents with their topological properties in the semantic network (centrality, frequency, degree, etc.).
the availability of these data suggests new avenues for further research . a first potential application is to use the patents topological measures inherited from their relevant keywords . the fact that these measures are backward - looking and immediately available after the publication of the patent information is an important asset . it would for example be very interesting to test their predicting power to assess the quality of an innovation , using the number of forward citations received by a patent , and subsequently the future effect on the firm s market value . regarding firm innovative strategy , a second extension could be to study trajectories of firms in the two networks : technological and semantic . merging these information with data on the market value of firms can give a lot of insight about the more efficient innovative strategies , about the importance of technology convergence or about acquisition of small innovative firms . it will also allow to observe innovation pattern over a firm life cycle and how this differ across technology field . a third extension would be to use dig further into the history of innovation . uspto patent data have been digitized from the first patent in july 1790 . however , not all of them contain a text that is directly exploitable . we consider that the quality of patent s images is good enough to rely on optical character recognition techniques to retrieve plain text from at least 1920 . with such data , we would be able to extend our analysis further back in time and to study how technological progress occurs and combines in time . @xcite conduct a similar work by looking at recombination and apparition of technological subclasses . using the fact that communities are constructed yearly , one can construct a measure of proximity between two successive classes . this could give clear view on how technologies converged over the year and when others became obsolete and replaced by new methods . * describes with more details the definition of patents and context . * * detailed description of data collection * * vector file of the semantic network ( fig.[fig : rawnetwork ] ) * available at + http://37.187.242.99/files/public/network.svg * extended figures for network sensitivity analysis * * extended definitions and derivations for the statistical model * 10 aghion p , howitt p. . econometrica . 1992 march;60(2):32351 . available from : https://ideas.repec.org/a/ecm/emetrp/v60y1992i2p323-51.html . . journal of political economy . 1990 october;98(5):s71102 . available from : https://ideas.repec.org/a/ucp/jpolec/v98y1990i5ps71-102.html . griliches z. . national bureau of economic research , inc ; 1990 . 3301 . available from : https://ideas.repec.org/p/nbr/nberwo/3301.html . hall bh , jaffe ab , trajtenberg m. . discussion papers ; 2001 . available from : https://ideas.repec.org/p/cpr/ceprdp/3094.html . youn h , strumsky d , bettencourt lma , lobo j. invention as a combinatorial process : evidence from us patents . journal of the royal society interface . 2015;12(106 ) . arxiv e - prints . 2013 oct;. e , pfitzner r , scholtes i , garas a , schweitzer f. . arxiv e - prints . 2014 feb;. sorenson o , rivkin jw , fleming l. complexity , networks and knowledge flow . research policy . 2006;35(7):9941017 . kay l , newman n , youtie j , porter al , rafols i. patent overlay mapping : visualizing technological distance . journal of the association for information science and technology . 2014;65(12):24322443 . 
bruck p , rthy i , szente j , tobochnik j , rdi p. recognition of emerging technology trends : class - selective study of citations in the us patent citation network . 2016;107(3):14651475 . curran cs , leker j. patent indicators for monitoring convergence examples from nff and ict . technological forecasting and social change . 2011;78(2):256273 . remarks on the economic implications of convergence . industrial and corporate change . 1996;5(4):10791095 . furman jl , stern s. climbing atop the shoulders of giants : the impact of institutions on cumulative research . american economic review . 2011 august;101(5):193363 . available from : http://www.aeaweb.org/articles?id=10.1257/aer.101.5.1933 . acemoglu au daron , kerr w. . proceedings of the national academy of sciences ( forthcoming ) . 2016;. preschitschek n , niemann h , leker j , moehrle mg . anticipating industry convergence : semantic analyses vs ipc co - classification analyses of patents . foresight . 2013 11;15(6):446464 . yoon b , park y. a text - mining - based patent network : analytical tool for high - technology trend . the journal of high technology management research . 2004;15(1):3750 . park i , yoon b. a semantic analysis approach for identifying patent infringement based on a product patent map . technology analysis & strategic management . 2014;26(8):855874 . yoon j , kim k. detecting signals of new technological opportunities using semantic patent analysis and outlier detection . scientometrics . 2011;90(2):445461 . gerken jm , moehrle mg . a new instrument for technology monitoring : novelty in patents measured by semantic patent analysis . 2012;91(3):645670 . choi j , hwang ys . patent keyword network analysis for improving technology development efficiency . technological forecasting and social change . 2014;83:170182 . fattori m , pedrazzi g , turra r. text mining applied to patent mapping : a practical business case . world patent information . 2003;25(4):335342 . s , smallegan m , pereda m , battiston f , patania a , poledna s , et al . . arxiv e - prints . 2015 oct;. lerner j , seru a. the use and misuse of patent data : issues for corporate finance and beyond . booth / harvard business school working paper . 2015;. oecd . oecd patent statistics manual . 2009;available from : /content / book/9789264056442-en . dechezlepretre a , martin r , mohnen m. . centre for economic performance , lse ; 2014 . dp1300 . available from : https://ideas.repec.org/p/cep/cepdps/dp1300.html . tseng yh , lin cj , lin yi . text mining techniques for patent analysis . information processing & management . 2007;43(5):12161247 . world patent information . 2010 march;32(1):2229 . available from : https://ideas.repec.org/a/eee/worpat/v32y2010i1p22-29.html . abbas a , zhang l , khan su . a literature review on the state - of - the - art in patent analysis . world patent information . 2014;37:313 . chavalarias d , cointet jp . phylomemetic patterns in science evolution the rise and fall of scientific fields . plos one . 2013;8(2):e54847 . natural language toolkit , stanford university ; 2015 . clauset a , newman me , moore c. finding community structure in very large networks . physical review e. 2004;70(6):066111 . yang y , ault t , pierce t , lattimer cw . improving text categorization methods for event tracking . in : proceedings of the 23rd annual international acm sigir conference on research and development in information retrieval . acm ; 2000 . p. 6572 . blei dm , ng ay , jordan mi . latent dirichlet allocation . 
journal of machine learning research . 2003;3(jan):9931022 . kaplan s , vakili k. the double - edged sword of recombination in breakthrough innovation . strategic management journal . 2015;36(10):14351457 . iacovacci j , wu z , bianconi g. mesoscopic structures reveal the network between the layers of multiplex datasets . arxiv preprint arxiv:150503824 . 2015;. de domenico m , sol - ribalta a , omodei e , gmez s , arenas a. ranking in interconnected multilayer networks reveals versatile nodes . nature communications . 2015;6 . archibugi d , pianta m. specialization and size of technological activities in industrial countries : the analysis of patent data . research policy . 1992;21(1):79 93 . available from : http://www.sciencedirect.com/science/article/pii/0048733392900283 . bloom n , schankerman m , reenen jv . . 2013 07;81(4):13471393 . available from : https://ideas.repec.org/a/ecm/emetrp/v81y2013i4p1347-1393.html . ziman j. technological innovation as an evolutionary process . cambridge university press ; 2003 . holland jh . signals and boundaries : building blocks for complex adaptive systems . mit press ; 2012 . nicosia v , mangioni g , carchiolo v , malgeri m. extending the definition of modularity to directed graphs with overlapping communities . journal of statistical mechanics : theory and experiment . 2009;2009(03):p03024 . decelle a , krzakala f , moore c , zdeborov l. asymptotic analysis of the stochastic block model for modular networks and its algorithmic applications . physical review e. 2011;84(6):066106 . valles - catala t , massucci fa , guimera r , sales - pardo m. multilayer stochastic block models reveal the multilayer structure of complex networks . physical review x. 2016;6(1):011036 . arxiv e - prints . 2016 jun;. akcigit u , kerr wr , nicholas t. the mechanics of endogenous innovation and growth : evidence from historical us patents . citeseer ; 2013 . rgibeau p , rockett k. innovation cycles and learning at the patent office : does the early patent get the delay ? journal of industrial economics . available from : http://econpapers.repec.org/repec:bla:jindec:v:58:y:2010:i:2:p:222-246 . a utility patent at the uspto is a document providing intellectual property and protection of an invention . it excludes others to making , using , or selling the invention the same invention in the united states in exchange for a disclosure of the patent content . the protection is granted for 20 years since 1995 ( it was 17 years before that from 1860 ) starting from the year the patent application was filled , but can be interrupted before if its owner fails to pay the maintenance fees due after 3.5 , 7.5 and 11.5 years . utility patents are by far the most numerous , with more than 90% of the total universe of uspto patents . according to the title 35 of the united states codes ( 35 usc ) section 101 : _ _ `` whoever invents or discovers any new and useful process , machine , manufacture , or composition of matter , or any new and useful improvement thereof , may obtain a patent therefor , subject to the conditions and requirements of this title . '' _ _ in practice however , other types of invention including algorithms can also be patented . 
the two following sections of the 35 usc defined the condition an invention must meet to be protected by the uspto : ( i ) novelty : the claimed invention can not be already patented or described in a previous publication ( 35 usc section 102 ) ; ( ii ) obviousness : _ `` differences between the claimed invention and the prior art must not be such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains''_. ( 35 usc section 103 ) . after review from the uspto experts , an application satisfying these requirements will be accepted and a patent granted . the average time lag for such a review is on average a little more than 2 years since 1976 , with some patents being granted after much more than two years . [ [ sample - restriction ] ] sample restriction + + + + + + + + + + + + + + + + + + as explained briefly before , we consider every patent granted by the uspto between 1976 and 2013 . for each patent , we gather information on the year of application , the year the patent was granted , the name of the inventors , the name of the assignees and the technological fields in which the patent has been classified ( we get back to what these fields are below ) . we restrict attention to patents applied for before 2007 . the choice of the year 2007 is due to the truncation bias : we only want to use information on granted patents and we get rid of all patents that were rejected by the uspto . however , in order to date them as closely as possible to the date of invention , we use the application date as a reference . as a consequence , as we approach the end of the sample , we only observe a fraction of the patents which have been granted by 2013 . looking at the distribution of time lag between application and grant in the past and assuming that this distribution is complete in time , we can consider that data prior to 2007 are almost complete and that data for 2007 are complete up to 90% . raw version of uspto redbook with abstracts are available for years 1976 - 2014 starting from bulk download page at https://bulkdata.uspto.gov/. a script first automatically downloads files . before being automatically processed , a few error in files ( corresponding to missing end of records probably due to line dropping during the concatenation of weekly files ) had to be corrected manually . files are then processed with the following filters transforming different format and xml schemes into a uniform dictionary data structure : * ` dat ` files ( 1976 - 2000 ) : handmade parser * ` xml ` files ( 2001 - 2012 ) : xml parser , used with different schemas definitions . everything is stored into a mongodb database , which latest dump is available at http://dx.doi.org/10.7910/dvn/bw3ack the source code for the full workflow is available at https://github.com/justeraimbault/patentsmining . a simplified shell wrapper is at ` models / fullpipe.sh ` . note that keywords co - occurrence estimation requires a memory amount in @xmath130 ( although optimized using dictionaries ) and the operation on the full database requires a consequent infrastructure . launch specifications are the following : [ [ setup ] ] setup + + + + + install the database and required packages . 
* having a running local mongod instance * mongo host , port , user and password to be configured in ` conf / parameters.csv ` * raw data import from gz file : use mongorestore -d redbook -c raw gzip $ file * specific python packages required : pymongo , python - igraph , nltk ( with resources punkt , averaged_perceptron_tagger , porter_test ) [ [ running ] ] running + + + + + + + the utility fullpipe.sh launches the successive stages of the processing pipe . [ [ options ] ] options + + + + + + + _ this configuration options can be changed in _ ` conf / parameters.csv ` * window size in years * beginning of first window * beginning of last window * number of parallel runs * ` kwlimit ` : total number of keywords @xmath131 * ` edge_th ` : @xmath32 pre - filtering for memory storage purposes * ` dispth ` : @xmath36 * ` ethunit ` : @xmath37 [ [ tasks ] ] tasks + + + + + the tasks to be done in order : keywords extraction , relevance estimation , network construction , semantic probas construction , are launched with the following options : 1 . ` keywords ` : extracts keywords 2 . ` kw - consolidation ` : consolidate keywords database ( techno disp measure ) 3 . ` raw - network ` : estimates relevance , constructs raw network and perform sensitivity analysis 4 . ` classification ` : classify and compute patent probability , keyword measures and patent measures ; here parameters @xmath132 can be changed in configuration file . [ [ classification - data ] ] classification data + + + + + + + + + + + + + + + + + + + the data resulting from the classification process with parameters used here is available as ` csv ` files at + http://37.187.242.99/files/public/classification_window5_kwlimit100000_dispth0.06_ethunit4.1e-05.zip . each files are named according to their content ( keywords , patent probabilities , patent measures ) and the corresponding time window . the format are the following : * keywords files : keyword ; community ; termhood times inverse document frequency ; technological concentration ; document frequency ; termhood ; degree ; weighted degree ; betweenness centrality ; closeness centrality ; eigenvector centrality * patents measures : patent i d ; total number of potential keywords ; number of classified keywords ; same topological measures as for keywords * patent probabilities : patent i d ; total number of potential keywords ; i d of the semantic class ; number of keywords in this class . probabilities have to be reconstructed by extracting all the lines corresponding to a patent and dividing each count by the total number of classified keywords . [ [ analysis ] ] analysis + + + + + + + + the results of classification has to be processed for analysis ( construction of sparse matrices for efficiency e.g. ) , following the steps : * from classification files to r variables with ` semantic / semanalfun.r ` * from csv technological classes to r - formatted sparse matrix with ` techno / preparedata.r ` * from csv citation file to citation network in r - formatted graph and adjacency sparse matrix with ` citation / constructnw.r ` analyses are done in ` semantic / semanalysis.r ` . the example of fig.1 in main text for a given year yielded the same qualitative behavior for all years , as shown in fig . [ fig : ext - sensitivity-1 ] , [ fig : ext - sensitivity-2 ] and [ fig : ext - sensitivity-3 ] here . we also show an other point of view over the pareto optimization , that is the third plot giving the values of normalized objectives as a function of @xmath36 . 
[ figure captions : number of communities as a function of @xmath32 , for each year ; pareto plots of number of communities and number of vertices , for each year ; normalized objective as a function of @xmath36 , for each year . ] we show in fig . [ fig : sensitivity - window3 - 1 ] , [ fig : sensitivity - window3 - 2 ] and [ fig : sensitivity - window3 - 3 ] the sensitivity plots used for the optimization of the semantic network construction , for a different time window with @xmath133 . the same qualitative behavior is observed ( with different quantitative values , as @xmath37 , for example , is typically expected to vary with the number of documents and the semantic regime , thus with window size ) , which confirms that the method is valid across different time windows . [ figure captions : number of communities as a function of @xmath32 , for each year ; pareto plots of number of communities and number of vertices , for each year ; normalized objective as a function of @xmath36 , for each year . ] we define @xmath134 as the filtration which corresponds to time @xmath15 . with this notation , @xmath135 simply means the likelihood of @xmath136 conditioned on the past . we consider @xmath137 , the mle of @xmath116 , where the corresponding log - likelihood of the model can be expressed up to constant terms as @xmath138 recalling that the @xmath109 are independent of each other and , conditioned on the past , follow bernoulli variables @xmath110 , the log - likelihood of the model can be expressed as @xmath139 in practice , the user can easily implement the formula ( [ loglik ] ) for any @xmath111 , and maximize it over a predefined grid to obtain @xmath137 . under some assumptions , it is possible to show the asymptotic normality of @xmath137 and to compute the asymptotic variance . for simplicity of exposition , we assume that we restrict to @xmath116 such that we have @xmath140 for any @xmath141 . the central limit theorem can be expressed as @xmath142 ( \widehat{\theta}^{(z)} - \theta^{(z)} ) \overset{\mathcal{l}}{\rightarrow} mn \big( 0 , \int ( p + \theta^{(z)} ) ( 1 - ( p + \theta^{(z)} ) ) d\pi^{(z)}(p) \big ) , where mn stands for a multinormal distribution and @xmath143 for the asymptotic limit distribution of the quantity @xmath144 . note that the variance term in ( [ variance ] ) is equal to an aggregate version of the fisher information matrix . the proof of such a statement is beyond the scope of this paper . on the basis of ( [ variance ] ) , we provide a variance estimator as @xmath145 where @xmath146 is such that the @xmath146th patent corresponds to the @xmath47th couple . this estimator was used to compute the standard deviation in table [ summary ] . the test statistic used is a mean difference test statistic between @xmath147 and @xmath148 , where the formal expression can be found in ( [ teststat ] ) . we assume independence between both quantities and thus , under the null hypothesis , we have that @xmath149 where @xmath150 can be estimated by @xmath151 . then , we obtain that @xmath152 where @xmath153 is the mean difference test statistic .
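since the text notes that the estimator can be obtained by maximizing ( [ loglik ] ) over a predefined grid , a minimal sketch of that procedure is given below . it is an illustration rather than the authors' implementation : it assumes observations that are bernoulli with success probability equal to a known baseline probability plus the shift parameter , and the grid bounds , the plug - in variance estimator and the two - sample z statistic are simplified stand - ins for the exact expressions in ( [ variance ] ) and ( [ teststat ] ) .

```python
# sketch of the grid-search mle described above (illustrative, not the authors' code).
# assumptions: y[i] ~ bernoulli(p[i] + theta) with known baselines p[i]; the grid
# bounds, the plug-in variance estimator and the two-sample z test are simplified
# versions of the formulas referenced in the text.
import numpy as np

def log_likelihood(theta, y, p):
    q = p + theta
    if np.any(q <= 0) or np.any(q >= 1):
        return -np.inf                      # outside the admissible range
    return np.sum(y * np.log(q) + (1 - y) * np.log(1 - q))

def fit_theta(y, p, grid=np.linspace(-0.5, 0.5, 2001)):
    ll = np.array([log_likelihood(t, y, p) for t in grid])
    theta_hat = grid[np.argmax(ll)]
    # empirical analogue of the asymptotic variance integral in the clt
    q = p + theta_hat
    se = np.sqrt(np.mean(q * (1 - q)) / len(y))
    return theta_hat, se

def mean_difference_z(theta1, se1, theta2, se2):
    # two-sample z statistic assuming independence of the two estimates
    return (theta1 - theta2) / np.sqrt(se1**2 + se2**2)
```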
in this paper , we extend some usual classification techniques by means of a large - scale data - mining and network approach . this new methodology , which is in particular designed to be suitable for big data , is used to construct an open consolidated database from raw data on 4 million patents taken from the us patent office from 1976 onward . to build the patent network , not only do we look at each patent title , but we also examine the full abstracts and extract the relevant keywords accordingly . we refer to this classification as the _ semantic approach _ , in contrast with the more common _ technological approach _ , which consists in considering the topology induced by the us patent office technological classes . moreover , we document that the two approaches yield highly different topological measures , and we find strong statistical evidence that they feature different models . this suggests that our method is a useful tool for extracting endogenous information .
pegaptanib was developed in the 1990s using the selex ( selective evolution of ligands by exponential enrichment ) technique . the result was a 29-nucleotide aptamer , with a chemically modified rna backbone that increases nuclease resistance and a 5 terminus that includes a 40kd polyethylene glycol moiety for prolonged tissue residence [ fig . 1].3 pegaptanib was shown to bind vegf 165 with high specificity ( kd = 200 pm ) and inhibited vegf 165 -induced responses , including cell proliferation in vitro and vascular permeability in vivo , while not affecting responses to vegf 121 . pegaptanib proved to be stable in human plasma for more than 18h , while in monkeys pegaptanib administered into the vitreous was detectable in the vitreous for four weeks after a single dose.3 in rodent models , vegf 164 ( the rodent equivalent of human vegf 165 ) acts as a potent inflammatory cytokine , mediating both ischemia - induced neovascularization and diabetes - induced breakdown of the blood - retinal barrier ( brb ) . in these experiments , intravitreous pegaptanib was shown to significantly reduce pathological neovascularization , while leaving physiological vascularization unimpaired6 and was also able to reverse diabetes - induced brb breakdown.7 moreover , vegf 165 proved to be dispensable for mediating vegfs role in protecting retinal neurons from ischemia - induced apoptosis.8 these data suggested that intravitreous pegaptanib could provide a safe and effective treatment against both ocular neovascularization and diabetes - induced retinal vascular damage . pivotal clinical trial data have demonstrated that pegaptanib is both effective and safe for the treatment of neovascular amd . these data were derived from two randomized , double - masked studies known jointly as the v.i.s.i.o.n . ( vegf inhibition study in ocular neovascularization ) trials.9,10 a total of 1186 subjects with any angiographic subtypes of neovascular amd were included . patients received intravitreous injections of 0.3 mg , 1 mg or 3 mg pegaptanib or sham injections every six weeks for 48 weeks . subjects with predominantly classic lesions could also have received photodynamic therapy with verteporfin ( pdt ; visudyne , novartis ) at investigator discretion . after one year , the 0.3 mg dose conferred a significant clinical benefit compared to sham treatment as measured by proportions of patients losing < 15 letters of visual acuity ( va ) ; compared with 55% ( 164/296 ) of patients receiving sham injections , 70% ( 206/294 ) of patients receiving 0.3 mg of pegaptanib met this primary endpoint ( p < 0.001 ) . in contrast to pdt , clinical benefit was seen irrespective of angiographic amd subtype , baseline vision or lesion size and led to the clinical approval of pegaptanib for the treatment of all angiographic subtypes of neovascular amd . the 1 mg and 3 mg doses showed no additional benefit beyond the 0.3 mg dose.9 treatment with 0.3 mg pegaptanib was also efficacious as determined by mean va change , proportions of patients gaining vision and likelihood of severe vision loss . in an extension of the v.i.s.i.o.n . study , patients in the pegaptanib arms were rerandomized to continue or discontinue therapy for 48 more weeks.10 compared to patients discontinuing pegaptanib or receiving usual care , those remaining on 0.3 mg pegaptanib received additional significant clinical benefit in the second year . 
further subgroup analyses suggested that pegaptanib treatment was especially effective in those patients who were treated early in the course of their disease.11 pegaptanib showed an excellent safety profile . all dosages were safe , with most adverse events attributable to the injection procedure rather than to the study drug itself . in the first year , serious adverse events occurred with < 1% of intravitreous injections9 and no new safety signals have been identified in patients receiving pegaptanib for two and three years.12,13 the frequencies of serious ocular adverse events for all three years are presented in table 1.12,13 in addition , no systemic safety signals have emerged over this period . these conclusions have also been confirmed in assessments of systemic parameters following intravitreous injection of 1 mg and 3 mg pegaptanib.14 safety and efficacy of pegaptanib were assessed in a randomized , sham - controlled , double - masked , phase 2 trial enrolling 172 diabetic subjects with dme affecting the center of the fovea . intravitreous injections were administered at baseline and every six weeks thereafter . at week 36 , 0.3 mg pegaptanib was significantly superior to sham injection , as measured by mean change in va ( + 4.7 letters vs. -0.4 letters , p = 0.04 ) , proportions of patients gaining 10 letters of va ( 34% vs.10% ; p = 0.003 ) , change in mean central retinal thickness ( 68 m decrease vs. 4 m increase ; p = 0.02 ) and proportions of patients requiring subsequent photocoagulation treatment ( 25% vs. 48% , p = 0.04).15 in addition , a retrospective subgroup analysis revealed that pegaptanib treatment led to the regression of baseline retinal neovascularization in eight of 13 patients with proliferative diabetic retinopathy ( pdr ) whereas no such regression occurred in three sham - treated eyes or in four untreated fellow eyes.16 early results from a small , randomized , open - label study suggest that adding pegaptanib to panretinal photocoagulation conferred significant clinical benefit in patients with pdr.17 vegf has been implicated in the pathophysiology of crvo.18 accordingly , in a trial that enrolled subjects with crvo of < 6 months duration,19 98 subjects were randomized ( 1:1:1 ) to receive intravitreous pegaptanib ( 0.3 mg or 1 mg ) or sham injections every six weeks and panretinal photocoagulation if needed . at week 30 , treatment with 0.3 mg pegaptanib was superior in terms of mean change in va , proportions of patients losing 15 letters from baseline , proportions with a final va of 35 letters and reduction in center point and central subfield thickness.19,20 encouraging findings have been reported in small case series investigating the use of pegaptanib for the treatment of neovascular glaucoma,21 retinopathy of prematurity22 and familial exudative vitreoretinopathy.23 in addition , given its positive safety profile , now validated over three years in clinical trials and two and a half years of postmarketing experience , pegaptanib is being studied as a maintenance anti - vegf inhibitor following induction with nonselective anti - vegf agents such as ranibizumab or bevacizumab , which bind all vegf isoforms24,25 and appear to be associated with an increased , albeit small , risk of stroke.26 pegaptanib is both safe and clinically effective for the treatment of all angiographic subtypes of neovascular amd . early , well - controlled trials further suggest that pegaptanib may provide therapeutic benefit for patients with dme , pdr and rvo . the roles that pegaptanib will ultimately play as part of the ophthalmologists armamentarium remain to be established . the recent results with ranibizumab demonstrating the potential for significant vision gains in amd27,28 have been impressive , but issues of safety remain to be definitively resolved;26 combinatorial regimens may ultimately prove to be most effective in balancing safety with efficacy.24 similarly , more established approaches , such as photodynamic therapy with verteporfin , may provide greater clinical benefit when combined with anti - vegf therapy,29 so that there is likely to be considerable space for empiricism in determining the best approach for a given patient . nonetheless , the overall trend is highly positive , with the anti - vegf agents affording many more options than were available only a few years ago . such successes highlight the importance of vegf in the pathogenesis of ocular vascular disorders and support the use of anti - vegf agents as foundation therapy in patients with these conditions .
pegaptanib sodium ( macugentm ) is a selective rna aptamer that inhibits vascular endothelial growth factor ( vegf ) 165 , the vegf isoform primarily responsible for pathologic ocular neovascularization and vascular permeability , while sparing the physiological isoform vegf 121 . after more than 10 years in development and preclinical study , pegaptanib was shown in clinical trials to be effective in treating choroidal neovascularization associated with age - related macular degeneration . its excellent ocular and systemic safety profile has also been confirmed in patients receiving up to three years of therapy . early , well - controlled studies further suggest that pegaptanib may provide therapeutic benefit for patients with diabetic macular edema , proliferative diabetic retinopathy and retinal vein occlusion . notably , pegaptanib was the first available aptamer approved for therapeutic use in humans and the first vegf inhibitor available for the treatment of ocular vascular diseases .
we selected a prospective cohort from the hong kong diabetes registry enrolled between 1 december 1996 and 8 january 2005 because drug dispensary data became fully computerized and available for analysis purposes in 1996 . a detailed description of the hong kong diabetes registry is available elsewhere ( 1113 ) . briefly , the registry was established at the prince of wales hospital , the teaching hospital of the chinese university of hong kong , which serves a population of > 1.2 million . the referral sources of the cohort included general practitioners , community clinics , other specialty clinics , the prince of wales hospital , and other hospitals . enrolled patients with hospital admissions within 68 weeks prior to assessment accounted for < 10% of all referrals . a 4-h assessment of complications and risk factors was performed on an outpatient basis , modified from the european diabcare protocol ( 14 ) . once a patient had undergone this comprehensive assessment , he / she was considered to have entered this study cohort and would be followed until the time of death . ethical approval was obtained from the chinese university of hong kong clinical research ethics committee . this study adhered to the declaration of helsinki , and written informed consent was obtained from all patients at the time of assessment , for research purposes . by 2005 , 7,387 diabetic patients were enrolled in the registry since december 1996 . we sequentially excluded 1 ) 328 patients with type 1 diabetes or missing data on types of diabetes ; 2 ) 45 patients with non - chinese or unknown nationality ; 3 ) 175 patients with a known history of cancer or receiving cancer treatment at enrollment ; 4 ) 736 patients with missing values on any variables used in the analysis ( see table 1 for a list of variables ) ; and 5 ) 3,445 patients who used metformin during 2.5 years before enrollment . the cutoff point of 2.5 years was chosen because any duration longer than that did not lead to any noticeable changes in the hazard ratios ( hrs ) and 95% cis of metformin use for cancer ( supplementary table 1 ) . clinical and biochemical characteristics of the study cohort stratified according to occurrence of cancer during follow - up period data are median ( 25th to 75th percentile ) or n ( % ) . derived from fisher exact test . from enrollment to the earliest date of cancer , death , or censoring . patients attended the center after an 8-h fast and underwent a 4-h structured clinical assessment that included laboratory investigations . a sterile , random - spot urinary sample was collected to measure albumin - to - creatinine ratio ( acr ) . in this study , albuminuria was defined as an acr 2.5 mg / mmol in men and 3.5 mg / mmol in women . the abbreviated modification of diet in renal disease study formula recalibrated for chinese subjects ( 15 ) was used to estimate glomerular filtration rate ( gfr ) ( expressed in ml / min per 1.73 m ) : estimated gfr = 186 ( scr 0.011 ) ( age ) ( 0.742 if female ) 1.233 , where scr is serum creatinine expressed as mol / l ( original mg / dl converted to mol / l ) , and 1.233 is the adjusting coefficient for chinese subjects . total cholesterol , triglycerides , and hdl cholesterol were measured by enzymatic methods on a hitachi 911 automated analyzer ( boehringer mannheim , mannheim , germany ) using reagent kits supplied by the manufacturer of the analyzer , whereas ldl cholesterol was calculated using the friedewald equation ( 16 ) . 
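the abbreviated mdrd formula quoted above appears to have lost its exponents during text extraction . the sketch below applies the standard abbreviated mdrd exponents ( -1.154 for creatinine and -0.203 for age ) together with the conversion factor 0.011 and the chinese recalibration coefficient 1.233 given in the text ; the exponents are an assumption on our part and are not taken from this paper .

```python
# hedged sketch of the recalibrated abbreviated mdrd formula quoted above.
# the extracted text dropped the exponents; the values -1.154 (creatinine) and
# -0.203 (age) used here are the standard abbreviated mdrd exponents and are an
# assumption, not taken from this paper.
def estimated_gfr(scr_umol_per_l, age_years, female, chinese_coefficient=1.233):
    """estimated gfr in ml/min per 1.73 m^2."""
    scr_mg_per_dl = scr_umol_per_l * 0.011        # unit conversion given in the text
    gfr = 186.0 * scr_mg_per_dl ** -1.154 * age_years ** -0.203
    if female:
        gfr *= 0.742
    return gfr * chinese_coefficient

# example: a 56-year-old woman with serum creatinine of 80 umol/l
# print(round(estimated_gfr(80, 56, female=True), 1))
```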
drug usage data were extracted from the hong kong hospital authority central computer system , which recorded all drug dispensary data in public hospitals , including the start dates and end dates for each of the drugs of interest . in hong kong , all medications are dispensed on site in both inpatient and outpatient settings . these databases were matched by a unique identification number , the hong kong identity card number , which is compulsory for all residents in hong kong . all medical admissions of the cohort from enrollment to 30 july 2005 were retrieved from the hong kong hospital authority central computer system , which recorded admissions to all public hospitals in hong kong . collectively , these hospitals provide 95% of the total hospital bed - days in hong kong ( 17 ) . additionally , mortality data from the hong kong death registry during the period were retrieved and cross - checked with hospital discharge status . hospital discharge principle diagnoses , coded by the international statistical classification of diseases and related health problems 9th revision ( icd-9 ) , were used to identify cancer events . the outcome measure of this study was incident cancer ( fatal or nonfatal : codes 140208 ) during the follow - up period . we used biological interactions to test whether metformin use was associated with a greater cancer risk reduction in patients with low hdl cholesterol than in those with normal or high hdl cholesterol . the statistical analysis system ( release 9.10 ) was used to perform the statistical analysis ( sas institute , cary , nc ) , unless otherwise specified . follow - up time was calculated as the period in years from the first enrollment since 1 december 1996 to the date of the first cancer event , death , or censoring , whichever came first . cox proportional hazard regression was used to obtain the hrs and 95% cis of the variables of interest . we first plotted the full - range association of hdl cholesterol and cancer and further refined cutoff points of hdl cholesterol for cancer risk in the cohort without prevalent metformin users , using restricted cubic spline cox models ( 11 ) . then , we examined the biological interaction for cancer risk between low hdl cholesterol and nonuse of metformin using three measures : 1 ) relative excess risk caused by interaction ( reri ) ; 2 ) attributable proportion ( ap ) caused by interaction ; and 3 ) the synergy index ( s ) ( 18 ) . a detailed calculation method of additive interaction , including the definition of three indicator variables , an sas program , and a calculator in microsoft excel ( www.epinet.se ) , was described by the authors . the reri is the excess risk attributed to interaction relative to the risk without exposure . ap refers to the attributable proportion of disease , which is caused by the interaction in subjects with both exposures . s is the excess risk from both exposures when there is biological interaction relative to the risk from both exposures without interaction . a simulation study showed that reri performed best and ap performed fairly well , but s was problematic in the measure of additivity in the proportional hazard model ( 19 ) . the current study refined the criteria as either a statistically significant reri > 0 or ap > 0 to indicate biological interactions . 
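the three additive - interaction measures referenced above ( reri , ap and s ) have standard closed forms in terms of the relative risks or hazard ratios of the singly and doubly exposed groups . the sketch below uses those standard formulas ; the paper cites the calculation method rather than printing the formulas , so this reconstruction is based on the cited methodology and is not the authors' sas program .

```python
# minimal sketch of the additive-interaction measures described above, using the
# standard formulas: reri = rr11 - rr10 - rr01 + 1, ap = reri / rr11,
# s = (rr11 - 1) / ((rr10 - 1) + (rr01 - 1)). the paper references these measures
# without printing the formulas, so this is an assumed reconstruction.
def additive_interaction(rr10, rr01, rr11):
    """rr10: hr for exposure a only; rr01: hr for exposure b only;
    rr11: hr for both exposures, all relative to the doubly unexposed group."""
    reri = rr11 - rr10 - rr01 + 1.0
    ap = reri / rr11
    s = (rr11 - 1.0) / ((rr10 - 1.0) + (rr01 - 1.0))
    return {"reri": reri, "ap": ap, "s": s}

# example with hypothetical hazard ratios (not the paper's estimates)
# print(additive_interaction(rr10=1.8, rr01=1.5, rr11=4.0))
```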
the following two - step adjustment scheme was used in these analyses : 1 ) only adjusting for ldl cholesterol related cancer risk indicators ( ldl cholesterol 3.80 mmol / l and ldl cholesterol < 2.80 mmol / l plus albuminuria ) ( 11,12 ) , triglycerides ( 2 ) , and high hdl cholesterol , where appropriate ( 2 ) , and 2 ) further adjusting for age , sex , employment status , smoking status , alcohol intake , duration of diabetes , bmi ( 27.6 and < 24.0 kg / m ) ( 2 ) , systolic blood pressure ( sbp ) , and a1c ( 20 ) at enrollment and use of statins ( 13 ) , fibrates , other lipid - lowering drugs , ace inhibitors / angiotensin ii receptor blockers ( arbs ) ( 13 ) , and insulin ( 20 ) during follow - up . use of drugs during follow - up was defined as use of the drugs from enrollment to cancer , death , or censoring date , whichever came first . by definition , the use of any drugs after the first cancer event was coded as nonuse of these drugs , and any drug users had been given at least one prescription of the drug during follow - up . the total metformin dosage divided by the total number of days during which metformin was prescribed was used as daily metformin dosage . we also used propensity score to adjust for the likelihood of initiation of metformin during the follow - up period ( 21 ) . the former was obtained using a logistic regression procedure that includes the following independent variables : age ; sex ; bmi ; ldl cholesterol ; hdl cholesterol ; triglycerides ; tobacco and alcohol intake ; a1c ; sbp ; ln ( acr + 1 ) ; estimated gfr ; peripheral arterial disease ; retinopathy ; sensory neuropathy ; and history of cardiovascular disease ( coronary heart disease , myocardial infarction , and stroke ) at baseline ( c statistics = 0.73 ) . we then used stratified cox models on deciles of the score to adjust for the likelihood of metformin use . sensitivity analyses were performed to address 1 ) the impacts of undiagnosed cancer by limiting the analysis to patients who were followed for 2.5 years ( n = 2170 ) ; 2 ) the impact of incomplete exclusion of patients who used metformin during 2.5 years before enrollment by limiting the analysis to patients who were enrolled on or after 1 july 1998 ( n = 1707 ) ; 3 ) the impact of prevalent bias by reinclusion of 3,445 patients who used metformin during 2.5 years before enrollment ; and 4 ) inclusion of subjects with missing values in univariable analysis and without adjusting for the propensity score to maximize the valid sample size ( n of the valid sample size = 2,996 and n of the sample size with missing values in hdl cholesterol = 53 [ i.e. , 1.74% missing - value rate ] ) .
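as a hedged illustration of the two - step adjustment scheme described above ( the original analysis was run in sas release 9.10 ) , the sketch below fits a logistic propensity model for metformin initiation and then a cox model stratified on deciles of the propensity score . the column names and the scikit - learn / lifelines calls are illustrative assumptions , not the authors' code .

```python
# illustrative python sketch of the two-step scheme described above (the paper used
# sas release 9.10): (1) logistic propensity model for starting metformin,
# (2) cox proportional hazards model stratified on propensity-score deciles.
# column names are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from lifelines import CoxPHFitter

def propensity_stratified_cox(df, baseline_covariates, cox_covariates):
    # step 1: propensity of metformin use from baseline covariates
    logit = LogisticRegression(max_iter=1000)
    logit.fit(df[baseline_covariates], df["metformin_use"])
    df = df.copy()
    df["pscore"] = logit.predict_proba(df[baseline_covariates])[:, 1]
    df["pscore_decile"] = pd.qcut(df["pscore"], 10, labels=False, duplicates="drop")

    # step 2: cox model stratified on the propensity-score deciles
    cph = CoxPHFitter()
    cols = cox_covariates + ["metformin_use", "followup_years", "cancer", "pscore_decile"]
    cph.fit(df[cols], duration_col="followup_years", event_col="cancer",
            strata=["pscore_decile"])
    return cph  # cph.summary holds hazard ratios and 95% cis
```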
the median age of the cohort was 56 years ( 25th to 75th percentiles [ interquartile range { iqr } 4567 ] ) at enrollment . during 13,808 person - years of follow - up ( 5.51 years [ 3.087.39 ] ) , 129 patients developed cancer . in the cohort , 16.3% ( n = 433 ) of patients had low hdl cholesterol < 1.0 mmol / l and 46.7% ( n = 1,243 ) had hdl cholesterol 1.30 patients with low hdl cholesterol were more likely to use insulin , develop cancer , and die prematurely . patients who developed cancer were older , more likely to use tobacco and alcohol , and had longer disease duration . they had high ldl cholesterol , low hdl cholesterol , and albuminuria and were more likely to have premature death than those free of cancer .
patients who developed cancer were less likely to use statins and metformin during the follow - up period than patients without cancer ( table 1 ) . compared with patients with hdl cholesterol 1.0 but < 1.3 mmol / l , patients with hdl cholesterol < 1.0 mmol / l ( hr 2.22 [ 95% ci 1.383.58 ] ) and those with hdl cholesterol 1.3 mmol / l ( 1.61 [ 1.052.46 ] ) had increased cancer risk in univariable analysis . after adjusting for other covariates ( supplementary fig . 1 ) , hdl cholesterol < 1.0 mmol / l for cancer remained significant ( 2.41 [ 1.463.96 ] ) but not hdl cholesterol 1.3 mmol / l ( p = 0.1197 ) . additional subgroup univariable and multivariable analyses indicate that low hdl cholesterol was associated with increased cancer risk only among those who did not use metformin but not among those who did ( power = 0.37 ) ( table 2 ) . hrs of different combinations of low hdl cholesterol and metformin use for cancer risk in type 2 diabetes * adjusted for ldl cholesterol related risk indicators ( ldl cholesterol 3.8 mmol / l and ldl cholesterol < 2.8 mmol / l plus albuminuria ) , hdl cholesterol 1.30 mmol / l ( not for models 7 and 8) , and the nonlinear association of triglycerides with cancer . further adjusted for age , sex , employment status , smoking status , alcohol intake , duration of diabetes , bmi ( 27.6 or < 24.0 kg / m ) , a1c , and sbp at enrollment and use of statins , fibrates , other lipid - lowering drugs , ace inhibitors / arbs , and insulin during follow - up . stratified cox model analyses on deciles of the propensity score of metformin use were included in models 512 to control for the likelihood of starting metformin therapy during follow - up . use of metformin was associated with a decreased risk of cancer in a dose - response manner . after adjusting for covariates , patients with hdl cholesterol < 1.0 mmol / l and who were not treated with metformin had a 5.8-fold risk of cancer compared with the referent group , who had hdl cholesterol 1.0 and used metformin . mmol / l but who were not treated with metformin also had higher cancer risk than the referent group . however , the cancer risk associated with hdl cholesterol < 1.0 mmol / l was rendered nonsignificant among those who used metformin ( table 2 and supplementary fig . 2 ) . there was a significant interaction between low hdl cholesterol and nonuse of metformin for cancer risk , after adjusting for covariates ( ap 0.44 [ 95% ci 0.110.78 ] ) ( table 3 ) . measures for estimation of biological interaction between low hdl cholesterol and nonuse of metformin for the risk of cancer in type 2 diabetes * adjusted for ldl cholesterol related risk indicators ( ldl cholesterol 3.8 mmol / l and ldl cholesterol < 2.8 mmol / l plus albuminuria ) , hdl cholesterol 1.30 mmol / l , and the nonlinear association of triglyceride with cancer . further adjusted for age , sex , employment status , smoking status , alcohol intake , duration of diabetes , bmi ( 27.6 or < 24.0 kg / m ) , a1c , and systolic blood pressure at enrollment and use of statins , fibrates , other lipid - lowering drugs , ace inhibitors / arbs , and insulin during follow - up . stratified cox model analyses on deciles of the propensity score of use of metformin were used to control for likelihood of starting metformin therapy during follow - up . 
consistently , the copresence of hdl cholesterol < 1.0 mmol / l and nonuse of metformin was associated with an increased risk of cancer at sites other than the digestive organs and peritoneum and , to a lesser degree , cancers of the digestive organs and peritoneum . copresence of both factors also was associated with an increased risk of fatal cancer and , to a lesser degree , nonfatal cancer ( table 4 ) . hrs of the copresence of hdl cholesterol < 1.0 mmol / l and nonuse of metformin during follow - up versus all other groups for site - specific cancers and fatal and nonfatal cancers * univariable cox models with stratification on deciles of the propensity score of use of metformin during follow - up were used to obtain the hrs . classification was based on the icd-9 ( there are overlaps among site - specific cancers ) . 46 nonfatal cancer events developed before fatal cancer . the series of sensitivity analyses showed a consistent trend toward an interactive effect of nonuse of metformin and hdl cholesterol < 1.0 mmol / l on the risk of cancer , although not all interactions in these sensitivity analyses reached statistical significance ( supplementary tables 2 and 3 ) . in this study , we observed that hdl cholesterol < 1.0 mmol / l and nonuse of metformin was associated with a 5.8-fold cancer risk compared with metformin users with hdl cholesterol 1.0 the significant additive interaction indicates that the increased cancer risk as a result of a combination of nonuse of metformin and hdl cholesterol < 1.0 mmol / l was more than the addition of the risks attributed to the presence of either nonuse of metformin or low hdl cholesterol alone .
in other words , the significant interaction suggests that the use of metformin may confer an extra cancer benefit in type 2 diabetic patients with low hdl cholesterol . although there are ongoing debates about the associations between insulin usage and cancer in diabetes , epidemiological studies have consistently found that the use of metformin is associated with reduced cancer risk . among these studies , libby et al . ( 9 ) reported that metformin use was associated with a 54% ( 95% ci 4760 ) lower crude incidence and a 37% ( 2547 ) lower adjusted incidence of cancer than metformin nonusers over a period of 10 years . in support of these findings , we also found a 50% lower adjusted cancer risk among metformin users with hdl cholesterol 1.0 several lines of evidence support the pivotal role of ampk , which can be triggered by a large number of upstream signals , in maintaining energy homeostasis by providing a balance between energy expenditure through lipolysis and energy storage through protein and glycogen synthesis . activation of ampk by the tumor suppressor , lkb1 , promotes glucose uptake , increases fatty acid oxidation , and reduces protein and lipid synthesis . metformin is known to activate the ampk pathway , possibly through the activation of the lkb1 suppressor gene ( 22 ) . on the other hand , hyperglycemia can downregulate apoa - i gene transcription , which is the major lipoprotein component of hdl lipid particles ( 23 ) . in this regard , apoa - i has been shown to stimulate phosphorylation of ampk and acc ( 5 ) . more recently , kimura et al . ( 24 ) reported that hdl can activate ampk through binding to both sphingosine 1-phosphate receptors / gi proteins and scavenger receptor class b type i ( sr - bi)/protein pdzk1 , with lkb1 being involved in the sr - bi signaling . given the close relationship between hdl cholesterol and the ampk pathway , the interactive effects between metformin use and hdl cholesterol on cancer risk is thus plausible . 2 ) hospital principle discharge diagnosis was used to retrieve cancer events in the cohort , and this approach may have missed a small number of cancer events . 3 ) the use of drug dispensary data are an indirect method and may overestimate exposure because drug acquisition is only a surrogate marker for actual drug consumption . although our definition of drug use should not introduce major bias ( 25 ) , some unmeasured confounding factors may exist . 4 ) the sample size of the study was not large enough to address whether there were sex - specific cutoff points of hdl cholesterol for the risk of cancer . 5 ) there were insufficient numbers of patients / events to explore the relationships between hdl cholesterol status , metformin exposure , and risk of specific cancers . 6 ) reri did not reach statistical significance . however , reris were significant in sensitivity analyses 3 and 4 with larger sample sizes , suggesting that the marginally significant reri in the analysis is possibly attributed to insufficient power . 7 ) these findings were only derived from a chinese cohort and need to be replicated in other ethnic populations . in conclusion , the use of metformin might confer stronger benefits in reducing cancer risk in patients with hdl cholesterol < 1.0 mmol / l . 
although low hdl cholesterol is not an indication for metformin usage , if our findings can be independently replicated , patients with low hdl cholesterol , with or without type 2 diabetes , might be candidate subjects for clinical trials that formally test the anticancer effects of metformin or of agents that modulate the apoa - i / lkb1 / ampk pathway .
objective : the amp - activated protein kinase ( ampk ) pathway is a master regulator in energy metabolism and may be related to cancer . in type 2 diabetes , low hdl cholesterol predicts cancer , whereas metformin usage is associated with reduced cancer risk . both metformin and apolipoprotein a1 activate the ampk signaling pathway . we hypothesize that the anticancer effects of metformin may be particularly evident in type 2 diabetic patients with low hdl cholesterol . research design and methods : in a consecutive cohort of 2,658 chinese type 2 diabetic patients enrolled in the study between 1996 and 2005 , who were free of cancer and not using metformin at enrollment or during 2.5 years before enrollment and who were followed until 2005 , we measured biological interactions for cancer risk using relative excess risk as a result of interaction ( reri ) and attributable proportion ( ap ) as a result of interaction . a statistically significant reri > 0 or ap > 0 indicates biological interaction . results : during 13,808 person - years of follow - up ( median 5.51 years ) , 129 patients developed cancer . hdl cholesterol < 1.0 mmol / l was associated with increased cancer risk among those who did not use metformin , but the association was not significant among those who did . use of metformin was associated with reduced cancer risk in patients with hdl cholesterol < 1.0 mmol / l and , to a lesser extent , in patients with hdl cholesterol ≥ 1.0 mmol / l . hdl cholesterol < 1.0 mmol / l plus nonuse of metformin was associated with an adjusted hazard ratio of 5.75 ( 95% ci 3.03 - 10.90 ) compared with hdl cholesterol ≥ 1.0 mmol / l plus use of metformin , with a significant interaction ( ap 0.44 [ 95% ci 0.11 - 0.78 ] ) . conclusions : the anticancer effect of metformin was most evident in type 2 diabetic patients with low hdl cholesterol .
gro j1744@xmath028 is a recently discovered accretion - powered pulsar which shows repetitive x - ray bursts ( @xcite ; @xcite ; @xcite ; and references therein ) . its unusual bursting behavior has been compared ( see below ) to that of the rapid burster ( mxb 1730@xmath0335 ) , which was discovered 20 years ago by lewin et al . analysis of the x - ray bursts from the rapid burster showed that it produced two distinct types of burst : type i , attributed to thermonuclear flashes on the surface of a neutron star ; and type ii , attributed to the release of gravitational potential energy due to spasmodic accretion . the mechanism responsible for the type ii bursts has not been fully understood , although it is almost certainly related to an accretion disk instability ( for reviews see lewin , van paradijs , & taam 1993,1995 ) . rapidly repetitive type ii bursts had previously been observed only from the rapid burster , so if the bursts from gro j1744@xmath028 are also of type ii as convincingly argued by lewin et al . ( 1996)then a comparison of these sources may constrain theories of the burst mechanism . the pulse period of gro j1744@xmath028 is 467 ms and the neutron star is in a 11.8 day binary orbit about a low - mass donor star ( @xcite ) . during the first 12 hours of bursting observed with batse on december 2 , 1995 , bursts occurred every 3 to 8 minutes , and for one three hour period the burst intervals clustered around @xmath4 s ( @xcite ) . subsequently the burst intervals became longer and more erratic and the burst rate settled at about 3040 per day ( corrected for earth occultation and live time ; @xcite ) . burst durations were initially @xmath1 2030 s , and settled down to @xmath1 510 s. unlike the type ii bursts from the rapid burster , no relationship between the burst fluence and the time to the next ( or previous ) burst has been reported28 shows a statistically significant but weak correlation between the burst fluence scaled to the persistent emission level and the time to the next burst ( kommers et al . ( @xcite ; @xcite ) . using the first observations of gro j1744@xmath028 with the _ rossi x - ray timing explorer _ ( _ rxte _ ) , swank ( 1996 ) noted that the bursts were followed by a characteristic `` dip '' in the persistent emission level which took a few minutes to recover . no such dips were seen before the bursts . subsequent _ rxte _ observations showed that the bursts were sometimes preceded by steadily increasing variability in the persistent emission , including `` micro '' and `` mini '' bursts ( @xcite ; @xcite ; @xcite ) . extensive reviews of the rapid burster and its complex behavior can be found in lewin , van paradijs , & taam ( 1993 , 1995 ) ; see also references therein . here we discuss the features of the type ii bursts from this source that are relevant to our comparison with gro j1744@xmath028 . the time intervals between type ii bursts from the rapid burster range from @xmath1 10 s to @xmath1 1 hr , with the shorter intervals being more common . burst durations range from @xmath5 s to @xmath6 s. the burst repetition pattern is that of a relaxation oscillator : the fluence of a type ii burst is roughly proportional to the time to the next burst ( @xcite ; lewin et al . persistent x - ray emission between the type ii bursts is observed following long ( duration @xmath7 s ) bursts . the persistent flux emerges gradually after high - fluence bursts and decreases prior to the next burst ( @xcite ; @xcite ; @xcite ) . 
these pre- and post - burst features are referred to as `` dips '' in the persistent emission . the spectrum of the persistent emission is relatively soft during the dip just after a burst . it then rapidly increases in hardness , remaining hard for @xmath1 12 minutes before gradually decreasing to again become very soft during the dip preceding the next burst ( @xcite ) . the 12 minute period of spectrally hard emission corresponds to a `` hump '' in the persistent emission light curve . lubin et al . ( 1992 ) found `` naked eye '' quasi - periodic oscillations ( qpo ) following 10 of 95 type ii bursts from the rapid burster observed with _ exosat _ in august 1985 . the oscillations occurred _ only _ during the spectrally hard humps ( which immediately followed the post - burst dips ) . the frequency of the oscillations ranged from 0.039 to 0.056 hz , with a period decrease of 3050% observed over the @xmath3 s lifetime of the oscillations . the fractional root - mean - square ( rms ) variation in the oscillations was 5 to 15% . in 8 of the 10 cases , the `` naked eye '' oscillations were accompanied by @xmath8 hz qpo with rms variations of 6 to 19% ( @xcite ) . several authors have already noted similarities between the bursts from gro j1744@xmath028 and the rapid burster . after the discovery of the bursts from gro j1744@xmath028 , kouveliotou et al . ( 1996 ) suggested that the release of gravitational potential energy following an accretion instability might be responsible for the bursts . lewin et al . ( 1996 ) made a detailed comparison between the two systems . noting that both are transient low - mass x - ray binaries in which the accretor is a neutron star , they concluded that the bursts from gro j1744@xmath028 must be type ii based on ( i ) the hardness of the burst spectra , ( ii ) the lack of spectral evolution during the bursts , ( iii ) the fact that the ratio of integrated energy in the persistent emission to that in the bursts was initially too small to allow for a thermonuclear burst mechanism , and ( iv ) the presence of dips in the persistent emission following bursts ( @xcite ) . subsequently , sturner & dermer ( 1996 ) reached the same conclusion . the possibility of thermonuclear burning in gro j1744@xmath028 has been discussed by bildsten & brown ( 1997 ) . in this paper , we present another similarity between the rapid burster and gro j1744@xmath028 . both sources show transient qpo during spectrally hard emission intervals following bursts . since january 18 , 1996 , _ rxte _ has performed numerous observations of gro j1744@xmath028 . the data discussed here were taken by the pca instrument and are publicly available . between january 18 and april 26 , 1996 , there were 94 main bursts observed ( @xcite ) . to refer to these bursts individually , we number them sequentially from 1 to 94 . the light curves of the bursts reveal a rich phenomenology in the burst profiles . figure [ figlc ] ( a ) shows the `` dip '' in persistent emission which follows some bursts , as first noted by swank ( 1996 ) . a broad `` shoulder '' ( or `` plateau '' ) of emission above the mean pre - burst level immediately follows some bursts , as shown in figure [ figlc ] ( b ) . the shoulder occurs _ before _ the dip whenever both features are present . some bursts show large - amplitude oscillations with frequencies @xmath9 hz during their shoulder , as shown in figure [ figlc ] ( c ) ( @xcite ) . 
oscillations can be seen during the shoulders of the light curves immediately following at least 10 of the 94 bursts . these ten bursts were numbers 8 , 12 , 14 , 18 , 43 , 53 , 65 , 68 , 77 , and 94 . typically 5 to 15 cycles of the qpo are apparent , with as many as @xmath10 cycles seen in the case of burst 65 . the low frequency of the oscillations and the small number of cycles make it difficult to use fourier power spectra to study these qpo . we have instead found the mean periods of the oscillations by estimating the time intervals between successive maxima in the count rates . the mean frequency of the oscillations over the ensemble of bursts is 0.38 @xmath11 0.04 hz . the mean frequency of the oscillations after individual bursts varies from 0.35 @xmath11 0.02 hz in burst 65 to 0.49 @xmath11 0.03 hz in burst 12 . ( the uncertainty in these figures represents the uncertainty in the mean frequency . ) the period of the qpo following a given burst typically wanders non - monotonically about a mean period by @xmath1 20 percent over the lifetime of the oscillations . using the @xmath12 statistic we exclude ( at the @xmath13 level or better in each of the 10 bursts ) the null hypothesis that the period between successive qpo pulses is constant . to identify any overall tendency towards increasing or decreasing qpo periods , we looked for bursts where the spearman rank - order correlation coefficient ( @xmath14 ) indicated that the time intervals between qpo pulses were correlated ( or anti - correlated ) with arrival time . in no case did we find a positive @xmath14 , which would have indicated an overall trend towards increasing periods . for bursts 18 , 53 , 65 ( see figure 2 ) , and 94 , we found an anti - correlation at the @xmath13 level or greater . in these bursts the qpo period tends to decrease by 10 to 20 percent over the lifetimes ( @xmath1 50 to 80 s ) of the oscillations . the fractional rms variations of the oscillations range from @xmath1 5 to 13% . using the counts to flux conversion @xmath15 erg @xmath16 s@xmath17 per count ( total pca band ; @xcite ) the rms variations in flux units range from @xmath18 erg @xmath16 s@xmath17 to @xmath19 erg @xmath16 s@xmath17 . the shoulders during which the post - burst oscillations occur are spectrally harder than the persistent emission . figure [ fighardness ] ( top panel ) shows a hardness ratio for burst 65 . the hardness is defined as the count rate ( averaged over 1.0 s ) in the 3 to 5.6 kev range divided by that in the 2 to 3 kev range . the middle panel shows the power spectrum as a function of time . higher power levels are shown with darker shades . the presence of the qpo is shown by a dark band at 0.4 hz lasting from the end of the burst until @xmath20 s after the peak . the bottom panel shows the burst intensity profile . the spectrum of the burst counts is clearly harder than that of the persistent emission , but even after the main part of the burst subsides the hardness ratios during the presence of the qpo ( 20 to 100 s after the burst ) exceed those of the pre - burst persistent emission . the qpo following burst 65 show a modulation envelope reminiscent of `` beating '' between oscillations at separate frequencies ; see the bottom panel of figure [ fighardness ] . the count rates shown in the inset have been smoothed with a boxcar average to highlight the envelope of the ( roughly ) 0.05 hz `` beats '' .
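the quantitative statements in this section are simple to reproduce in outline . the sketch below is our own illustration , not the authors ' pipeline : the peak times and light curve are synthetic stand - ins for the pca data , and the assumed timing uncertainty is arbitrary . it shows ( i ) the mean qpo frequency estimated from intervals between successive count - rate maxima , together with a chi - square test of the constant - period null hypothesis and a fractional rms , and ( ii ) the arithmetic behind the `` beating '' reading of the modulation envelope : equal - amplitude oscillations at 0.325 and 0.375 hz sum to a 0.35 hz carrier whose envelope repeats at the 0.05 hz difference frequency .

```python
import numpy as np

# (i) synthetic stand-ins: times of successive count-rate maxima (s) during a
# post-burst shoulder, and a 1 s binned light curve with a ~0.4 hz oscillation
rng = np.random.default_rng(0)
peak_times = np.array([20.1, 22.7, 25.2, 27.9, 30.3, 32.8, 35.2, 37.5])
t = np.arange(80.0)
rates = rng.normal(1200.0, 30.0, t.size) + 80.0 * np.sin(2 * np.pi * 0.4 * t)

intervals = np.diff(peak_times)                  # periods between successive maxima
mean_period = intervals.mean()
period_err = intervals.std(ddof=1) / np.sqrt(intervals.size)
mean_freq, freq_err = 1.0 / mean_period, period_err / mean_period**2
print(f"mean frequency: {mean_freq:.3f} +/- {freq_err:.3f} hz")

sigma_t = 0.2                                    # assumed timing error per maximum (s)
chi2 = np.sum((intervals - mean_period) ** 2) / sigma_t**2
print(f"chi2 = {chi2:.1f} for {intervals.size - 1} degrees of freedom")

frac_rms = rates.std() / rates.mean()            # fractional rms about the mean level
print(f"fractional rms: {100 * frac_rms:.1f}%")

# (ii) beating: two concurrent narrow-band oscillations at 0.325 and 0.375 hz
nu_mean, nu_beat = 0.35, 0.05
nu1, nu2 = nu_mean - nu_beat / 2.0, nu_mean + nu_beat / 2.0
print(nu1, nu2)                                  # 0.325 0.375
tt = np.linspace(0.0, 120.0, 4801)
two_tones = np.sin(2 * np.pi * nu1 * tt) + np.sin(2 * np.pi * nu2 * tt)
# identity: sin(a) + sin(b) = 2 sin((a+b)/2) cos((a-b)/2), i.e. a carrier at the
# mean frequency with an amplitude envelope at the difference frequency
carrier = 2 * np.sin(2 * np.pi * nu_mean * tt) * np.cos(np.pi * nu_beat * tt)
assert np.allclose(two_tones, carrier)
```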
if the modulation envelope seen after burst 65 can be attributed to simple beating , the two narrow - band oscillations must occur concurrently and have roughly comparable amplitudes . an estimate of the two frequencies involved can be obtained from the mean frequency @xmath21 and the beat frequency @xmath22 . the mean frequency for the @xmath10 cycles of qpo following this burst is 0.35 @xmath11 0.02 hz . the two frequencies which are beating must then be roughly @xmath23 hz and @xmath24 hz . the average frequency wanders by about 20% , so if beating is responsible for the apparent modulation envelope then the frequencies of the two narrow - band oscillations wander as well . the presence of a similar phenomenon following other bursts is difficult to determine because fewer cycles are available . over the course of the january to may 1996 _ rxte _ observations , the burst fluence , peak flux , and persistent emission level each decreased ( approximately linearly ) by a factor of @xmath1 4 to 5 ( @xcite ) . the average frequency and rms amplitude of the post - burst qpo show no significant ( @xmath25 ) correlations with any of these quantities . qpo with nearly the same frequency are seen in bursts for which the persistent emission level differs by a factor of @xmath8 . although our analysis focused on the december 1995 to may 1996 outburst of gro j1744@xmath028 , we note that post - burst oscillations were also detected following a burst in june 1996 . this burst occurred when the source temporarily resumed bursting activity at a lower level than the main december 1995 to may 1996 outburst ( @xcite ) . kommers et al . ( 1996 ) reported 0.4 hz oscillations during a @xmath10 s `` shoulder '' following the sixth of 7 bursts observed with the pca on june 4 , 1996 . the fractional rms amplitude of these oscillations was @xmath26 percent . a second large outburst from gro j1744@xmath028 has been in progress since december 2 , 1996 , but we have not yet analyzed these observations ( @xcite ; @xcite ) . the @xmath1 0.4 hz oscillations following some bursts from gro j1744@xmath028 are reminiscent of oscillations that have been seen following some type ii bursts from the rapid burster . this likeness between the two sources complements the comparisons made by lewin et al . ( 1996 ) . although it is not certain that the post - burst oscillations are the same phenomenon in both sources , the following similarities are worth consideration . ( i ) the post - burst oscillations follow bursts that do _ not _ show the profiles or spectral evolution characteristic of type i ( thermonuclear flash ) bursts . ( ii ) the oscillations occur during a period of spectrally hard emission . ( iii ) when an overall change in qpo period is observed , it is a decrease : 10 to 20% for gro j1744@xmath028 , and 30 to 50% for the rapid burster . ( iv ) the fractional rms amplitude of the oscillations is roughly 5 to 15% . there are some differences , however . ( i ) the frequency of the post - burst oscillations in gro j1744@xmath028 is roughly 0.4 hz while in the rapid burster , the post - burst oscillations have frequencies of 0.04 hz and are in some cases accompanied by 4 hz qpo . ( ii ) in gro j1744@xmath028 , the oscillations occur _ before _ the dip in the persistent emission ; but in the rapid burster , they appear _ after _ the dip ( @xcite ) . cannizzo ( 1996a , 1996b ) has shown that a simple numerical model of a thermal - viscous accretion disk instability can reproduce some features of the gro j1744@xmath028 bursts .
in his model , interplay between radial and vertical energy transport in the disk causes oscillations in the ratio of gas pressure to total pressure ( denoted by @xmath27 ) . these oscillations increase in amplitude and eventually become non - linear , leading to a lightman - eardley instability ( @xcite ) . for specific choices of the inner and outer disk radii ( @xmath28 cm , @xmath29 cm ) and the form of the viscous stress , integration of the time - dependent model reproduces the @xmath30 s recurrence times and the @xmath31 s durations of the bursts as well as the post - burst dips . cannizzo ( 1996b ) also points out that the increased variability _ before _ bursts may be related to the oscillations in @xmath27 that precede the instability . these oscillations may occur in a variety of modes , but in his idealized case appear to have frequencies of about 0.05 hz ( @xcite ) . it remains to be seen whether this model can explain the oscillations reported here . abramowicz , chen , & taam ( 1995 ) have shown that low - frequency oscillations in mass flow can arise in accretion disk - corona systems . in their model , the development of a lightman - eardley instability is moderated by energy dissipation in the coronal region , which lies above the thin disk . they propose that the strong @xmath32 hz oscillations seen in the rapid burster may arise from such a mechanism . the frequency of these qpo may be relatively insensitive to the accretion rate , since it is determined by several competing factors including the radius of the inner accretion disk . the spectrum of the qpo is expected to be hard whenever the qpo originate from the hot inner region ( abramowicz et al . 1995 ) . the common feature responsible for oscillatory behavior in these models is the presence of a mechanism which couples the vertical energy transport to the radial energy flux . in cannizzo s model , this mechanism also acts to produce the bursts . to make a quantitative comparison of these models with the characteristics of the post - burst oscillations in gro j1744@xmath028 and the rapid burster , one would have to know what qpo frequencies are expected based on the physical parameters that distinguish the two sources . the magnetic field strength and inner disk radius in particular should be quite different ( @xcite ; @xcite ; @xcite ) , which might account for the factor of @xmath33 difference in post - burst qpo frequency . detailed information of this kind has not yet been presented for the models of cannizzo ( 1996a , b ) or abramowicz et al . ( 1995 ) . the presence of transient post - burst qpo is another similarity between the bursts from gro j1744@xmath028 and the type ii bursts from the rapid burster . combined with the comparison of lewin et al . ( 1996 ) , this similarity is further evidence that the bursts from gro j1744@xmath028 are of type ii . if the same disk instability is responsible for the type ii bursts from each of these sources , then the post - burst oscillations provide another observational benchmark for models of the burst mechanism as well as the qpo . j. m. k. and d. w. f. acknowledge support from national science foundation graduate research fellowships during the preliminary phase of this research . j. m. k. acknowledges subsequent support from a nasa graduate student researchers program fellowship ngt8 - 52816 . r. r. was supported by the nasa graduate student researchers program under grant ngt-51368 . w. h. g. l.acknowledges support from nasa under grant nag5 - 2046 . j. v. 
p.acknowledges support from nasa under grant nag5 - 2755 . c. k.acknowledges support from nasa under grant nag5 - 2560 . we thank dr . ed morgan for helpful discussions and assistance with the _ rxte _ data archive at mit . abramowicz , m. a. , chen , x. , & taam , r. e. 1995 , , 452 , 379 bildsten , l. , & brown , e. f. 1997 , , in press cannizzo , j. k. 1996a , , 466 , l31 cannizzo , j. k. , 1996b , preprint finger , m. h. , koh , d. t. , nelson , r. w. , prince , t. a. , vaughan , b. a. , & wilson , r. b. 1996 , , 381 , 291 . fishman , g. j. , et al . 1996 , , no . 6290 giles , a. b. , & strohmayer , t. 1996 , , no . 6338 giles , a. b. , swank , j. h. , jahoda , k. , zhang , w. , strohmayer , t. , stark , m. h. , & morgan , e. h. 1996 , , 469 , l25 jahoda , k. , strohmayer , t. , & corbet , r. 1996 , , no . 6414 kommers , j. m. , rutledge , r. e. , fox , d. w. , lewin , w. h. g. , morgan , e. h. , kouveliotou , c. , & van paradijs , j. 1996 , , no . kommers , j. m. , et al . 1997 , in preparation kouveliotou , c. , van paradijs , j. , fishman , g. j. , briggs , m. s. , kommers , j. m. , harmon , b. a. , meegan , c. a. , & lewin , w. h. g. 1996 , , 379 , 799 . kouveliotou , c. , deal , k. , richardson , g. , briggs , m. , fishman , g. , & van paradijs , j. 1997 , , no . 6530 lewin , w. h. g. , et al . 1976 , , 207 , l95 lewin , w. h. g. , rutledge , r. e. , kommers , j. m. , van paradijs , j. , & kouveliotou , c. 1996 , , 462 , l39 lewin , w. h. g. , van paradijs , j. , & taam , r. e. 1993 , space sci . rev . , 62 , 223 lewin , w. h. g. , van paradijs , j. , & taam , r. e. 1995 , in x - ray binaries , eds . w. h. g. lewin , j. van paradijs , & e. p. j. van den heuvel ( cambridge university press ) , 175 lightman , a. p. , & eardley , d. m. 1974 , , 187 , l1 lubin , l. m. , lewin , w. h. g. , rutledge , r. e. , van paradijs , j. , van der klis , m. , & stella , l. 1992 , , 258 , 759 marshall , h. l. , ulmer , m. p. , hoffman , j. a. , doty , j. , & lewin , w. h. g. 1979 , , 227 , 555 stark , m. j. , jahoda , k. , swank , j. , & strohmayer , t. 1997 , , no . 6548 stella , l. , haberl , f. , lewin , w. h. g. , parmar , a. n. , van der klis , m. , & van paradijs , j. 1988a , , 327 , l13 stella , l. , haberl , f. , lewin , w. h. g. , parmar , a. n. , van paradijs , j. , & white , n. e. 1988b , , 324 , 379 strickman , m. s. , et al . 1996 , , 464 , l131 sturner , s. j. , & dermer , c. d. 1996 , , 465 , l31 swank , j. 1996 , , no . 6291 van paradijs , j. , cominsky , l. , & lewin , w. h. g. 1979 , , 189 , 387 .
the repetitive x - ray bursts from the accretion - powered pulsar gro j1744@xmath028 show similarities to the type ii x - ray bursts from the rapid burster . several authors ( notably lewin et al . ) have suggested that the bursts from gro j1744@xmath028 are type ii bursts ( which arise from the sudden release of gravitational potential energy ) . in this paper , we present another similarity between these sources . _ rossi x - ray timing explorer _ observations of gro j1744@xmath028 show that at least 10 out of 94 bursts are followed by quasi - periodic oscillations ( qpo ) with frequencies of @xmath1 0.4 hz . the period of the oscillations decreases over their @xmath280 s lifetime , and they occur during a spectrally hard `` shoulder '' ( or `` plateau '' ) which follows the burst . in one case the qpo show a modulation envelope which resembles simple beating between two narrow - band oscillations at @xmath1 0.325 and @xmath1 0.375 hz . using _ exosat _ observations , lubin et al . found qpo with frequencies of 0.039 to 0.056 hz following 10 out of 95 type ii bursts from the rapid burster . as in gro j1744@xmath028 the period of these oscillations decreased over their @xmath3 s lifetime , and they occurred only during spectrally hard `` humps '' in the persistent emission . even though the qpo frequencies differ by a factor of @xmath1 10 , we believe that this is further evidence that a similar accretion disk instability is responsible for the type ii bursts from these two sources .
in this article we study the asymptotic behavior of symmetric functions of representation theoretic origin , such as rational schur functions or characters of symplectic or orthogonal groups , etc . , as their number of variables tends to infinity . in order to simplify the exposition we stick to schur functions in the introduction where possible , but most of our results hold in greater generality . the rational schur function @xmath2 is a symmetric laurent polynomial in the variables @xmath3 , \dots , @xmath4 . schur functions are parameterized by @xmath5-tuples of integers @xmath6 ( we call such @xmath5-tuples _ signatures _ ; they form the set @xmath7 ) and are given by weyl 's character formula as @xmath2 = \det\left [ x_i^{\lambda_j + n - j } \right]_{i , j=1}^{n } \big/ \prod_{i < j}(x_i - x_j ) . our aim is to study the asymptotic behavior of the _ normalized _ symmetric polynomials @xmath9 and also @xmath10 for some @xmath11 . here @xmath12 is allowed to vary with @xmath5 , @xmath13 is any fixed number and @xmath14 are complex numbers , which may or may not vary together with @xmath5 , depending on the context . note that there are explicit expressions ( weyl 's dimension formulas ) for the denominators in these formulas , and , therefore , their asymptotic behavior is straightforward . the asymptotic analysis of these expressions is important because of various applications in representation theory , statistical mechanics and probability , including : * for any @xmath13 and any _ fixed _ @xmath15 such that @xmath16 , the convergence of @xmath17 to some limit and the identification of this limit can be put in a representation theoretic framework as the approximation of indecomposable characters of the infinite dimensional unitary group @xmath18 by normalized characters of the unitary groups @xmath19 ; the latter problem was first studied by kerov and vershik @xcite . * the convergence of @xmath20 for any @xmath13 and any _ fixed _ @xmath15 is similarly related to the _ quantization _ of characters of @xmath18 , see @xcite . * this asymptotic behavior can also be put in the context of random matrix theory as the study of the harish chandra - itzykson zuber integral @xmath21 where @xmath22 is a fixed hermitian matrix of finite rank and @xmath23 is an @xmath24 matrix changing in a regular way as @xmath25 . in this formulation the problem was thoroughly studied by guionnet and mada @xcite . * a normalized schur function can be interpreted as the expectation of a certain observable in the probabilistic model of uniformly random lozenge tilings of planar domains . the asymptotic analysis as @xmath25 with @xmath26 and fixed @xmath27s gives a way to prove the local convergence of random tilings to a distribution of random matrix origin , the gue corners process ( the name gue minors process is also used ) . an informal argument explaining that such convergence should hold was suggested earlier by okounkov and reshetikhin in @xcite . * when @xmath28 is a _ staircase young diagram _ with @xmath29 rows of lengths @xmath30 , the same quantity gives the expectation of a certain observable for the uniformly random configurations of the six vertex model with domain wall boundary conditions ( equivalently , alternating sign matrices ) . its asymptotic behavior as @xmath25 with @xmath26 and fixed @xmath27 gives a way to study the local limit of the six vertex model with domain wall boundary conditions near the boundary .
* for the same staircase @xmath28 the expression with @xmath31 and with schur polynomials replaced by the characters of the symplectic group gives the mean of the boundary - to - boundary current for the completely packed @xmath1 dense loop model , see @xcite . the asymptotics ( now with fixed @xmath32 , not depending on @xmath5 ) gives the limit behavior of this current , which is significant for the understanding of this model . in the present article we develop a new unified approach to study the asymptotics of normalized schur functions ( and also of more general symmetric functions like symplectic characters and polynomials corresponding to the root system @xmath33 ) , which gives a way to answer all of the above limit questions . there are three main ingredients in our method : 1 . we find simple contour integral representations for the normalized schur polynomials with @xmath34 , i.e. for @xmath35 , and also for more general symmetric functions of representation theoretic origin . 2 . we study the asymptotics of the above contour integrals using the _ steepest descent _ method . 3 . we find formulas expressing the multivariate normalized functions as @xmath36 determinants of expressions involving the univariate ones and , combining the asymptotics of these formulas with the asymptotics of the univariate case , compute the limits of the multivariate normalized functions . in the rest of the introduction we provide a more detailed description of our results . in section [ section_intro_method ] we briefly explain our methods . in sections [ section_intro_rt ] , [ section_intro_lozenge ] , [ section_intro_asm ] , [ section_intro_loop ] , [ section_intro_matrix ] we describe the applications of our method in asymptotic representation theory , probability and statistical mechanics . finally , in section [ section_intro_compare ] we compare our approach for studying the asymptotics of symmetric functions with other known methods . in subsequent papers we also apply the techniques developed here to the study of other classes of lozenge tilings @xcite and to the investigation of the asymptotic behavior of decompositions of tensor products of representations of classical groups into irreducible components @xcite . the main ingredient of our approach to the asymptotic analysis of symmetric functions is the following integral formula , which is proved in theorem [ theorem_integral_representation_schur_1 ] . let @xmath37 , and let @xmath3 , \dots , @xmath38 be complex numbers . denote @xmath39 with @xmath40 @xmath41s in the numerator and @xmath5 @xmath41s in the denominator . for any complex number @xmath42 other than @xmath43 and @xmath41 we have @xmath44 where the contour @xmath45 encloses all the singularities of the integrand . we also prove various generalizations of this formula : one can replace the @xmath41s by the geometric series @xmath46 ( theorem [ theorem_integral_representation_schur_q ] ) , schur functions can be replaced with characters of the symplectic group ( theorems [ theorem_symplectic_integral_q ] and [ theorem_symplectic_integral_1 ] ) or , more generally , with multivariate jacobi polynomials ( theorem [ theorem_jacobi_singlevar ] ) . in all these cases a normalized symmetric function is expressed as a contour integral whose integrand is a product of elementary factors . the only exception is the most general case of jacobi polynomials , where we have to use certain hypergeometric series . recently ( and independently of the present work ) a formula similar to ours for the characters of orthogonal groups @xmath47 was found in @xcite in the study of the mixing time of a certain random walk on @xmath47 .
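to make the structure of the integral formula concrete , the sketch below spells out our reading of its standard form ( simple poles at the shifted coordinates lambda_i + n - i and a prefactor ( n - 1 ) ! / ( x - 1 )^( n - 1 ) ) and verifies the corresponding residue sum against a direct evaluation of s_lambda ( x , 1 , ... , 1 ) / s_lambda ( 1 , ... , 1 ) for a small signature . treat it as an illustration of the structure rather than a quotation of the theorem ; all helper names are ours .

```python
import sympy as sp
from math import factorial, prod

def normalized_schur_direct(lam, xval):
    """s_lambda(x, 1, ..., 1) / s_lambda(1, ..., 1) via the determinant formula."""
    N = len(lam)
    xs = sp.symbols(f"x1:{N + 1}")
    num = sp.Matrix(N, N, lambda i, j: xs[i] ** (lam[j] + N - 1 - j)).det()
    den = sp.Mul(*[xs[i] - xs[j] for i in range(N) for j in range(i + 1, N)])
    s = sp.cancel(num / den)                      # the schur polynomial itself
    dim = s.subs({v: 1 for v in xs})              # s_lambda(1, ..., 1)
    return s.subs({xs[0]: xval, **{v: 1 for v in xs[1:]}}) / dim

def normalized_schur_residues(lam, xval):
    """(n-1)!/(x-1)^(n-1) times the sum of residues of
    x^z / prod_i (z - (lambda_i + n - i)) over all poles (our reading)."""
    N = len(lam)
    poles = [lam[i] + N - 1 - i for i in range(N)]   # lambda_i + n - i, i = 1..n
    total = sum(xval ** p / prod(p - r for r in poles if r != p) for p in poles)
    return factorial(N - 1) / (xval - 1) ** (N - 1) * total

lam, x0 = (3, 1, 1, 0), sp.Rational(7, 3)
assert sp.simplify(normalized_schur_direct(lam, x0)
                   - normalized_schur_residues(lam, x0)) == 0
```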
a close relative of our integral formula can also be found in section 3 of @xcite . using this formula we apply tools from complex analysis , mainly the method of steepest descent , to compute the limit behavior of these normalized symmetric functions . our main asymptotic results along these lines are summarized in propositions [ proposition_convergence_mildest ] , [ proposition_convergence_strongest ] , [ prop_convergence_gue_case ] for real @xmath42 and in propositions [ proposition_convergence_extended ] and [ prop_convergence_gue_extended ] for complex @xmath42 . the next important step is the formula expressing @xmath48 in terms of @xmath49 which is proved in theorem [ theorem_multivariate_schur_1 ] : we have @xmath50_{i , j=1}^{k } \left(\prod_{j=1}^k s_{\lambda}(x_j;n,1)\frac{(x_j-1)^{n-1}}{(n-1)!}\right) , where @xmath51 is the differential operator @xmath52 . this formula can again be generalized : the @xmath41s can be replaced with the geometric series @xmath46 ( theorem [ theorem_multivariate_schur_q ] ) , schur functions can be replaced with characters of the symplectic group ( theorems [ theorem_simplectic_multi_q ] , [ theorem_symp_multivar_1 ] ) or , more generally , with multivariate jacobi polynomials ( theorem [ theorem_jacobi_multi ] ) . in principle , formulas similar to this one can be found in the literature , see e.g. @xcite ( proposition 6.2 ) and @xcite . the advantage of our formula is its relatively simple form , but it is not straightforward that this formula is suitable for the @xmath25 limit . however , we are able to rewrite this formula in a different form ( see proposition [ proposition_multivariate_expansion ] ) , from which this limit transition is immediate . combining the limit formula with the asymptotic results for @xmath53 we get the full asymptotics for @xmath48 . as a side remark , since we deal with analytic functions and convergence in our formulas is always ( at least locally ) uniform , the differentiation in this formula does not introduce any problems . let @xmath19 denote the group of all @xmath24 unitary matrices . embed @xmath19 into @xmath54 as a subgroup acting on the space spanned by the first @xmath5 coordinate vectors and fixing the @xmath55st vector , and form the infinite dimensional unitary group @xmath18 as the inductive limit @xmath56 recall that a ( normalized ) _ character _ of a group @xmath57 is a continuous function @xmath58 , @xmath59 satisfying : 1 . @xmath60 is constant on conjugacy classes , i.e. @xmath61 , 2 . @xmath60 is positive definite , i.e. the @xmath36 matrix @xmath62 is hermitian non - negative definite for any @xmath63 , 3 . @xmath64 . an _ extreme character _ is an extreme point of the convex set of all characters . if @xmath57 is a compact group , then its extreme characters are the normalized matrix traces of irreducible representations . it is a known fact ( see e.g. the classical book of weyl @xcite ) that irreducible representations of the unitary group @xmath19 are parameterized by signatures , and the value of the trace of the representation parameterized by @xmath28 on a unitary matrix with eigenvalues @xmath65 is @xmath66 . using these facts and applying the result above to @xmath19 we conclude that the normalized characters of @xmath19 are the functions @xmath67 . for `` big '' groups such as @xmath18 the situation is more delicate . the study of characters of this group was initiated by voiculescu @xcite in 1976 in connection with _ finite factor representations _ of @xmath18 .
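before turning to @xmath18 , note that the three defining properties of a normalized character are easy to test numerically for @xmath19 : conjugation invariance and normalization hold by construction once the trace is evaluated on eigenvalues and divided by the dimension , while positive definiteness appears as a positive semi - definite gram matrix . the helpers below ( haar sampling via qr of a complex ginibre matrix , the bialternant evaluated at eigenvalues , weyl 's dimension formula for the normalization ) are our own illustration , not code from the paper .

```python
import numpy as np
from fractions import Fraction

def schur_at(lam, xs):
    """s_lambda(x_1, ..., x_N) via the bialternant (assumes distinct eigenvalues)."""
    N = len(lam)
    num = np.linalg.det(np.array([[xs[i] ** (lam[j] + N - 1 - j) for j in range(N)]
                                  for i in range(N)], dtype=complex))
    den = np.prod([xs[i] - xs[j] for i in range(N) for j in range(i + 1, N)])
    return num / den

def weyl_dim(lam):
    """s_lambda(1, ..., 1) = prod_{i<j} (lam_i - lam_j + j - i) / (j - i)."""
    N, d = len(lam), Fraction(1)
    for i in range(N):
        for j in range(i + 1, N):
            d *= Fraction(lam[i] - lam[j] + j - i, j - i)
    return int(d)

def haar_unitary(n, rng):
    z = (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) / np.sqrt(2)
    qmat, r = np.linalg.qr(z)
    return qmat * (np.diag(r) / np.abs(np.diag(r)))   # standard phase correction

lam = (2, 1, 0)
chi = lambda g: schur_at(lam, np.linalg.eigvals(g)) / weyl_dim(lam)

rng = np.random.default_rng(7)
gs = [haar_unitary(len(lam), rng) for _ in range(6)]
gram = np.array([[chi(gi @ gj.conj().T) for gj in gs] for gi in gs])

# property 2: the matrix [chi(g_i g_j^{-1})] is hermitian non-negative definite
gram = (gram + gram.conj().T) / 2          # symmetrize away floating-point noise
print(np.round(np.linalg.eigvalsh(gram), 6))   # all eigenvalues >= 0 up to roundoff
```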
voiculescu gave a list of extreme characters of @xmath18 ; later , independently , boyer @xcite and vershik - kerov @xcite discovered that the classification theorem for the characters of @xmath18 follows from the result of edrei @xcite on the characterization of totally positive toeplitz matrices . nowadays , several other proofs of the voiculescu edrei classification theorem are known , see @xcite , @xcite , @xcite . the theorem itself reads : [ theorem_voiculescu ] the extreme characters of @xmath18 are parameterized by the points @xmath68 of the infinite - dimensional domain @xmath69 where @xmath70 is the set of sextuples @xmath71 such that @xmath72 @xmath73 the corresponding extreme character is given by the formula @xmath74 where @xmath75 our interest in characters is based on the following fact . [ prop_character_is_limit ] every extreme normalized character @xmath60 of @xmath18 is a uniform limit of extreme characters of @xmath19 . in other words , for every @xmath60 there exists a sequence @xmath76 such that for every @xmath13 , @xmath77 uniformly on the torus @xmath78 , where @xmath79 . in the context of representation theory of @xmath18 this statement was first observed by kerov and vershik @xcite . however , this is just a particular case of a very general convex analysis theorem which has been reproved many times in various contexts ( see e.g. @xcite , @xcite , @xcite ) . the above proposition raises the question of which sequences of characters of @xmath19 approximate characters of @xmath18 . a solution to this problem was given by kerov and vershik @xcite . let @xmath80 be a young diagram with row lengths @xmath81 , column lengths @xmath82 , and whose main diagonal has length @xmath83 . introduce _ modified frobenius coordinates _ : @xmath84 note that @xmath85 . given a signature @xmath86 , we associate two young diagrams @xmath87 and @xmath88 to it : the row lengths of @xmath87 are the positive @xmath89 s , while the row lengths of @xmath88 are minus the negative ones . in this way we get two sets of modified frobenius coordinates : @xmath90 , @xmath91 and @xmath92 , @xmath93 . [ theorem_u_vk ] let @xmath94 and suppose that the sequence @xmath76 is such that @xmath95 and @xmath96 . then for every @xmath13 , @xmath97 uniformly on the torus @xmath78 . theorem [ theorem_u_vk ] is an immediate corollary of our results on the asymptotics of normalized schur polynomials , and a new short proof is given in section [ section_u_infty ] . note the remarkable multiplicativity of the voiculescu edrei formula for the characters of @xmath18 : the value of a character on a given matrix ( element of @xmath18 ) is expressed as a product of the values of a single function at each of its eigenvalues . there exists an independent representation theoretic argument explaining this multiplicativity . clearly , no such multiplicativity exists for finite @xmath5 , i.e. for the characters of @xmath19 . however , we claim that the formula should be viewed as a manifestation of _ approximate multiplicativity _ for ( normalized ) characters of @xmath19 . to explain this point of view we start from @xmath98 ; in this case the formula simplifies to @xmath99 . more generally , proposition [ proposition_multivariate_expansion ] claims that for any @xmath13 the formula implies that , informally , @xmath100 this formula therefore states that normalized characters of @xmath19 are approximately multiplicative and that they become multiplicative as @xmath25 . this is somewhat similar to the work of diaconis and freedman @xcite on _ finite exchangeable sequences_.
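the approximate multiplicativity just described can be seen directly on small examples : for a fixed young diagram padded by zeros , the two - variable normalized character approaches the product of the one - variable ones as @xmath5 grows . the sketch below evaluates everything exactly through the jacobi - trudi determinant ( a standard identity , used here only to avoid the 0/0 issue of the bialternant at repeated variables ) ; the helper names are ours .

```python
import sympy as sp

def h_values(vals, kmax):
    """complete homogeneous symmetric polynomials h_0, ..., h_kmax at the values."""
    h = [sp.Integer(1)] + [sp.Integer(0)] * kmax
    for v in vals:
        h = [sum(h[k - j] * v**j for j in range(k + 1)) for k in range(kmax + 1)]
    return h

def schur_value(lam, vals):
    """s_lambda(vals) via the jacobi-trudi determinant det[h_{lam_i - i + j}]."""
    lam = [p for p in lam if p > 0]
    ell = len(lam)
    h = h_values(vals, max(lam) + ell)
    mat = sp.Matrix(ell, ell, lambda i, j: h[lam[i] - i + j] if lam[i] - i + j >= 0
                    else sp.Integer(0))
    return mat.det()

def normalized(lam, partial, N):
    """s_lambda(partial, 1, ..., 1) / s_lambda(1, ..., 1) in N variables."""
    point = list(partial) + [sp.Integer(1)] * (N - len(partial))
    return schur_value(lam, point) / schur_value(lam, [sp.Integer(1)] * N)

x, y, lam = sp.Rational(3, 2), sp.Rational(1, 2), (2, 1)
for N in (4, 8, 16, 32):
    gap = normalized(lam, [x, y], N) \
        - normalized(lam, [x], N) * normalized(lam, [y], N)
    print(N, float(gap))      # the gap tends to zero as N grows
```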
in particular , in the same way as results of @xcite immediately imply de finetti s theorem ( see e.g. @xcite ) , our results immediately imply the multiplicativity of characters of @xmath18 . in @xcite a @xmath0deformation of the notion of character of @xmath18 was suggested . analogously to proposition [ prop_character_is_limit ] , a @xmath0character is a limit of schur functions , but with different normalization . this time the sequence @xmath101 should be such that for every @xmath13 @xmath102 converges uniformly on the set @xmath103 . an analogue of theorem [ theorem_u_vk ] is the following one : [ theorem_u_q_approx ] let @xmath104 . extreme @xmath0characters of @xmath18 are parameterized by the points of set @xmath105 of all non - decreasing sequences of integers : @xmath106 suppose that a sequence @xmath76 is such that for any @xmath107 @xmath108 then for every @xmath13 @xmath109 converges uniformly on the set @xmath103 and these limits define the @xmath0character of @xmath18 . using the @xmath0analogues of formulas and we give in section [ section_u_q_infty ] a short proof of the second part of theorem [ theorem_u_q_approx ] , see theorem [ theorem_q_limit_multivar ] . this should be compared with @xcite , where the proof of the same statement was quite involved . we go beyond the results of @xcite , give new formulas for the @xmath0characters and explain what property replaces the multiplicativity of voiculescu edrei characters given in theorem [ theorem_voiculescu ] . consider a tiling of a domain drawn on the regular triangular lattice of the kind shown at figure [ fig_polyg_domain ] with rhombi of 3 types , where each rhombus is a union of 2 elementary triangles . such rhombi are usually called _ lozenges _ and they are shown at figure [ figure_loz ] . the configuration of the domain is encoded by the number @xmath5 which is its width and @xmath5 integers @xmath110 which are the positions of _ horizontal lozenges _ sticking out of the right boundary . if we write @xmath111 , then @xmath28 is a signature of size @xmath5 , see left panel of figure [ fig_polyg_domain ] . due to combinatorial constraints the tilings of such domain are in correspondence with tilings of a certain polygonal domain , as shown on the right panel of figure [ fig_polyg_domain ] let @xmath112 denote the domain encoded by a signature @xmath28 . it is well known that each lozenge tiling can be identified with a stepped surface in @xmath113 ( the three types of lozenges correspond to the three slopes of this surface ) and with a perfect matching of a subgraph of a hexagonal lattice , see e.g.@xcite . note that there are finitely many tilings of @xmath112 and let @xmath114 denote a uniformly random lozenge tiling of @xmath112 . the interest in lozenge tilings is caused by their remarkable asymptotic behavior . when @xmath5 is large the rescaled stepped surface corresponding to @xmath114 concentrates near a deterministic limit shape . in fact , this is true also for more general domains , see @xcite . one feature of the limit shape is the formation of so called _ frozen regions _ ; in terms of tilings , these are the regions where asymptotically with high probability only single type of lozenges is observed . this effect is visualized in figure [ figure_hex ] , where a sample from the uniform measure on tilings of the simplest tilable domain hexagon is shown . 
it is known that in this case the boundary of the frozen region is the inscribed ellipse , see @xcite , for more general polygonal domains the frozen boundary is an inscribed algebraic curve , see @xcite and also @xcite . in this article we study the local behavior of lozenge tiling near a _ turning point _ of the frozen boundary , which is the point where the boundary of the frozen region touches ( and is tangent to ) the boundary of the domain . okounkov and reshetikhin gave in @xcite a non - rigorous argument explaining that the scaling limit of a tiling in such situation should be governed by the _ gue corners _ process ( introduced and studied by baryshnikov @xcite and johansson nordenstam @xcite ) , which is the joint distribution of the eigenvalues of a gaussian unitary ensemble ( gue)random matrix ( i.e. hermitian matrix with independent gaussian entries ) and of its top left corner square submatrices . in one model of tilings of infinite polygonal domains , the proof of the convergence can be based on the determinantal structure of the correlation functions of the model and on the double integral representation for the correlation kernel and it was given in @xcite . another rigorous argument , related to the asymptotics of _ orthogonal polynomials _ exists for the lozenge tilings of hexagon ( as in figure [ figure_hex ] ) , see @xcite , @xcite . given @xmath114 let @xmath115 be the horizontal lozenges at the @xmath13th vertical line from the left . ( horizontal lozenges are shown in blue in the left panel of figure [ fig_polyg_domain ] . ) we set @xmath116 and denote the resulting random signature @xmath117 of size @xmath13 as @xmath118 . further , let @xmath119 denote the distribution of @xmath13 ( ordered ) eigenvalues of a random hermitian matrix from a gaussian unitary ensemble . [ theorem_gue_intro ] let @xmath76 , @xmath120 be a sequence of signatures . suppose that there exist a non - constant piecewise - differentiable weakly decreasing function @xmath121 such that @xmath122 as @xmath25 and also @xmath123 . then for every @xmath13 as @xmath25 we have @xmath124 in the sense of weak convergence , where @xmath125 under the same assumptions as in theorem [ theorem_gue_intro ] the ( rescaled ) joint distribution of @xmath126 horizontal lozenges on the left @xmath13 lines weakly converges to the joint distribution of the eigenvalues of the @xmath13 top - left corners of a @xmath36 matrix from a gue ensemble . note that , in principle , our domains may approximate a non polygonal limit domain as @xmath25 , thus , the results of @xcite describing the limit shape in terms of algebraic curves are not applicable here and not much is known about the exact shape of the frozen boundary . in particular , even the explicit expression for the coordinate of the point where the frozen boundary touches the left boundary ( which we get as a side result of theorem [ theorem_gue_intro ] ) seems not to be present in the literature . our approach to the proof of theorem [ theorem_gue_intro ] is the following : we express the expectations of certain observables of uniformly random lozenge tilings through normalized schur polynomials @xmath127 and investigate the asymptotics of these polynomials . 
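before stating the theorem , it may help to see the limiting object concretely . the gue corners process is easy to simulate : draw one gue matrix , record the ordered spectra of its top - left corners , and observe that consecutive spectra interlace . the sketch below illustrates only this limiting object , not the tiling model , and the gaussian normalization is the usual gue convention rather than anything specific to the paper .

```python
import numpy as np

def gue(n, rng):
    """random hermitian matrix in the usual gue normalization."""
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (a + a.conj().T) / 2.0

rng = np.random.default_rng(42)
kmax = 5
m = gue(kmax, rng)

# ordered spectra of the top-left k x k corners, k = 1, ..., kmax
spectra = [np.sort(np.linalg.eigvalsh(m[:k, :k])) for k in range(1, kmax + 1)]
for k, eig in enumerate(spectra, start=1):
    print(k, np.round(eig, 3))

# eigenvalues of consecutive corners interlace, as in the corners process
for k in range(1, kmax):
    lower, upper = spectra[k - 1], spectra[k]
    assert np.all(upper[:-1] <= lower) and np.all(lower <= upper[1:])
```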
in this case we prove and use the following asymptotic expansion ( given in proposition [ prop_convergence_gue_case ] and proposition [ gue_multivar_asymptotics ] ) @xmath128 we believe that our approach can be extended to a natural @xmath0deformation of uniform measure , which assigns the weight @xmath129 to lozenge tiling with volume @xmath130 below the corresponding stepped surface ; and also to lozenge tilings with axial symmetry , as in @xcite , @xcite . in the latter cases the schur polynomials are replaced with characters of orthogonal or symplectic groups and the limit object also changes . we postpone the thorough study of these cases to a future publication . we note that there might be another approach to the proof of theorem [ theorem_gue_intro ] . recently there was progress in understanding random tilings of polygonal domains , petrov found double integral representations for the correlation kernel describing the local structure of tilings of a wide class of polygonal domains , see @xcite ( and also @xcite for a similar result in context of random matrices ) . starting from these formulas , one could try to prove the gue corners asymptotics along the lines of @xcite . an _ alternating sign matrix _ of size @xmath5 is a @xmath24 matrix whose entries are either @xmath43 , @xmath41 or @xmath131 , such that the sum along every row and column is @xmath41 and , moreover , along each row and each column the nonzero entries alternate in sign . alternating sign matrices are in bijection with configurations of the six - vertex model with domain - wall boundary conditions as shown at figure [ fig_asm ] , more details on this bijection are given in section [ section_asm ] . a good review of the six vertex model can be found e.g. in the book @xcite by baxter . 2 @xmath132 interest in asms from combinatorial perspective emerged since their discovery in connection with dodgson condensation algorithm for determinant evaluations . initially , questions concerned enumeration problems , for instance , finding the total number of asms of given size @xmath133 ( this was the long - standing _ asm conjecture _ proved by zeilberger @xcite and kuperberg @xcite , the full story can be found in the bressoud s book @xcite ) . physicists interest stems from the fact that asms are in one - to - one bijection with configurations of the six vertex model . many questions on asms still remain open . examples of recent breakthroughs include the razumov stroganov @xcite conjecture relating asms to yet another model of statistical mechanics ( so - called o(1 ) loop model ) , which was finally proved very recently by cantinni and spontiello @xcite , and the still open question on a bijective proof of the fact that totally symmetric self - complementary plane partitions and asms are equinumerous . a brief up - to - date introduction to the subject can be found e.g. in @xcite . our interest in asms and the six vertex model is probabilistic . we would like to know how a _ uniformly random _ asm of size @xmath133 looks like when @xmath133 is large . conjecturally , the features of this model should be similar to those of lozenge tilings : we expect the formation of a limit shape and various connections with random matrices . the properties of the limit shape for asms were addressed by colomo and pronko @xcite , however their arguments are mostly not mathematical , but physical . in the present article we prove a partial result toward the following conjecture . [ conjecture_asm ] fix any @xmath13 . 
as @xmath134 the probability that the number of @xmath135s in the first @xmath13 rows of a uniformly random asm of size @xmath133 is maximal ( i.e. there is one @xmath135 in second row , two @xmath135s in third row , etc ) tends to @xmath41 , and , thus , @xmath41s in first @xmath13 rows are interlacing . after proper centering and rescaling , the distribution of the positions of @xmath41s tends to the gue corners process as @xmath134 . let @xmath136 denote the sum of coordinates of @xmath41s minus the sum of coordinates of @xmath135s in the @xmath13th row of the uniformly random asm of size @xmath133 . we prove that the centered and rescaled random variables @xmath136 converge to the collection of i.i.d . gaussian random variables as @xmath134 . [ theorem_asm_intro ] for any fixed @xmath13 the random variable @xmath137 weakly converges to the normal random variable @xmath138 . moreover , the joint distribution of any collection of such variables converges to the distribution of independent normal random variables @xmath138 . * we also prove a bit stronger statement , see theorem [ theorem_asm ] for the details . note that theorem [ theorem_asm_intro ] agrees with conjecture [ conjecture_asm ] . indeed , if the latter holds , then @xmath136 converges to the difference of the sums of the eigenvalues of a @xmath36 gue random matrix and of its @xmath139 top left submatrix . but these sums are the same as the traces of the corresponding matrices , therefore , the difference of sums equals the bottom right matrix element of the @xmath36 matrix , which is a gaussian random variable by the definition of gue . our proof of theorem [ theorem_asm_intro ] has two components . first , a result of okada @xcite , based on earlier work of izergin and korepin @xcite , @xcite , shows that sums of certain quantities over all asms can be expressed through schur polynomials ( in an equivalent form this was also shown by stroganov @xcite ) . second , our method gives the asymptotic analysis of these polynomials . in fact , we claim that theorem [ theorem_asm_intro ] together with an additional probabilistic argument implies conjecture [ conjecture_asm ] . however , this argument is unrelated to the asymptotics of symmetric polynomials and , thus , is left out of the scope of the ( already long ) present paper ; the proof of conjecture [ conjecture_asm ] based on theorem [ theorem_asm_intro ] is presented by one of the authors in @xcite . in the literature one can find another probability measure on asms assigning the weight @xmath140 to the matrix with @xmath141 ones . for this measure there are many rigorous mathematical results , due to the connection to the uniform measure on _ domino tilings of the aztec diamond _ , see @xcite , @xcite . the latter measure can be viewed as a _ determinantal point process _ , which gives tools for its analysis . an analogue of conjecture [ conjecture_asm ] for the tilings of aztec diamond was proved by johansson and nordenstam @xcite . in regard to the combinatorial questions on asms , we note that there has been interest in _ refined _ enumerations of alternating sign matrices , i.e. counting the number of asms with fixed positions of @xmath41s along the boundary . in particular , colomo pronko @xcite , @xcite , behrend @xcite and ayyer romik @xcite found formulas relating @xmath13refined enumerations to @xmath41refined enumerations for asms . 
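for readers to whom asms are new , the defining conditions and the total count ( the asm conjecture mentioned above , with the product formula prod_{k=0}^{n-1} ( 3k+1 ) ! / ( n+k ) ! proved by zeilberger and kuperberg ) can be checked by brute force for tiny sizes . the enumeration below uses the standard equivalent characterization that every partial sum along each row and column is 0 or 1 and every full sum is 1 ; the code is our own illustration .

```python
from itertools import product
from math import factorial
from fractions import Fraction

def is_asm(rows):
    """partial sums along every row and column stay in {0, 1} and end at 1."""
    for line in list(rows) + list(zip(*rows)):
        partial = 0
        for entry in line:
            partial += entry
            if partial not in (0, 1):
                return False
        if partial != 1:
            return False
    return True

def count_asm_bruteforce(n):
    rows = list(product((-1, 0, 1), repeat=n))
    return sum(is_asm(m) for m in product(rows, repeat=n))

def count_asm_formula(n):
    """prod_{k=0}^{n-1} (3k+1)! / (n+k)!"""
    value = Fraction(1)
    for k in range(n):
        value *= Fraction(factorial(3 * k + 1), factorial(n + k))
    return int(value)

for n in (1, 2, 3):                 # brute force is only feasible for tiny n
    assert count_asm_bruteforce(n) == count_asm_formula(n)
print([count_asm_formula(n) for n in range(1, 8)])   # 1, 2, 7, 42, 429, 7436, 218348
```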
some of these refined enumeration formulas are closely related to particular cases of our multivariate formulas ( theorem [ theorem_multivariate_schur_1 ] ) for staircase young diagrams . recently found parafermionic observables in the so - called completely packed @xmath1 dense loop model in a strip are also simply related to symmetric polynomials , see @xcite . the @xmath1 dense loop model is one of the representations of the percolation model on the square lattice . for the critical percolation models similar observables and their asymptotic behavior were studied ( see e.g. @xcite ) , however , the methods involved are usually completely different from ours . a configuration of the @xmath1 loop model in a vertical strip consists of two parts : a tiling of the strip on a square grid of width @xmath142 and infinite height with squares of two types shown in figure [ fig : loop_model_squares ] ( left panel ) , and a choice of one of the two types of boundary conditions for each @xmath143 segment along each of the vertical boundaries of the strip ; the types appearing at the left boundary are shown in figure [ fig : loop_model_squares ] ( right panel ) . let @xmath144 denote the set of all configurations of the model in the strip of width @xmath142 . an element of @xmath145 is shown in figure [ fig : loop_model_squares ] . note that the arcs drawn on squares and boundary segments form closed loops and paths joining the boundaries . therefore , the elements of @xmath144 have an interpretation as collections of non - intersecting paths and closed loops . [ figures : the two types of squares and the two types of left - boundary segments , and a sample configuration of the model in a strip of width @xmath142 , with two marked points labeled x and y and with the boundary parameters @xmath147 and @xmath148 indicated at the left and right boundaries . ] in the simplest homogeneous case a probability distribution on @xmath144 is defined by declaring the choice of one of the two types of squares to be an independent bernoulli random variable for each square of the strip and for each segment of the boundary ; i.e. , for each square of the strip we flip an unbiased coin to choose one of the two types of squares ( shown in figure [ fig : loop_model_squares ] ) and similarly for the boundary conditions .
more generally , the type of a square is chosen using a ( possibly signed or even complex ) weight defined as a certain function of its horizontal coordinate and depending on @xmath142 parameters @xmath149 ; two other parameters @xmath147 , @xmath148 control the probabilities of the boundary conditions and , using a parameter @xmath0 , the whole configuration is further weighted by its number of closed loops . we refer the reader to @xcite and references therein for the exact dependence of weights on the parameters of the model and for the explanation of the choices of parameters . fix two points @xmath42 and @xmath150 and consider a configuration @xmath151 . there are finitely many paths passing between @xmath42 and @xmath150 . for each such path @xmath152 we define the current @xmath153 as @xmath43 if @xmath152 is a closed loop or joins points of the same boundary ; @xmath41 if @xmath152 joins the two boundaries and @xmath42 lies above @xmath152 ; @xmath131 if @xmath152 joins the two boundaries and @xmath42 lies below @xmath152 . the total current @xmath154 is the sum of @xmath153 over all paths passing between @xmath42 and @xmath150 . mean total current _ @xmath155 is defined as the expectation of @xmath156 . two important properties of @xmath155 are skew - symmetry @xmath157 and additivity @xmath158 these properties allow to express @xmath159 as a sum of several instances of the mean total current between two horizontally adjacent points @xmath160 and the mean total current between two vertically adjacent points @xmath161 the authors of @xcite present a formula for @xmath162 and @xmath163 which , based on certain assumptions , expresses them through the symplectic characters @xmath164 where @xmath165 . the precise relationship is given in section [ section : dense_loop_model ] . our approach allows us to compute the asymptotic behavior of the formulas of @xcite as the lattice width @xmath166 , see theorem [ theorem_dense_loop ] . in particular , we prove that the leading term in the asymptotic expansion is independent of the boundary parameters @xmath147 , @xmath148 . this problem was presented to the authors by de gier @xcite , @xcite during the program `` random spatial processes '' at msri , berkeley . let @xmath22 and @xmath167 be two @xmath24 hermitian matrices with eigenvalues @xmath168 and @xmath169 , respectively . the harish chandra formula @xcite , @xcite ( sometimes known also as itzykson zuber @xcite formula in physics literature ) is the following evaluation of the integral over the unitary group : @xmath170 where the integration is with respect to the normalized haar measure on the unitary group @xmath19 . comparing with the definition of schur polynomials and using weyl s dimension formula @xmath171 we observe that when @xmath172 the above matrix integral is the normalized schur polynomial times explicit product , i.e.@xmath173 guionnet and mada studied ( after some previous results in the physics literature , see @xcite and references therein ) the asymptotics of the above integral as @xmath25 when the rank of @xmath22 is finite and does not depend on @xmath5 . this is precisely the asymptotics of . therefore , our methods ( in particular , propositions [ proposition_convergence_mildest ] , [ proposition_convergence_strongest ] , [ proposition_convergence_extended ] ) give a new proof of some of the results of @xcite . 
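the harish chandra itzykson zuber integral itself can be checked numerically for small @xmath5 , which makes its identification with a normalized schur polynomial less mysterious . the sketch below samples haar - distributed unitary matrices ( qr of a complex ginibre matrix with the usual phase correction ) and compares a monte carlo average of exp ( tr ( a u b u* ) ) with the standard determinantal evaluation det [ exp ( a_i b_j ) ] divided by the two vandermonde determinants and multiplied by prod_{p=1}^{n-1} p ! ; this constant and these conventions are the textbook ones , stated here as an assumption rather than quoted from the text .

```python
import numpy as np
from math import factorial

def haar_unitary(n, rng):
    """haar-distributed unitary matrix via qr of a complex ginibre matrix."""
    z = (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))      # fix the phases

a = np.array([0.9, 0.1])          # eigenvalues of the fixed matrix a
b = np.array([0.7, -0.3])         # eigenvalues of b
A, B, n = np.diag(a), np.diag(b), len(a)

# monte carlo estimate of the integral of exp(tr(A U B U*)) over U(n)
rng = np.random.default_rng(1)
samples = 100_000
total = 0.0
for _ in range(samples):
    u = haar_unitary(n, rng)
    total += np.exp(np.trace(A @ u @ B @ u.conj().T).real)
monte_carlo = total / samples

# determinantal evaluation (standard statement of the hciz formula)
constant = np.prod([factorial(p) for p in range(1, n)])
determinant = np.linalg.det(np.exp(np.outer(a, b)))
vandermondes = np.prod([a[i] - a[j] for i in range(n) for j in range(i + 1, n)]) * \
               np.prod([b[i] - b[j] for i in range(n) for j in range(i + 1, n)])
closed_form = constant * determinant / vandermondes

print(monte_carlo, closed_form)   # the two values should agree to well under 1%
```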
in the context of random matrices the asymptotics of this integral in the case when rank of @xmath22 grows as the size of @xmath22 grows was also studied , see e.g.@xcite , @xcite . however , currently we are unable to use our methods for this case . since asymptotics of symmetric polynomials as the number of variables tends to infinity already appeared in various contexts in the literature , it makes sense to compare our approach to the ones used before . in the context of asymptotic representation theory the known approach ( see @xcite , @xcite , @xcite , @xcite ) is to use the so - called _ binomial formulas_. in the simplest case of schur polynomials such formula reads as @xmath174 where the sum is taken over all young diagrams @xmath80 with at most @xmath13 rows , and @xmath175 are certain ( explicit ) coefficients . in the asymptotic regime of theorem [ theorem_u_vk ] the convergence of the left side of implies the convergence of numbers @xmath175 to finite limits as @xmath25 . studying the possible asymptotic behavior of these numbers one proves the limit theorems for normalized schur polynomials . another approach uses the decomposition @xmath176 where the sum is taken over all signatures of length @xmath13 . recently in @xcite and @xcite @xmath36 determinantal formulas were found for the coefficients @xmath177 . again , these formulas allow the asymptotic analysis which leads to the limit theorems for normalized schur polynomials . the asymptotic regime of theorem [ theorem_u_vk ] is distinguished by the fact that @xmath178 is bounded as @xmath25 . this no longer holds when one studies asymptotics of lozenge tilings , asms , or @xmath1 loop model . as far as authors know , in the latter limit regime neither formulas nor give simple ways to compute the asymptotics . the reason for that is the fact that for any fixed @xmath80 both @xmath175 and @xmath177 would converge to zero as @xmath25 and more delicate analysis would be required to reconstruct the asymptotics of normalized schur polynomials . yet another , but similar approach to the proof of theorem [ theorem_u_vk ] was used in @xcite but , as far as authors know , it also does not extend to the regime we need for other applications . on the other hand the random matrix asymptotic regime of @xcite is similar to the one we need for studying lozenge tilings , asms , or @xmath1 loop model . the approach of @xcite is based on the matrix model and the proofs rely on _ large deviations _ for gaussian random variables . however , it seems that the results of @xcite do not suffice to obtain our applications : for @xmath179 only the first order asymptotics ( which is the limit of @xmath180 ) was obtained in @xcite , while our applications require more delicate analysis . it also seems that the results of @xcite ( even for @xmath34 ) can not be applied in the framework of the representation theoretic regime of theorem [ theorem_u_vk ] . we would like to thank jan de gier for presenting the problem on the asymptotics of the mean total current in the @xmath1 dense loop model , which led us to studying the asymptotics of normalized schur functions and beyond . we would also like to thank a. borodin , ph . di francesco , a. okounkov , a. ponsaing , g. olshanski and l. petrov for many helpful discussions . was partially supported by rfbr - cnrs grants 10 - 01 - 93114 and 11 - 01 - 93105 . was partially supported by a simons postdoctoral fellowship . 
this project started while both authors were at msri ( berkeley ) during the random spatial processes program . in this section we set up notations and introduce the symmetric functions of our interest . a _ partition _ ( or a _ young diagram _ ) @xmath28 is a collection of non - negative numbers @xmath181 , such that @xmath182 . the numbers @xmath89 are _ row lengths _ of @xmath28 , and the numbers @xmath183 are _ column lengths _ of @xmath28 . more generally a _ signature _ @xmath28 of size @xmath5 is an @xmath5tuple of integers @xmath184 . the set of all signatures of size @xmath5 is denoted @xmath7 . it is also convenient to introduce _ strict signatures _ , which are @xmath5tuples satisfying strict inequalities @xmath185 ; they from the set @xmath186 . we are going to use the following identification between elements of @xmath187 and @xmath186 : @xmath188 where we set @xmath189 . the subset of @xmath187 ( @xmath186 ) of all signatures ( strict signatures ) with non - negative coordinates is denoted @xmath190 ( @xmath191 ) . one of the main objects of study in this paper are the rational schur functions , which originate as the characters of the irreducible representations of the unitary group @xmath19 ( equivalently , of irreducible rational representations of the general linear group @xmath192 ) . irreducible representations are parameterized by elements of @xmath187 , which are identified with the _ dominant weights _ , see e.g. @xcite or @xcite . the value of the character of the irreducible representation @xmath193 indexed by @xmath194 , on a unitary matrix with eigenvalues @xmath195 is given by the _ schur function _ , @xmath196_{i , j=1}^n}{\prod_{i < j}(u_i - u_j)},\ ] ] which is a symmetric laurent polynomial in @xmath197 . the denominator in is the vandermonde determinant and we denote it through @xmath198 : @xmath199_{i , j=1}^n=\prod_{i < j}(u_i - u_j).\ ] ] when the numbers @xmath200 form a geometric progression , the determinant in can be evaluated explicitly as @xmath201 in particular , sending @xmath202 we get @xmath203 where we used the notation @xmath204 the identity gives the dimension of @xmath193 and is known as the weyl s dimension formula . in what follows we intensively use the normalized versions of schur functions : @xmath205 in particular , @xmath206 the schur functions are characters of type @xmath22 ( according to the classification of root systems ) , their analogues for other types are related to the _ multivariate jacobi polynomials_. for @xmath207 and @xmath208 let @xmath209 denote the classical jacobi polynomials orthogonal with respect to the weight @xmath210 on the interval @xmath211 $ ] , see e.g.@xcite , @xcite . we use the normalization of @xcite , thus , the polynomials can be related to the gauss hypergeometric function @xmath212 : @xmath213 for any strict signature @xmath214 set @xmath215_{i , j=1}^n}{\delta(x_1,\dots , x_n)}.\ ] ] and for any ( non - strict ) @xmath216 define @xmath217 where @xmath218 is a constant chosen so that the leading coefficient of @xmath219 is @xmath41 . the polynomials @xmath219 are ( a particular case of ) @xmath220 multivariate jacobi polynomials , see e.g. @xcite and also @xcite , @xcite , @xcite . we also use their normalized versions @xmath221 again , there is an explicit formula for the denominator in and also for its @xmath0version . 
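the explicit evaluations invoked in this section ( the specialization of schur functions at a geometric progression and the limit giving weyl 's dimension formula ) are standard ; the sketch below spells out the product forms we have in mind , namely prod_{i<j} ( q^{lambda_i+n-i} - q^{lambda_j+n-j} ) / ( q^{n-i} - q^{n-j} ) and its limit prod_{i<j} ( lambda_i - lambda_j + j - i ) / ( j - i ) , and verifies them symbolically for a small signature . these expressions are our reading of the standard formulas , and the helper names are ours .

```python
import sympy as sp

q = sp.symbols("q", positive=True)

def schur_bialternant(lam, xs):
    N = len(lam)
    num = sp.Matrix(N, N, lambda i, j: xs[i] ** (lam[j] + N - 1 - j)).det()
    den = sp.Mul(*[xs[i] - xs[j] for i in range(N) for j in range(i + 1, N)])
    return sp.cancel(num / den)

lam = (3, 1, 0, 0)
N = len(lam)

# specialization at the geometric progression x_i = q^(N - i)
direct = sp.simplify(schur_bialternant(lam, [q ** (N - 1 - i) for i in range(N)]))
m = [lam[i] + N - 1 - i for i in range(N)]       # the shifted coordinates
product = sp.Mul(*[(q ** m[i] - q ** m[j]) / (q ** (N - 1 - i) - q ** (N - 1 - j))
                   for i in range(N) for j in range(i + 1, N)])
assert sp.simplify(direct - product) == 0

# q -> 1 recovers weyl's dimension formula prod_{i<j} (lam_i - lam_j + j - i)/(j - i)
dim = sp.Mul(*[sp.Rational(lam[i] - lam[j] + j - i, j - i)
               for i in range(N) for j in range(i + 1, N)])
assert sp.simplify(direct.subs(q, 1) - dim) == 0
print(dim)    # = 45, the dimension of the corresponding representation of u(4)
```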
for special values of parameters @xmath222 and @xmath223 the functions @xmath224 can be identified with spherical functions of classical riemannian symmetric spaces of compact type , in particular , with normalized characters of orthogonal and symplectic groups , see e.g. ( * ? ? ? * section 6 ) . let us give more details on the latter case of the symplectic group @xmath225 , as we need it for one of our applications . this case corresponds to @xmath226 and here the formulas can be simplified . the value of character of irreducible representation of @xmath225 parameterized by @xmath216 on symplectic matrix with eigenvalues @xmath227 is given by ( see e.g. @xcite , @xcite ) @xmath228_{i , j=1}^n } { \det\left[x_i^{n+1-j } - x_i^{-n-1+j}\right]_{i , j=1}^n}.\ ] ] the denominator in the last formula can be evaluated explicitly and we denote it @xmath229 @xmath230_{i , j=1}^n \\ = \prod_i ( x_i - x_i^{-1 } ) \prod_{i < j}(x_i+x_i^{-1 } - ( x_j+x_j^{-1 } ) ) = \dfrac{\prod_{i < j}(x_i - x_j)(x_ix_j-1)\prod_i ( x_i^2 - 1 ) } { ( x_1\cdots x_n)^n}.\end{gathered}\ ] ] the normalized symplectic character is then defined as @xmath231 in particular @xmath232 and both denominators again admit explicit formulas . in most general terms , in the present article we study the symmetric functions @xmath233 , @xmath234 , @xmath235 , their asymptotics as @xmath25 and its applications . * some further notations . * we intensively use the @xmath0algebra notations @xmath236_q = \frac{q^m-1}{q-1},\quad [ a]_q!=\prod_{m=1}^a [ m]_q,\ ] ] and @xmath0-pochhammer symbol @xmath237 since there are lots of summations and products in the text where @xmath238 plays the role of the index , we write @xmath239 for the imaginary unit to avoid the confusion . in this section we derive integral formulas for normalized characters of one variable and also express the multivariate normalized characters as determinants of differential ( or , sometimes , difference ) operators applied to the product of the single variable normalized characters . we first exhibit some general formulas , which we later specialize to the cases of schur functions , symplectic characters and multivariate jacobi polynomials . [ definition_class ] for a given sequence of numbers @xmath240 , a collection of functions @xmath241 , @xmath120 , @xmath242 ( or @xmath191 ) is called a _ class of determinantal symmetric functions _ with parameter @xmath243 , if there exist functions @xmath244 , @xmath245 , @xmath246 , numbers @xmath247 , and linear operator @xmath248 such that for all @xmath5 and @xmath80 we have 1 . @xmath249_{i , j=1}^n}{\delta(x_1,\ldots , x_n)}\ ] ] 2 . @xmath250 3 . @xmath251 ( @xmath252 for the case of @xmath253 and @xmath254 for the case @xmath255 ) are eigenfunctions of @xmath248 acting on @xmath42 with eigenvalues @xmath256 , i.e. @xmath257 4 . @xmath258 for all @xmath259 as above . [ proposition_general_multivar ] for @xmath260 , as in definition [ definition_class ] we have the following formula @xmath261_{i , j=1}^k}{\delta(x_1,\ldots , x_k ) } \prod_{i=1}^k\left ( \frac{{\mathcal{a}}_{\mu}(x_i,\theta_1,\ldots,\theta_{n-1})}{{\mathcal{a}}_{\mu}(\theta_1,\ldots,\theta_n ) } \prod_{j=1}^{n-1}(x_i-\theta_j)\frac { c_n}{c_{n-1}}\right),\end{gathered}\ ] ] where @xmath262 is operator @xmath248 acting on variable @xmath32 . * remark . 
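The two product expressions quoted above for the symplectic denominator det[x_i^{N+1-j} - x_i^{-(N+1-j)}] are easy to confirm on a small example; the sketch below (sympy, a generic rational point, N = 3) checks the determinant against both products. This is only a numerical sanity check of the displayed identity.

```python
# Sketch: numerical check of the product formulas quoted above for the
# symplectic denominator det[ x_i^(N+1-j) - x_i^(-(N+1-j)) ], at a generic
# rational point with N = 3.
from sympy import Matrix, Rational, prod, simplify

def sp_denominator_det(xs):
    N = len(xs)
    return Matrix(N, N, lambda i, j: xs[i] ** (N - j) - xs[i] ** (-(N - j))).det()

def sp_denominator_product(xs):
    # prod_i (x_i - 1/x_i) * prod_{i<j} ((x_i + 1/x_i) - (x_j + 1/x_j))
    N = len(xs)
    return (prod(x - 1 / x for x in xs)
            * prod((xs[i] + 1 / xs[i]) - (xs[j] + 1 / xs[j])
                   for i in range(N) for j in range(i + 1, N)))

def sp_denominator_product_alt(xs):
    # prod_{i<j} (x_i - x_j)(x_i x_j - 1) * prod_i (x_i^2 - 1) / (x_1...x_N)^N
    N = len(xs)
    num = prod((xs[i] - xs[j]) * (xs[i] * xs[j] - 1)
               for i in range(N) for j in range(i + 1, N))
    return num * prod(x ** 2 - 1 for x in xs) / prod(xs) ** N

xs = [Rational(2), Rational(3, 2), Rational(5, 3)]
a = sp_denominator_det(xs)
print(simplify(a - sp_denominator_product(xs)),
      simplify(a - sp_denominator_product_alt(xs)))   # both differences are 0
```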
* since operators @xmath262 commute , we have @xmath263_{i , j=1}^k=\prod_{i < j}(t_i - t_j).\ ] ] we also note that some of the denominators in can be grouped in the compact form @xmath264 moreover , in our applications , the coefficients @xmath265 will be inversely proportional to @xmath266 , so we will be able to write alternative formulas where the prefactors are simple ratios of vandermondes . we will compute the determinant from property ( 1 ) of @xmath267 by summing over all @xmath36 minors in the first @xmath13 columns . the rows will be indexed by @xmath268 and @xmath269 . @xmath270 will be the complement of @xmath271 in @xmath272 $ ] . we have @xmath273 for each set @xmath271 we have @xmath274 we also have that @xmath275_{\ell , j=1}^k\sum_{\sigma\in s_k } ( -1)^{\sigma } \prod_{\ell=1}^k g(x_{\sigma_i};\mu_{i_\ell})\\ = \sum_{\sigma \in s_k } ( -1)^{\sigma}\det\bigg[\alpha(\mu_{i_\ell})^{j-1}g(x_{\sigma_j};\mu_{i_\ell})\bigg]_{\ell , j=1}^k = \sum_{\sigma \in s_k } ( -1)^{\sigma}\det\bigg[t_{\sigma_j}^{j-1}g(x_{\sigma_j};\mu_{i_\ell})\bigg]_{\ell , j=1}^k\\ = \det\bigg [ t_i^{j-1}\bigg]_{\ell , j=1}^k \sum_{\sigma \in s_k}\prod_{\ell=1}^k g(x_{\sigma_i};\mu_{i_\ell}).\end{gathered}\ ] ] combining , and we get @xmath276_{i , j=1}^k}{\delta(x_1,\ldots , x_k ) } \sum_{i=\{i_1<i_2<\cdots < i_k\}}\sum_{\sigma\in s(k)}\prod_\ell \frac{g(x_\ell;\mu_{i_{\sigma_\ell } } ) } { \beta(\mu_{i_{\sigma_\ell } } ) \prod_{j \neq i_{\sigma_\ell } } ( \alpha(\mu_{i_{\sigma_\ell}})-\alpha(\mu_j))}.\end{gathered}\ ] ] note that double summation in the last formula is a summation over all ( ordered ) collections of distinct numbers . we can also include into the sum the terms where some indices @xmath277 coincide , since application of the vandermonde of linear operators annihilates such terms . therefore , equals @xmath278_{i , j=1}^k } { \delta(x_1,\ldots , x_k)}\prod_{\ell=1}^k \sum_{i_\ell=1}^n \frac{g(x_\ell;\mu_{i_\ell } ) } { \beta(\mu_{i_\ell})\prod_{j \neq i_\ell } ( \alpha(\mu_{i_\ell})-\alpha(\mu_j))}.\ ] ] when @xmath34 the operators and the product over @xmath279 disappear , so we see that the remaining sum is exactly the univariate ratio @xmath280 and we obtain the desired formula . [ proposition_integral_general ] under the assumptions of definition [ definition_class ] we have the following integral formula for the normalized univariate @xmath281 @xmath282 here the contour @xmath45 includes only the poles of the integrand at @xmath283 , @xmath284 . as a byproduct in the proof of proposition [ proposition_general_multivar ] we obtained the following formula : @xmath285 evaluating the integral in as the sum of residues we arrive at the right side of . here we specialize the formulas of section [ section_general ] to the schur functions . rational schur functions @xmath286 ( as above we identify @xmath194 with @xmath287 ) are class of determinantal functions with @xmath288 @xmath289_q ! } , \quad [ tf](x)=\frac{f(qx)-f(x)}{q-1}.\ ] ] this immediately follows from the definition of schur functions and evaluation formula . propositions [ proposition_general_multivar ] and [ proposition_integral_general ] specialize to the following theorems . 
[ theorem_multivariate_schur_q ] for any signature @xmath194 and any @xmath290 we have @xmath291_q!}{\prod_{i=1}^k\prod_{j=1}^{n - k } ( x_i - q^{j-1})}\times \\ \frac{\det\big [ d_{i , q}^{j-1}\big]_{i , j=1}^k}{\delta(x_1,\ldots , x_k ) } \prod_{i=1}^k \frac{s_{\lambda}(x_i;n , q ) \prod_{j=1}^{n-1}(x_i - q^{j-1})}{[n-1]_q!},\end{gathered}\]]where @xmath292 is the difference operator acting on the function @xmath293 by the formula @xmath294(x_i)=\frac{f(qx_i)-f(x_i)}{q-1}.\ ] ] [ theorem_integral_representation_schur_q ] for any signature @xmath194 and any @xmath295 other than @xmath43 or @xmath296 , @xmath297 we have @xmath298_q!q^{\binom{n-1}{2}}(q-1)^{n-1 } } { \prod_{i=1}^{n-1}(x - q^{i-1})}\cdot \frac{\ln(q)}{2\pi { \mathbf i } } \oint_c \frac{x^zq^{z}}{\prod_{i=1}^n(q^z - q^{\lambda_i+n - i } ) } dz,\ ] ] where the contour @xmath45 includes the poles at @xmath299 and no other poles of the integrand . * remark . * there is an alternative derivation of theorem [ theorem_integral_representation_schur_q ] suggested by a. okounkov . let @xmath300 with @xmath301 . the definition of schur polynomials implies the following symmetry for any @xmath302 @xmath303 using this symmetry , @xmath304 where @xmath305 is the complete homogeneous symmetric function . an integral representation for @xmath306 can be obtained using their generating function ( see e.g. ( * ? ? ? * chapter i , section 2 ) ) @xmath307 extracting @xmath308 as @xmath309 we arrive at the integral representation equivalent to theorem [ theorem_integral_representation_schur_q ] . in fact , the symmetry holds in greater generality : one can replace schur functions with _ macdonald polynomials _ , which are their @xmath310deformation , see ( * ? ? ? * chapter vi ) . this means that , perhaps , theorem [ theorem_integral_representation_schur_q ] can be extended to the macdonald polynomials . on the other hand , we do not know whether a simple analogue of theorem [ theorem_multivariate_schur_q ] for macdonald polynomials exists . sending @xmath202 in theorems [ theorem_multivariate_schur_q ] , [ theorem_integral_representation_schur_q ] we get the following statements . [ theorem_multivariate_schur_1 ] for any signature @xmath194 and any @xmath290 we have @xmath311_{i , j=1}^{k } } { \delta(x_1,\ldots , x_k)}\prod_{j=1}^k s_{\lambda}(x_j;n,1)(x_j-1)^{n-1},\end{gathered}\ ] ] where @xmath312 is the differential operator @xmath313 . [ theorem_integral_representation_schur_1 ] for any signature @xmath194 and any @xmath295 other than @xmath43 or @xmath41 we have @xmath314 where the contour @xmath45 includes all the poles of the integrand . let us state and prove several corollaries of theorem [ theorem_multivariate_schur_1 ] . for any integers @xmath315 , such that @xmath316 , define the polynomial @xmath317 as @xmath318,\ ] ] it is easy to see ( e.g. by induction on @xmath319 ) that @xmath320 is a polynomial in @xmath42 of degree @xmath321 and its coefficients are bounded as @xmath322 . also , @xmath323 .
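The alternative derivation sketched in the remark above rests on the generating function of the complete homogeneous symmetric functions, sum_k h_k t^k = prod_i (1 - x_i t)^{-1}. This is a standard fact quoted here from memory rather than from the hidden display; the short check below extracts the coefficients of this product and compares them with h_k computed directly from its monomial expansion.

```python
# Sketch: generating function of complete homogeneous symmetric functions,
#   sum_k h_k(x_1,...,x_n) t^k = prod_i 1/(1 - x_i t),
# used in the alternative derivation above.  Standard fact; illustration only.
from itertools import combinations_with_replacement
from sympy import symbols, prod, series, expand

n, K = 3, 4
t = symbols('t')
xs = symbols('x1:%d' % (n + 1))

def h(k, xs):
    """h_k = sum of all monomials of degree k (weakly increasing index choices)."""
    return sum(prod(c) for c in combinations_with_replacement(xs, k)) if k else 1

gen = prod(1 / (1 - x * t) for x in xs)
ser = series(gen, t, 0, K + 1).removeO()
for k in range(K + 1):
    assert expand(ser.coeff(t, k) - h(k, xs)) == 0
print("h_k coefficients match up to degree", K)
```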
[ proposition_multivariate_expansion]for any signature @xmath194 and any @xmath290 we have @xmath324}{n^\ell } p_{j,\ell , n}(x_i ) ( x_i-1)^{\ell+k - j } \right]_{i , j=1}^k.\end{gathered}\ ] ] we apply theorem [ theorem_multivariate_schur_1 ] and , noting that @xmath325 = \sum_{\ell=0}^j \binom{j}{\ell}\left(x\frac\partial{\partial x}\right)^\ell[f(x)]\left(x\frac\partial{\partial x}\right)^{j-\ell}[g(x)]\ ] ] for any @xmath326 and @xmath327 , we obtain @xmath328_{i , j=1}^k\\ = \dfrac { \det\left [ \sum_{\ell=0}^{j-1 } d_{i,1}^{\ell } [ s_{\lambda}(x_i;n,1 ) ] \binom{j-1}{\ell } \frac{(n - j)!}{(n-1)!}\frac{d_{i,1}^{j-\ell-1}(x_i-1)^{n-1}}{(x_i-1)^{n - k } } \right]_{i , j=1}^k}{\delta(x_1,\ldots , x_k)}. \qedhere\end{gathered}\ ] ] [ corollary_multiplicativity_for_u ] suppose that the sequence @xmath76 is such that @xmath329 uniformly on compact subsets of some region @xmath330 , then for any @xmath13 @xmath331 uniformly on compact subsets of @xmath332 . since @xmath333 is a polynomial , it is an analytic function . therefore , the uniform convergence implies that the limit @xmath334 is analytic and all derivatives of @xmath335 converge to the derivatives of @xmath334 . now suppose that all @xmath32 are distinct . then we can use proposition [ proposition_multivariate_expansion ] and get as @xmath25 @xmath336_{i , j=1}^k}{\delta(x_1,\ldots , x_k ) } \\=\frac { \det\bigg [ ( x_i-1)^{k - j } { s_{\lambda(n)}(x_i;n,1 ) } x_i^{j-1 } + o(1/n ) \bigg]_{i , j=1}^k}{\delta(x_1,\ldots , x_k ) } \\=\prod_{i=1}^k s_{\lambda(n)}(x_i;n,1 ) \frac { \det\bigg [ ( x_i-1)^{k - j } x_i^{j-1 } \bigg]_{i , j=1}^k + o(1/n)}{\delta(x_1,\ldots , x_k ) } \\=\prod_{i=1}^k s_{\lambda(n)}(x_i;n,1)\left(1 + \frac{o(1/n)}{\delta(x_1,\ldots , x_k)}\right),\end{gathered}\ ] ] where @xmath337 is uniform over compact subsets of @xmath332 . we conclude that @xmath338 uniformly on compact subsets of @xmath339 since the left - hand side of is analytic with only possible singularities at @xmath43 for all @xmath5 , the uniform convergence in also holds when some of @xmath32 coincide . [ mult_logarithm ] suppose that the sequence @xmath76 is such that @xmath340 uniformly on compact subsets of some region @xmath330 , in particular , there is a well defined branch of logarithm in @xmath341 for large enough @xmath5 . then for any @xmath13 @xmath342 uniformly on compact subsets of @xmath332 . notice that @xmath343,\ ] ] i.e. it is a polynomial in the derivatives of @xmath344 of degree @xmath345 and so @xmath346.\ ] ] thus , when @xmath347 exists , then @xmath348 converges and so does @xmath349}{n^\ell } p_{j,\ell , n}(x_i ) ( x_i-1)^{\ell+k - j } \right]_{i , j=1}^k } { \prod_{i=1}^k s_{\lambda(n)}(x_i;n,1 ) \delta(x_1,\ldots , x_k)}.\ ] ] now taking logarithms on both sides of , dividing by @xmath5 , factoring out @xmath350 and sending @xmath25 we get the statement . [ corollary_multiplicativity_for_gue ] suppose that for some number @xmath22 @xmath351 uniformly on compact subsets of domain @xmath352 as @xmath322 . then @xmath353 uniformly on compact subsets of @xmath354 . let @xmath355 . since @xmath356 are entire functions , @xmath357 is analytic on @xmath358 . 
notice that @xmath359 therefore @xmath360_{y=\sqrt{n}\ln x } \\= n^{\ell/2}\left[\sum_{r=0}^\ell \binom{l}{r}g_n^{(\ell - r)}(y)(-a)^rn^{r/2 } e^{-a\sqrt{n}y}\right]_{y=\sqrt{n}\ln x}\\=n^\ell \left[e^{-a\sqrt{n}y}g_n(y)\left(1+o(1/\sqrt{n})\right)\right]_{y=\sqrt{n}\ln x},\end{gathered}\ ] ] since the derivatives of @xmath356 are uniformly bounded on compact subsets of @xmath358 as @xmath25 . further , @xmath361 and @xmath362 with @xmath363 uniformly bounded on compact sets . thus , setting @xmath364 in proposition [ proposition_multivariate_expansion ] , we get ( for distinct @xmath27 ) @xmath365_{i , j=1}^k\\ = g_n(y_1)\cdots g_n(y_k ) \dfrac{\det \left [ ( x_i-1)^{k - j}\left(1+o(1/\sqrt{n})\right)\right]_{i , j=1}^k}{\delta(x_1,\ldots , x_k)}\\= g_n(y_1)\cdots g_n(y_k)\left(1+o(1/\sqrt{n})\right).\end{gathered}\ ] ] since the convergence is uniform , it also holds without the assumption that @xmath27 are distinct . in this section we specialize the formulas of section [ section_general ] to the characters @xmath366 of the symplectic group . for @xmath367 let @xmath368_{i , j=1}^n}{\delta(x_1,\ldots , x_n)}.\ ] ] clearly , for @xmath216 we have @xmath369 where @xmath370 is a character of the symplectic group @xmath225 . [ prop_simplectic_satisfies ] family @xmath371 forms a class of determinantal functions with @xmath372 @xmath373 @xmath374(x)=\frac{f(qx)+f(q^{-1}x)}{(q-1)^2}.\ ] ] immediately follows from the definitions and identity @xmath375 let us now specialize proposition [ proposition_general_multivar ] . we have that @xmath376 [ theorem_simplectic_multi_q ] for any signature @xmath216 and any @xmath290 we have @xmath377_{i , j=1}^k\prod_{i=1}^k { \mathfrak{x}}_{\lambda}(x_i;n , q)\frac{\delta_s(x_i , q,\ldots , q^{n-1})}{\delta_s(q,\ldots , q^n ) } \ ] ] where @xmath378 is the difference operator @xmath379 acting on variable @xmath32 . * note that in proposition [ prop_simplectic_satisfies ] the difference operator differed by the shift @xmath380 . however , this does not matter , as in the end we use the operator @xmath381 . proposition [ proposition_integral_general ] after algebraic manipulations to compute the coefficient in front of the integral yields . [ theorem_symplectic_integral_q ] for any signature @xmath216 and any @xmath382 we have @xmath383_q ! } { ( xq;q)_{n-1 } ( x^{-1}q;q)_{n-1 } ( x - x^{-1})[n]_q}\\ \frac{1}{2\pi { \mathbf i } } \oint \frac { ( x^{z+1}-x^{-z-1})}{\prod\limits_{i=1}^n \left(q^{z+1}+q^{-z-1 } - q^{-\lambda_i+n - i-1 } -q^{\lambda_i + n - i+1 } \right ) } dz\end{gathered}\ ] ] with contour @xmath45 enclosing the singularities of the integrand at @xmath384 . theorem [ theorem_symplectic_integral_q ] looks very similar to the integral representation for schur polynomials , this is summarized in the following statement . [ symplectic_via_schur ] let @xmath385 , we have @xmath386 where @xmath387 is a signature of size @xmath29 given by @xmath388 for @xmath284 and @xmath389 for @xmath390 . first notice that for any @xmath391 , we have @xmath392 where @xmath45 encloses the singularities of the integrand at @xmath384 and @xmath393 encloses all the singularities . indeed , to prove this just write both integrals as the sums or residues . further , @xmath394 therefore , the integrand in transforms into @xmath395 where @xmath396 . the contour integral of is readily identified with that of theorem [ theorem_integral_representation_schur_q ] for @xmath397 . it remains only to match the prefactors . 
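The difference operator T f(x) = (f(qx) + f(q^{-1}x))/(q-1)^2 written above acts diagonally on the combinations x^m - x^{-m} that build the symplectic determinant; the eigenvalue (q^m + q^{-m})/(q-1)^2 appearing in the sketch below is my reading of the standard computation (the displayed eigenvalues are hidden behind placeholders), so treat this as an illustrative check of the mechanism rather than a quotation.

```python
# Sketch: the difference operator  T f(x) = (f(qx) + f(x/q)) / (q-1)^2  quoted
# above acts diagonally on x^m - x^(-m) (the building blocks of the symplectic
# determinant).  The eigenvalue (q^m + q^(-m)) / (q-1)^2 written here is my own
# reading of the standard computation, not a quotation.
from sympy import symbols, simplify

x, q = symbols('x q', positive=True)

def T(f):
    return (f.subs(x, q * x) + f.subs(x, x / q)) / (q - 1) ** 2

for m in range(1, 6):
    f = x ** m - x ** (-m)
    eigenvalue = (q ** m + q ** (-m)) / (q - 1) ** 2
    assert simplify(T(f) - eigenvalue * f) == 0
print("x^m - x^(-m) are eigenfunctions of T for m = 1..5")
```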
next , sending @xmath202 we arrive at the following 3 statements . define @xmath398 [ theorem_symp_multivar_1 ] for any signature @xmath216 and any @xmath290 we have @xmath399_{i , j=1}^k \prod_{i=1}^k { \mathfrak{x}}_{\lambda}(x_i;n,1)\frac{(x_i - x_i^{-1 } ) ( 2-x_i - x_i^{-1})^{n-1 } } { 2(2n-1)!}.\end{gathered}\ ] ] * remark . * the statement of theorem [ theorem_symp_multivar_1 ] was also proved by de gier and ponsaing , see @xcite . [ theorem_symplectic_integral_1 ] for any signature @xmath216 we have @xmath400 where the contour includes only the poles at @xmath401 for @xmath284 . [ proposition_schur_simplectic_1 ] for any signature @xmath216 we have @xmath402 where @xmath387 is a signature of size @xmath29 given by @xmath388 for @xmath284 and @xmath389 for @xmath390 . * remark . * we believe that the statement of proposition [ proposition_schur_simplectic_1 ] should be known , but we are unable to locate it in the literature . analogously to the treatment of the multivariate schur case we can also derive the same statements as in proposition [ proposition_multivariate_expansion ] and corollaries [ corollary_multiplicativity_for_u ] , [ mult_logarithm ] , [ corollary_multiplicativity_for_gue ] for the multivariate normalized symplectic characters . here we specialize the formulas of section [ section_general ] to the multivariate jacobi polynomials . we do not present the formula for the @xmath0version of , although it can be obtained in a similar way . recall that for @xmath216 @xmath403 we produce the formulas in terms of polynomials @xmath404 , @xmath367 and , thus , introduce their normalizations as @xmath405 these normalized polynomials are related to the normalized jacobi via @xmath406 where as usual @xmath407 for @xmath284 . the polynomials @xmath408 , @xmath367 are a class of determinantal functions with @xmath409 @xmath410 @xmath411 we have ( see e.g. ( * ? ? ? * section 2c ) and references therein ) @xmath412 and also ( see e.g. @xcite , @xcite ) @xmath413 p_m(x;a , b),\ ] ] now the statement follows from the definition of polynomials @xmath414 . specializing proposition [ proposition_general_multivar ] , using the fact that for @xmath415 we have @xmath416 and @xmath417 , we obtain the following . [ theorem_jacobi_multi ] for any @xmath216 and any @xmath290 we have @xmath418_{i , j=1}^k } { 2^{\binom{k}{2 } } \delta(z_1+z_1^{-1},\ldots , z_k+z_k^{-1 } ) } \prod_{i=1}^k j_{\lambda}(z_i;n , a , b)\frac{(z_i+z_i^{-1}-2)^{n-1 } \gamma(n+a+b)}{\gamma(n+a)\gamma(2n-1+a+b)}\end{gathered}\ ] ] where @xmath419 is the differential operator @xmath420 next , we specialize proposition [ proposition_integral_general ] to the case of multivariate jacobi polynomials . note that thanks to the symmetry under @xmath421 of the integrand we can extend the contour @xmath45 to include all the poles . [ theorem_jacobi_singlevar ] for any @xmath216 we have @xmath422 where the contour includes the poles of the integrand at @xmath423 and @xmath111 for @xmath284 . here we derive the asymptotics for the single - variable normalized schur functions @xmath424 . in what follows @xmath425 and @xmath426 mean uniform estimates , not depending on any parameters and @xmath427 stays for a positive constant which might be different from line to line . suppose that we are given a sequence of signatures @xmath76 ( or , even , more generally , @xmath428 with @xmath429 ) . 
we are going to study the asymptotic behavior of @xmath333 as @xmath25 under the assumption that there exists a function @xmath121 for which as @xmath25 the vector @xmath430 converges to @xmath431 in a certain sense which is explained below . let @xmath432 denote the corresponding norms of the difference of the vectors @xmath433 and @xmath434 : @xmath435 in order to state our results we introduce for any @xmath436 the equation @xmath437 note that a solution to can be interpreted as an _ inverse hilbert transform_. we also introduce the function @xmath438 @xmath439 observe that can be rewritten as @xmath440 . [ proposition_convergence_mildest ] for @xmath441 , suppose that @xmath121 is piecewise - continuous , @xmath442 is bounded , @xmath443 tends to zero as @xmath25 , and @xmath444 is the ( unique ) real root of . let further @xmath445 be such that @xmath446 is outside the interval @xmath447 $ ] for all @xmath5 large enough . then @xmath448 * remark 1 . * note that piecewise - continuity of @xmath121 is a reasonable assumption since @xmath326 is monotonous . * remark 2 . * a somehow similar statement was proven by guionnet and mada , see ( * ? ? ? * theorem 1.2 ) . when @xmath121 is smooth , proposition [ proposition_convergence_mildest ] can be further refined . for @xmath449 denote @xmath450 [ proposition_convergence_strongest ] let @xmath445 be such that @xmath444 ( which is the ( unique ) real root of ) is outside the interval @xmath447 $ ] for all large enough @xmath5 . suppose that for a twice - differentiable function @xmath121 @xmath451 uniformly on an open @xmath452 set in @xmath453 , containing @xmath446 . assume also that @xmath454 and @xmath455 . then as @xmath25 @xmath456 the remainder @xmath457 is uniform over @xmath150 belonging to compact subsets of @xmath458 and such that @xmath459 . * remark . * if the complete asymptotic expansion of @xmath460 as @xmath25 is known , then , with some further work , we can obtain the expansion of @xmath461 up to arbitrary precision . in such expansion , @xmath457 in proposition [ proposition_convergence_strongest ] is replaced by a power series in @xmath462 with coefficients being the analytic functions of @xmath150 . the general procedure is as follows : we use the expansion of @xmath460 ( instead of only the first term ) everywhere in the below proof and further obtain the asymptotic expansion for each term independently through the steepest descent method . this level of details is enough for our applications and we will not discuss it any further ; all the technical details can be found in any of the classical treatments of the steepest descent method , see e.g. @xcite , @xcite . [ prop_convergence_gue_case ] suppose that @xmath121 is piecewise - differentiable , @xmath463 ( i.e. it is bounded ) , and @xmath464 goes to @xmath43 as @xmath25 . then for any fixed @xmath465 @xmath466 as @xmath467 , where @xmath125 moreover , the remainder @xmath457 is uniform over @xmath468 belonging to compact subsets of @xmath469 . we prove the above three propositions simultaneously . first observe that we can assume without loss of generality that @xmath470 . indeed , when we add some positive integer @xmath341 to all coordinates of @xmath101 , the function @xmath471 is multiplied by @xmath472 , but the right sides in propositions [ proposition_convergence_mildest ] , [ proposition_convergence_strongest ] , [ prop_convergence_gue_case ] also change accordingly . 
we start investigating the asymptotic behavior of the integral in the right side of the integral representation of theorem [ theorem_integral_representation_schur_1 ] @xmath473 changing the the variables @xmath474 transforms into @xmath475 from now on we study the integral @xmath476 where the contour @xmath477 encloses all the poles of the integrand . write the integrand as @xmath478 unless otherwise stated , we choose the principal branch of logarithm with a cut along the negative real axis . observe that @xmath479 under the conditions of proposition [ proposition_convergence_mildest ] , and basing on the approximation @xmath480 we have @xmath481 where @xmath22 is the smallest interval in @xmath482 containing the four points @xmath483 } ( f(t)-1+t))$ ] , @xmath484 } ( f(t)-1+t))$ ] , @xmath485 , @xmath486 . under the conditions of proposition [ prop_convergence_gue_case ] we have @xmath487 using a usual second - order approximation of the integral ( trapezoid formula ) we can write @xmath488 and depending on the smoothness of @xmath326 we have the following estimates on the remainder @xmath248 : 1 . if @xmath326 is piecewise - continuous , then @xmath489 2 . if @xmath326 is piecewise - differentiable , then @xmath490 3 . if @xmath326 is twice differentiable ( on the whole interval ) , then @xmath491 the integral transforms into the form @xmath492 here @xmath493 has subexponential growth as @xmath25 . note that @xmath494 is a continuous function in @xmath495 , while @xmath496 has discontinuities along the real axis , both these functions are harmonic outside the real axis . the asymptotic analysis of the integrals of the kind is usually performed using the so - called steepest descent method ( see e.g. @xcite , @xcite . ) . we will deform the contour to pass through the critical point of @xmath497 . this point satisfies the equation @xmath498 in general , equation ( which is the same as ) may have several roots and one has to be careful to choose the needed one . in the present section @xmath150 is a real number which simplifies the matter . if @xmath499 then has a unique real root @xmath500 , moreover @xmath501 . indeed the integral in is a monotonous function of @xmath502 changing from @xmath503 down to zero . in the same way if @xmath504 then there is a unique real root @xmath500 and it satisfies @xmath505 . moreover , @xmath506 as @xmath507 . in what follows , without loss of generality , we assume that @xmath499 . next , we want to prove that one can deform the contour @xmath477 into @xmath508 which passes through @xmath500 in such a way that @xmath509 has maximum at @xmath500 . when @xmath150 is real , the vertical line is the right contour . indeed , ( for @xmath510 ) the integrand decays like @xmath511 as @xmath512 on the vertical line and exponentially decays as @xmath513 , therefore the integral over this vertical line is well defined and the conditions on @xmath446 guarantee , that the value of the integral does not change when we deform the contour . moreover , if @xmath514 , @xmath515 , @xmath516 then immediately from the definitions follows that @xmath517 now the integrand is exponentially small ( compared to the value at @xmath446 ) everywhere on the contour @xmath508 outside arbitrary neighborhood of @xmath446 . inside the neighborhood we can do the taylor expansion for @xmath497 . 
denoting @xmath518 and @xmath519 , the integral turns into @xmath520 where @xmath521 } { \mathcal{f}}'''(w;f)\ ] ] and @xmath522 when we approximate the integral over vertical line by the integral over the @xmath523-neighborhood ( reduction of to the first line in ) the relative error can be bounded as @xmath524 next , we estimate the relative error in the approximation in ( i.e. the sign @xmath525 in ) . suppose that @xmath526 and divide the integration segment into a smaller subsegment @xmath527{\sqrt{n}/|\tilde\delta|}$ ] and its complement . when we omit the @xmath528 term in the exponent we get the relative error at most @xmath529 when integrating over the smaller subsegment ( which comes from the factor @xmath530 itself ) and @xmath531 when integrating over its complement ( which comes from the estimate of the integral on this complement ) . when we replace the integral over @xmath532 $ ] by the integral over @xmath533 in we get the error @xmath534 finally , there is an error of @xmath535}|r_n(w)-r_n(w_0)|\ ] ] coming from the factor @xmath536 . summing up , the total relative error in the approximation in is at most constant times @xmath537}|r_n(w)-r_n(w_0)|.\end{gathered}\ ] ] combining and we get @xmath538 using stirling s formula we arrive at @xmath539 with the relative error in being the sum of , and @xmath337 coming from stirling s approximation , and @xmath523 satisfying @xmath526 . now we are ready to prove the three statements describing the asymptotic behavior of normalized schur polynomials . use and note that after taking logarithms and dividing by @xmath5 the relative error in vanishes . again this follows from . it remains to check that the error term in is negligible . indeed , all the derivatives of @xmath540 , as well as @xmath541 , @xmath542 , @xmath543 are bounded in this limit regime . thus , choosing @xmath544 we conclude that all the error terms vanish . the equation for @xmath446 reads @xmath545 clearly , as @xmath25 we have @xmath546 . thus , we can write @xmath547 denote @xmath548 and rewrite as @xmath549 if follows that as @xmath25 we have @xmath550 next , let us show that the error in is negligible . for that , choose @xmath523 in to be @xmath551 . note that @xmath552 is of order @xmath553 , and @xmath554 ( and , thus , also @xmath542 ) is of order @xmath555 on the integration contour and @xmath541 is of order @xmath462 . the inequality @xmath526 is satisfied . the term coming from is bounded by @xmath556 and is negligible . as for the first term in it is negligible , the second one is bounded by @xmath557 and negligible , the third one is bounded by @xmath558 which is again negligible . turning to the fourth term , the definition of @xmath536 implies that both @xmath536 and @xmath559 can be approximated as @xmath560 as @xmath25 and we are done . note that @xmath561 as @xmath25 . now yields that @xmath562 as @xmath25 we have @xmath563 and using @xmath564 thus , @xmath565 to finish the proof observe that @xmath566 thus , transforms into @xmath567 the propositions of the previous section deal with @xmath568 when @xmath150 is _ real_. in this section we show that under mild assumptions the results extend to complex @xmath150s . in the notations of the previous section , suppose that we are given a weakly - decreasing non - negative function @xmath121 , the complex function @xmath438 is defined through , @xmath150 is an arbitrary complex number and @xmath446 is a critical point of @xmath497 , i.e. a solution of equation . 
we call a simple piecewise - smooth contour @xmath569 in @xmath453 a _ steepest descent contour _ for the above data if the following conditions are satisfied . 1 . @xmath570 , 2 . the vector @xmath571 is tangent to @xmath572 at point @xmath43 , 3 . @xmath573 has a global maximum at @xmath574 , 4 . the following integral is finite @xmath575 * remark . * often the steepest descent contour can be found as a level line @xmath576 . * example 1 . * suppose that @xmath577 . then @xmath578 and @xmath579 and for any @xmath150 such that @xmath580 , the critical point is @xmath581 suppose that @xmath582 is not a negative real number ; this implies that @xmath446 does not belong to the segment @xmath583 $ ] . figure [ figure_contours ] sketches the level lines @xmath584 . to see why the picture looks like this , observe that there are 4 level lines going out of @xmath446 . level lines can not cross , because @xmath585 has no other critical points except for @xmath446 . when @xmath586 , we have @xmath587 , therefore level lines intersect a circle of large radius @xmath588 in 2 points and the picture should have two infinite branches which are close to the rays of the line @xmath589 and one loop . since the only points where @xmath590 is not analytic form the segment @xmath583 $ ] , the loop should enclose some points of this segment ( the real part of an analytic function without critical points can not have closed level lines ) . now the plane is divided into three regions @xmath591 and @xmath45 as shown in figure [ figure_contours ] . @xmath592 in @xmath22 and @xmath45 , while @xmath593 ; this can be seen by analyzing @xmath594 for very large @xmath595 . there are two smooth curves @xmath596 passing through @xmath446 . one of them has a tangent vector parallel to @xmath597 and another one parallel to @xmath598 . take the former one ; then it should lie inside the region @xmath167 . in the neighborhood of @xmath446 this is our steepest descent contour . the only property which still might not hold is number @xmath599 . but in this case , we can modify the contour outside the neighborhood of @xmath446 , so that @xmath594 rapidly decays along it . again , this is always possible because for @xmath586 , we have @xmath600 . * example 2 . * more generally , let @xmath601 , then @xmath602 and @xmath603 for any @xmath150 such that @xmath580 , the critical point is @xmath604 note that if we set @xmath605 , then @xmath606 which is a constant plus @xmath607 from example 1 . therefore , the linear transformation of the steepest descent contour of example 1 gives a steepest descent contour for example 2 .
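As a self-contained toy illustration of the steepest descent heuristic used throughout this section (and unrelated to the specific integrand of the proofs), one can compare the exact coefficient (1/2 pi i) oint e^{Nw} w^{-N-1} dw = N^N/N! with its saddle-point approximation e^N/sqrt(2 pi N), coming from the critical point w = 1 of w - ln w:

```python
# Toy illustration of the steepest descent method used in this section:
# the exact coefficient (1/(2*pi*i)) \oint e^{Nw} w^{-N-1} dw = N^N / N!
# versus the saddle-point (Gaussian) approximation e^N / sqrt(2*pi*N).
# This is a generic textbook example, not the integrand from the proofs above.
import math

for N in (5, 20, 80, 320):
    exact = N ** N / math.factorial(N)
    saddle = math.exp(N) / math.sqrt(2 * math.pi * N)
    print(N, saddle / exact)      # the ratio tends to 1 as N grows, error O(1/N)
```

The ratio approaches 1 at rate of order 1/N, which is the same Gaussian-approximation mechanism exploited in the error estimates above.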
observe that the integration in now goes not over the segment @xmath608 $ ] but over the neighborhood of @xmath446 on the curve @xmath609 . this means that in the relative error calculation a new term appears , which is a difference of the integral @xmath610 over the interval @xmath611 $ ] of the real line and over the part of the rescaled curve @xmath612 inside the circle of radius @xmath613 around the origin . the difference of the two integrals equals the integral of @xmath614 over the lines connecting their endpoints . but since @xmath615 is tangent to @xmath572 at @xmath43 , it follows that for small @xmath523 the error is the integral of @xmath614 over the segment joining @xmath616 and @xmath617 plus the integral of @xmath614 joining @xmath618 and @xmath619 with @xmath620 and similarly for @xmath621 . clearly , these integrals exponentially decay as @xmath25 and we are done . it turns out that in the context of proposition [ prop_convergence_gue_case ] the required contour always exists . [ prop_convergence_gue_extended ] proposition [ prop_convergence_gue_case ] is valid for any @xmath622 . recall that in the context of proposition [ prop_convergence_gue_case ] @xmath623 and goes to @xmath43 as @xmath25 , while @xmath624 goes to infinity . in what follows without loss of generality we assume that @xmath468 is not an element of @xmath625 . ( in order to work with @xmath626 we should choose other branches of logarithms in all arguments . ) let us construct the right contour passing through the point @xmath446 . choose a positive number @xmath627 such that @xmath628 for all @xmath629 . set @xmath630 to be the minimal strip ( which is a region between two parallel lines ) in the complex plane parallel to the vector @xmath631 and containing the disk of radius @xmath627 around the origin . since @xmath446 is a saddle point of @xmath497 , in the neighborhood of @xmath446 there are two smooth curves @xmath632 intersecting at @xmath446 . along one of them @xmath633 has a maximum at @xmath446 , along the other it has a minimum ; we need the former one . define the contour @xmath572 to be the smooth curve @xmath634 until it leaves @xmath630 and the curve ( straight line ) @xmath589 outside @xmath630 . let us prove that @xmath633 has no local extremum on @xmath572 except for @xmath446 , which would imply that @xmath446 is its global maximum on @xmath572 . first note that outside @xmath630 we have @xmath635 with the first term here being a constant and the second being monotonous along the contour . therefore , outside @xmath630 we can not have a local extremum . next , a straightforward computation shows that if @xmath5 is large enough , then one can always choose two constants @xmath636 and @xmath637 , independent of @xmath5 , such that @xmath638 for @xmath495 in @xmath630 satisfying @xmath639 or @xmath640 . it follows that if @xmath633 had a local extremum , then such an extremum would exist at some point @xmath641 satisfying @xmath642 . but since @xmath643 is constant on the contour inside @xmath630 , we conclude that @xmath644 is also a critical point of @xmath497 . however , there are no critical points other than @xmath446 in this region . now we use the contour @xmath572 and repeat the argument of proposition [ prop_convergence_gue_case ] using it . note that the deformation of the original contour of into @xmath572 does not change the value of the integral . the only part of the proof of proposition [ prop_convergence_gue_case ] which we should modify is the estimate for the relative error in .
here we closely follow the argument of proposition [ proposition_convergence_extended ] . the only change is that the bound on @xmath645 and @xmath621 is now based on the following observation : the straight line defined by @xmath646 ( which is the main part of the contour @xmath572 ) is parallel to the vector @xmath647 . on the other hand , @xmath648 * remark . * in the proof of proposition [ prop_convergence_gue_extended ] we have shown , in particular , that the steepest descent contour exists and , thus , asymptotic theorem is valid for all complex @xmath150 , which are close enough to @xmath41 . this is somehow similar to the results of guionnet and mada , cf . * theorem 1.4 ) . consider a tiling of a domain drawn on the regular triangular lattice of the kind shown at figure [ fig_polyg_domain ] with rhombi of 3 types which are usually called _ lozenges_. the configuration of the domain is encoded by the number @xmath5 which is its width and @xmath5 integers @xmath110 which are the positions of _ horizontal lozenges _ sticking out of the right boundary . if we write @xmath111 , then @xmath28 is a signature of size @xmath5 , see left panel of figure [ fig_polyg_domain ] . due to combinatorial constraints the tilings of such domain are in correspondence with tilings of a certain polygonal domain , as shown on the right panel of figure [ fig_polyg_domain ] . let @xmath112 denote the domain encoded by @xmath194 and define @xmath114 to be a _ uniformly random _ lozenge tiling of @xmath112 . we are interested in the asymptotic properties of @xmath114 as @xmath25 and @xmath28 changes in a certain regular way . given @xmath114 let @xmath115 be horizontal lozenges at @xmath13th vertical line from the left . ( horizontal lozenges are shown in blue in the left panel of figure [ fig_polyg_domain ] . ) we again set @xmath116 and denote the resulting random signature @xmath117 of size @xmath13 via @xmath118 . recall that the gue random matrix ensemble is a probability measure on the set of @xmath36 hermitian matrix with density proportional to @xmath649 . let @xmath119 denote the distribution of @xmath13 ( ordered ) eigenvalues of such random matrix . in this section we prove the following theorem . [ theorem_gue ] let @xmath76 , @xmath120 be a sequence of signatures . suppose that there exist a non - constant piecewise - differentiable weakly decreasing function @xmath121 such that @xmath122 as @xmath25 and also @xmath123 . then for every @xmath13 as @xmath25 we have @xmath124 in the sense of weak convergence , where @xmath125 * remark . * for any non - constant weakly decreasing @xmath121 we have @xmath650 . [ cor_minors ] under the same assumptions as in theorem [ theorem_gue ] the ( rescaled ) joint distribution of @xmath126 horizontal lozenges on the left @xmath13 lines weakly converges to the joint distribution of the eigenvalues of the @xmath13 top - left corners of a @xmath36 matrix from gue ensemble . indeed , conditionally on @xmath651 the distribution of the remaining @xmath652 lozenges is uniform subject to interlacing conditions and the same property holds for the eigenvalues of the corners of gue random matrix , see @xcite for more details . let us start the proof of theorem [ theorem_gue ] . [ prop_distribution_of_lozenges ] the distribution of @xmath118 is given by : @xmath653 where @xmath654 is the skew schur polynomial . let @xmath655 and @xmath656 . 
we say that @xmath117 and @xmath80 interlace and write @xmath657 , if @xmath658 we also agree that @xmath659 consists of a single point , the _ empty _ signature @xmath660 , and @xmath661 for all @xmath662 . for @xmath663 and @xmath664 with @xmath665 let @xmath666 denote the number of sequences @xmath667 such that @xmath668 , @xmath669 and @xmath670 . note that through the identification of each @xmath671 with a configuration of horizontal lozenges on a vertical line , each such sequence corresponds to a lozenge tiling of a certain domain encoded by @xmath117 and @xmath80 , so that , in particular , the tiling on the left panel of figure [ fig_polyg_domain ] corresponds to the sequence @xmath672 it follows that @xmath673 on the other hand , the _ combinatorial formula _ for ( skew ) schur polynomials ( see e.g. ( * ? ? ? * chapter i , section 5 ) ) yields that for @xmath663 and @xmath664 with @xmath665 we have @xmath674 introduce the multivariate normalized bessel function @xmath675 , @xmath676 , @xmath677 through @xmath678 the functions @xmath675 appear naturally as a result of the computation of the harish - chandra - itzykson - zuber matrix integral . their relation to schur polynomials is explained in the following statement . [ prop_bessel_vs_schur ] for @xmath679 we have @xmath680 immediately follows from the definition of schur polynomials and the evaluation of @xmath681 given in . we study @xmath118 through its moment generating functions @xmath682 , where @xmath676 , @xmath683 as above and @xmath684 stands for the expectation . note that for @xmath34 the function @xmath682 is nothing else but the usual one - dimensional moment generating function @xmath685 . [ prop_tiling_gen_function ] we have @xmath686 let @xmath687 and @xmath688 and let @xmath689 , then ( see e.g. ( * ? ? ? * chapter i , section 5 ) ) @xmath690 therefore , propositions [ prop_distribution_of_lozenges ] and [ prop_bessel_vs_schur ] yield @xmath691 the counterpart of proposition [ prop_tiling_gen_function ] for the @xmath119 distribution is the following . [ prop_gue_gen_function ] we have @xmath692 let @xmath693 be a ( fixed ) diagonal @xmath36 matrix with eigenvalues @xmath14 and let @xmath22 be a random @xmath36 hermitian matrix from the @xmath694 ensemble . let us compute @xmath695 on one hand , a standard integral evaluation shows that is equal to the right side of . on the other hand , we can rewrite as @xmath696 where @xmath697 is the probability distribution of @xmath119 , @xmath698 is the normalized haar measure on the unitary group @xmath699 and @xmath700 is a hermitian matrix ( e.g. diagonal ) with eigenvalues @xmath701 . the evaluation of the integral over the unitary group in is well - known , see @xcite , @xcite , @xcite , @xcite , and the answer is precisely @xmath702 . thus , transforms into the left side of . in what follows we need the following technical proposition . [ prop_convergence_of_gen_func ] let @xmath703 , @xmath120 be a sequence of @xmath13-dimensional random variables . suppose that there exists a random variable @xmath704 such that for every @xmath676 in a neighborhood of @xmath705 we have @xmath706 then @xmath707 in the sense of weak convergence of random variables . for @xmath34 this is a classical statement , see e.g. ( * ? ? ? * section 30 ) . for general @xmath13 this statement is , perhaps , less known , but it can be proven by the same standard techniques as for @xmath34 .
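Before moving on, the correspondence described above between lozenge tilings and interlacing sequences can be tested on small examples: the number of sequences mu(1) interlacing mu(2) interlacing ... interlacing mu(N) = lambda (equivalently, Gelfand-Tsetlin patterns with top row lambda, equivalently tilings of the domain encoded by lambda) equals s_lambda(1,...,1). The sketch below enumerates these sequences directly and compares with the standard Weyl dimension product; both ingredients are standard facts stated here for illustration, not quotations of the hidden displays.

```python
# Sketch: count interlacing sequences (Gelfand-Tsetlin patterns) with top row
# lambda -- i.e. lozenge tilings of the domain encoded by lambda -- and compare
# with s_lambda(1,...,1) given by Weyl's dimension product formula.
from fractions import Fraction
from itertools import product
from math import prod

def count_gt(top):
    """Number of sequences mu(1) < mu(2) < ... < mu(N) = top with interlacing rows."""
    def rows_below(row):
        # all rows of length k-1 interlacing with the given row of length k
        ranges = [range(row[i + 1], row[i] + 1) for i in range(len(row) - 1)]
        return list(product(*ranges))   # interlacing already forces weak decrease
    if len(top) == 1:
        return 1
    return sum(count_gt(r) for r in rows_below(top))

def weyl_dim(lam):
    N = len(lam)
    return prod(Fraction(lam[i] - lam[j] + j - i, j - i)
                for i in range(N) for j in range(i + 1, N))

for lam in [(2, 1, 0), (3, 1, 0), (2, 2, 1, 0), (3, 2, 1, 0)]:
    print(lam, count_gt(lam), weyl_dim(lam))   # the two counts agree
```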
next , note that the definition implies the following property for the moment generating function of a @xmath13-dimensional random variable @xmath708 : @xmath709 also observe that for any non - constant weakly decreasing @xmath121 we have @xmath650 . taking into account propositions [ prop_tiling_gen_function ] , [ prop_gue_gen_function ] , [ prop_convergence_of_gen_func ] , and the observation that @xmath710 tends to @xmath41 when @xmath711 , theorem [ theorem_gue ] now reduces to the following statement . [ gue_multivar_asymptotics ] under the assumptions of theorem [ theorem_gue ] , for any @xmath13 reals @xmath712 we have : @xmath713 for @xmath34 this is precisely the statement of proposition [ prop_convergence_gue_case ] . for general @xmath13 we combine proposition [ prop_convergence_gue_extended ] and corollary [ corollary_multiplicativity_for_gue ] . recall that an _ alternating sign matrix _ of size @xmath5 is a @xmath24 matrix filled with @xmath43s , @xmath41s and @xmath714 in such a way that the sum along every row and column is @xmath41 and , moreover , along each row and each column the @xmath41s and @xmath131s alternate , possibly separated by an arbitrary number of @xmath43s . alternating sign matrices are in bijection with configurations of the six - vertex model with domain - wall boundary conditions . the configurations of the @xmath715vertex model are assignments of one of the 6 types shown in figure [ figure_six ] to the vertices of an @xmath24 grid in such a way that arrows along each edge joining two adjacent vertices point the same direction ; arrows are pointing inwards along the vertical boundary and arrows are pointing outwards along the horizontal boundary . in order to get an asm we replace each vertex with @xmath43 , @xmath41 or @xmath135 according to its type , as shown in figure [ figure_six ] , see e.g. @xcite for more details . figure [ fig_asm ] in the introduction gives one example of an asm and the corresponding configuration of the @xmath715vertex model . let @xmath716 denote the set of all alternating sign matrices of size @xmath5 or , equivalently , all configurations of the six - vertex model with domain wall boundary conditions . equip @xmath716 with the _ uniform _ probability measure and let @xmath717 be a random element of @xmath716 . we are going to study the asymptotic properties of @xmath717 as @xmath25 . for @xmath718 let @xmath719 , @xmath720 , @xmath721 denote the number of vertices in the horizontal line @xmath238 of types @xmath222 , @xmath223 and @xmath722 , respectively . likewise , let @xmath723 , @xmath724 and @xmath725 be the same quantities in the vertical line @xmath345 . also let @xmath726 , @xmath727 and @xmath728 be @xmath729 functions equal to the number of vertices of types @xmath222 , @xmath223 and @xmath722 , respectively , at the intersection of the vertical line @xmath345 and the horizontal line @xmath238 . to simplify the notation we view @xmath730 , @xmath731 and @xmath732 as random variables and omit their dependence on @xmath733 . [ theorem_asm ] for any fixed @xmath345 the random variable @xmath734 weakly converges to the normal random variable @xmath138 . the same is true for @xmath735 , @xmath736 and @xmath737 . moreover , the joint distribution of any collection of such variables converges to the distribution of independent normal random variables @xmath138 . the rest of this section is devoted to the proof of theorem [ theorem_asm ] . the 6 types of vertices in the six - vertex model are divided into 3 groups , as shown in figure [ figure_six ] .
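Before introducing the weights, here is a direct enumeration of alternating sign matrices for small n, based on the equivalent description that every partial row and column sum lies in {0,1} and every full row and column sum equals 1 (an elementary reformulation of the alternation condition recalled above). The expected counts 1, 2, 7, 42, 429 are the well-known ASM numbers, quoted for comparison only and not taken from the text.

```python
# Sketch: direct enumeration of alternating sign matrices of small size, using
# the equivalent description that every partial row/column sum lies in {0,1}
# and every full row/column sum equals 1.  Expected output: [1, 2, 7, 42, 429]
# (the well-known ASM counts, stated here only for comparison).
from itertools import product

def valid_rows(n):
    rows = []
    for row in product((-1, 0, 1), repeat=n):
        partial, ok = 0, True
        for e in row:
            partial += e
            if partial not in (0, 1):
                ok = False
                break
        if ok and partial == 1:
            rows.append(row)
    return rows

def count_asm(n):
    rows = valid_rows(n)

    def extend(col_sums, depth):
        if depth == n:
            return int(all(s == 1 for s in col_sums))
        total = 0
        for row in rows:
            new = tuple(s + e for s, e in zip(col_sums, row))
            if all(s in (0, 1) for s in new):
                total += extend(new, depth + 1)
        return total

    return extend((0,) * n, 0)

print([count_asm(n) for n in range(1, 6)])
```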
define a weight depending on the position @xmath738 ( @xmath238 is the vertical coordinate ) of the vertex and its type as follows : @xmath739 where @xmath740 , @xmath195 are parameters and from now and till the end of the section we set @xmath741 ( notice that this implies @xmath742 . ) let the weight @xmath743 of a configuration be equal to the product of weights of vertices . the partition function of the model can be explicitly evaluated in terms of schur polynomials . we have [ prop_partition_function_of_six_vertex ] @xmath744 where @xmath745 . see @xcite , @xcite , @xcite . the following proposition is a straightforward corollary of proposition [ prop_partition_function_of_six_vertex ] . [ prop_observable_6_vertex ] fix any @xmath133 distinct vertical lines @xmath746 and @xmath259 distinct horizontal lines @xmath747 and any set of complex numbers @xmath748 , @xmath749 . we have @xmath750\\ = \left(\prod_{k=1}^n u_{k}^{-1 } \right ) \frac{s_{\lambda(n)}(u_{1},\dots , u_{n},1^{2n - n})}{s_{\lambda(n)}(1^{2n } ) } , \end{gathered}\ ] ] @xmath751\\=\left(\prod_{\ell=1}^n v_{\ell}^{-1}\right ) \frac{s_{\lambda(n)}(v_{1},\dots , v_{m},1^{2n - m})}{s_{\lambda(n)}(1^{2n})}\end{gathered}\ ] ] and , more generally @xmath752 \\ \times \prod_{\ell=1}^m \left[\left(\frac{q^{-1}-q v_{\ell}^2 } { q^{-1}-q}\right)^{\widehat a_{j_\ell } } \left(\frac{q^{-1 } v_{\ell}^2 -q } { q^{-1}-q}\right)^{\widehat b_{j_\ell } } \left(v_{\ell}\right)^{\widehat c_{j_\ell}}\right ] \\ \times \prod_{k=1}^n\prod_{\ell=1}^m \left [ \left(\frac{(q^{-1 } u_{k}^2-q v_\ell^2 ) ( q^{-1}-q ) } { ( q^{-1 } u_{k}^2-q ) ( q^{-1}-q v_{\ell}^2)}\right)^{a^{i_k , j_\ell } } \left(\frac{(q^{-1 } v_{\ell}^2 -q u_k^2 ) ( q^{-1}-q ) } { ( q^{-1 } -q u_{k}^2)(q^{-1 } v_{\ell}^2 -q ) } \right)^{b^{i_k , j_\ell}}\right ] \bigg ) \\ = \left(\prod_{\ell=1}^m v_{\ell}^{-1 } \prod_{k=1}^n u_{k}^{-1}\right ) \frac{s_{\lambda(n)}(u_{1},\dots , u_{n},v_{1},\dots , v_{m},1^{2n - n - m})}{s_{\lambda(n)}(1^{2n } ) } , \end{gathered}\ ] ] where all the above expectations @xmath753 are taken with respect to the uniform measure on @xmath716 . we want to study @xmath25 limits of observables of proposition [ prop_observable_6_vertex ] . suppose that @xmath754 , @xmath755 . then we have two parameters @xmath756 and @xmath757 . suppose that as @xmath25 we have @xmath758 and @xmath238 remains fixed . then we can use proposition [ prop_convergence_gue_case ] to understand the asymptotics of the righthand side of . as for the left - hand side of , note that @xmath732 is uniformly bounded , in fact @xmath759 because of the combinatorics of the model . therefore , the factors involving @xmath732 in the observable become negligible as @xmath25 . also note that @xmath760 , therefore the observable can be rewritten as @xmath761 with @xmath57 satisfying the estimate @xmath762 with some constant @xmath45 ( independent of all other parameters ) . now let @xmath763 be an auxiliary variable and choose @xmath764 such that @xmath765 now the observable ( as a function of @xmath763 ) turns into @xmath766 times @xmath767 . therefore , the expectation in is identified with the exponential moment generating function for @xmath768 . in order to obtain the asymptotics we should better understand the function @xmath769 . 
rewrite as @xmath770 recall that @xmath771 , therefore @xmath772 note that the last two terms cancel out and we get @xmath773 now we compute @xmath774 \\ = \exp\left[- \sqrt{n } qz + q { \mathbf i}\sqrt{3 } z^2/2 -q^2 z^2/2 + o(1)\right ] = \exp\left [ -\sqrt{n } qz - z^2/2 + o(1)\right ] \end{gathered}\ ] ] summing up , the observable of is now rewritten as @xmath775 \exp\left [ \frac{a_i - n/2}{\sqrt{n } } z \right]\ ] ] now combining with propositions [ prop_convergence_gue_case ] , [ prop_convergence_gue_extended ] ( note that parameter @xmath5 in these two propositions differs by the factor @xmath776 from that of ) we conclude that ( for any complex @xmath763 ) the expectation of is asymptotically @xmath777,\ ] ] where @xmath326 is the function @xmath778 . using and computing @xmath779 we get @xmath780.\ ] ] now we are ready to prove theorem [ theorem_asm ] . choose @xmath781 and @xmath782 to be related to @xmath783 and @xmath784 , respectively , in the same way as @xmath763 was related to @xmath785 ( through and ) . then , combining the asymptotics with corollary [ corollary_multiplicativity_for_gue ] we conclude that the righthand side of as @xmath25 is @xmath786 \prod_{\ell=1}^m \exp\left [ -\sqrt{n } z'_k { \mathbf i}\frac{\sqrt{3}}{2 } -\frac{5}{16 } ( z'_k)^2 + o(1)\right].\ ] ] now it is convenient to choose @xmath787 ( @xmath788 ) to be purely imaginary @xmath789 ( @xmath790 ) . summing up the above discussion , observing that the case @xmath791 , @xmath792 is almost the same as @xmath754 , @xmath755 , ( only the sign of @xmath730 changes ) , and that the observable has a multiplicative structure , and the third ( double ) product in is negligible as @xmath25 , we conclude that as @xmath25 for all real @xmath793 , @xmath794 @xmath795 \\= \exp\left [ -\frac{3}{16}\left(\sum_{k=1}^n s_k^2 + \sum_{\ell=1}^n ( s'_\ell)^2 \right)\right].\end{gathered}\ ] ] the remainder @xmath457 in the left side of is uniform in @xmath796 , @xmath797 and , therefore , it can be omitted . indeed , this follows from @xmath798-\mathbb e_n\exp\left [ \frac{a_i - n/2}{\sqrt{n } } s { \mathbf i}\right]\right|\\\le \mathbb e_n \left|\exp\left [ \frac{a_i - n/2}{\sqrt{n } } s{\mathbf i}\right]\right| o(1)= o(1).\end{gathered}\ ] ] hence , yields that the characteristic function of the random vector @xmath799 converges as @xmath25 to @xmath800\ ] ] since convergence of characteristic functions implies weak convergence of distributions ( see e.g. ( * ? ? ? * section 26 ) ) the proof of theorem [ theorem_asm ] is finished . in @xcite de gier , nienhuis and ponsaing study the completely packed @xmath1 dense loop model and introduce the following quantities related to the symplectic characters . following the notation from @xcite we set @xmath801 where @xmath802 is given by @xmath803 for @xmath804 . further , set @xmath805\ ] ] define @xmath806 and @xmath807 in particular , @xmath808 is a function of @xmath149 and @xmath809 , while @xmath810 also depends on additional parameters @xmath811 and @xmath0 . de gier , nienhuis and ponsaing showed that @xmath808 and @xmath810 are related to the mean total current in the @xmath1 dense loop model , which was presented in section [ section_intro_loop ] . 
more precisely , they prove that under certain factorization assumption and with an appropriate choice of weights of configurations of the model , @xmath812 is the mean total current between two horizontally adjacent points in the strip of width @xmath142 : @xmath813 and @xmath700 is the mean total current between two vertically adjacent points in the strip of width @xmath142 : @xmath814 see @xcite for the details . this connection motivated the question of the limit behavior of @xmath812 and @xmath815 as the width @xmath142 tends to infinity , this was asked in @xcite , @xcite . in the present paper we compute the asymptotic behavior of these two quantities in the homogeneous case , i.e. when @xmath816 , @xmath817 [ theorem_dense_loop ] as @xmath818 we have @xmath819 and @xmath820 * remark 1 . * when @xmath821 , @xmath808 is identical zero and so is our asymptotics . * remark 2 . * the fully homogeneous case corresponds to @xmath822 , @xmath823 . in this case @xmath824 * remark 3 . * the leading asymptotics terms do not depend on the boundary parameters @xmath147 and @xmath148 . the rest of this section is devoted to the proof of theorem [ theorem_dense_loop ] . [ prop : univariate_dense_loop ] the normalized symplectic character for @xmath825 is asymptotically given for even @xmath142 by @xmath826 and for odd @xmath142 by @xmath827 for some analytic functions @xmath828 , @xmath829 , and @xmath830 such that @xmath831 and @xmath832 we will apply the formula from proposition [ proposition_schur_simplectic_1 ] to express the normalized symplectic character as a normalized schur function . the corresponding @xmath833 is given by @xmath834 for @xmath804 and @xmath835 for @xmath836 , which is equivalent to @xmath837 for all @xmath838 . we will apply proposition [ proposition_convergence_strongest ] to directly derive the asymptotics for @xmath839 . for the specific signature we find that @xmath840 and @xmath841 + ( 1 + 4 w ) \ln\left[\frac14 + w\right]\right)\end{gathered}\ ] ] in particular , we have @xmath842 - \ln\left[\frac14 + w\right]\right),\ ] ] @xmath843 the root of @xmath440 , referred to as the critical point , is given by @xmath844 example 2 of section [ subsection_complex_points ] shows that a steepest descent contour exists for any complex values of @xmath150 for which @xmath845 $ ] , i.e. if @xmath846 is not a negative real number . the values at @xmath446 are @xmath847 and @xmath848 in order to apply proposition [ proposition_convergence_strongest ] we need to ensure the convergence of @xmath849 , defined in section [ section : steepest_descent ] as @xmath850 substituting the values for @xmath833 , using the formula @xmath851 , and approximating the sums by integrals we get @xmath852 for some functions @xmath853 ( the error term in the second order approximation of the logarithm ) , @xmath854(the error term in the approximation of the riemann sum by an integral ) . while both of these functions are bounded in @xmath495 and @xmath142 , they could depend on the parity of @xmath142 . first , note that ( using @xmath855 ) @xmath856 the last sum can be again approximated by the integral similarly to ; therefore @xmath857 next , @xmath854 appears when we approximate the integral by its riemann sum . 
since the trapezoid formula for the integral gives @xmath858 approximation and denoting @xmath859 , we have for even @xmath142 @xmath860 and for odd @xmath142 @xmath861 therefore , we have @xmath862 and hence we obtain as @xmath818 @xmath863 and @xmath864 substituting into proposition [ proposition_convergence_strongest ] the expansion of @xmath865 and explicit values found above we obtain @xmath866 proposition [ proposition_schur_simplectic_1 ] then immediately gives @xmath867 as @xmath868 . we will now proceed to derive the multivariate formulas needed to compute @xmath869 . first of all , notice that @xmath870 for @xmath871 define @xmath872 @xmath873.\ ] ] then @xmath874 is a constant and thus we have @xmath875 @xmath876 therefore , we can work with @xmath877 instead of @xmath878 and with @xmath879 instead of @xmath785 . for any function @xmath880 and variables @xmath881 we define @xmath882 [ proposition_ul_ratio_general ] suppose that @xmath883 where @xmath884 @xmath885 and @xmath886 are some analytic functions of @xmath42 and let @xmath887 . then for any @xmath13 we have @xmath888 \\= c_1(x_0,x_{k+1};l ) + \sum_{i=1}^k 2\big(\widehat{b}_2(x)-b_2(x)\big)\frac{(-1)^l}{l } + \\ \ln\left [ ( \xi(x_{k+1})^2-\xi(x_0)^2 ) + \frac{2}{l}\big(b(x_0, .. ,x_k;\xi ) - b(x_1, .. ,x_{k+1};\xi ) + c_2(x_0,x_{k+1 } ) \big)\right ] \\ + o(l^{-1}).\end{gathered}\ ] ] apply theorem [ theorem_symp_multivar_1 ] to express the multivariate normalized character in terms of @xmath889 and @xmath890 as follows @xmath891 which is applied with @xmath892 , @xmath893 and define for any @xmath5 and @xmath259 @xmath894}{n^{2j-2}\alpha_n(x_i)h(x_i)^n } \right]_{i , j=1}^m\\= \frac{\delta\left(\frac{d_1 ^ 2}{n^2},\ldots,\frac{d_m^2}{n^2}\right)\prod_{i=1}^{m } \alpha_n(x_i)h(x_i)^n}{\prod_{i=1}^{m } \alpha_n(x_i)h(x_i)^n},\end{gathered}\ ] ] where , as above , @xmath895 . the second form in will be useful later . we can then rewrite the expression of interest as @xmath896\\ = const_1(l ) + \ln\left [ \frac{{\mathfrak{x}}_{\lambda}(x_0;l+1){\mathfrak{x}}_{\lambda}(x_{k+1};l+1)}{{\mathfrak{x}}_{\lambda}(x_0;l+2){\mathfrak{x}}_{\lambda}(x_{k+1};l+2 ) } \right ] \\+\ln \left[\prod_{i=1}^k \frac{{\mathfrak{x}}_{\lambda}(x_i;l+1)^2}{{\mathfrak{x}}_{\lambda}(x_i;l){\mathfrak{x}}_{\lambda}(x_i;l+2 ) } \right ] - \ln\left[\frac{(x_0 - 1)^2x_0^{-1}(x_{k+1}-1)^2x_{k+1}^{-1}}{x_0+x_0^{-1}-(x_{k+1}+x_{k+1}^{-1})}\right]\\ + \ln \frac{m_{l+1}(x_0,x_1,\ldots , x_k)m_{l+1}(x_1,\ldots , x_{k+1})}{m_l(x_1,\ldots , x_k)m_{l+2}(x_0,\ldots , x_{k+1})},\end{gathered}\ ] ] where @xmath897 will be part of @xmath898 . we investigate each of the other terms separately . first , we have that @xmath899 where the terms involving @xmath900 and @xmath901 are absorbed in @xmath902 and we notice that @xmath903 next we observe that for any @xmath279 and @xmath5 @xmath904}{n^\ell\alpha_n(x)h(x)^n } \\ = \xi(x)^\ell + \left(\binom{\ell}{2}q_1-\binom{\ell}{2 } \xi(x)^\ell + \ell r_1\xi(x)^{\ell-1}\right)\frac1n + o\left(n^{-3/2}\right),\end{gathered}\ ] ] where @xmath905 and @xmath906 . in particular , since @xmath907 is a polynomial in the left - hand side of , it is of the form @xmath908 for some function @xmath909 which depends only on @xmath880 and @xmath222 . that is , the second order asymptotics of @xmath907 does not depend on the second order asymptotics of @xmath910 . further , we have @xmath911 for any @xmath5 , so in formula we can replace @xmath912 and @xmath913 by @xmath914 without affecting the second order asymptotics . 
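a remark on the determinantal manipulations above : they rely on the classical vandermonde evaluation $\det\left[ x_i^{j-1} \right]_{i,j=1}^m = \prod_{1\le i<j\le m} (x_j - x_i)$ , which is presumably what the symbol $\delta(\cdot)$ in the previous display denotes , as is standard . the following short python snippet ( added purely as a numerical sanity check , not part of the original argument ) verifies the identity on random data .

import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
x = rng.standard_normal(5)

# rows are [1, x_i, x_i^2, ...], i.e. the matrix with entries x_i^{j-1}
V = np.vander(x, increasing=True)
lhs = np.linalg.det(V)

# product of (x_j - x_i) over all pairs i < j
rhs = np.prod([x[j] - x[i] for i, j in combinations(range(len(x)), 2)])

print(np.isclose(lhs, rhs))  # expected: True up to floating-point error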
evaluation of @xmath341 directly will not lead to an easily analyzable formula , therefore we will do some simplifications and approximations beforehand . we will use lewis carroll s identity ( dodgson condensation ) , which states that for any square matrix @xmath22 we have @xmath915 where @xmath916 denotes the submatrix of @xmath22 obtained by removing the rows whose indices are in @xmath271 and columns whose indices are in @xmath917 . applying this identity to the matrix @xmath918}{l^{2j}\alpha_l(x_i)h(x_i)^l } \right]_{i , j=0}^{k+1}\ ] ] we obtain @xmath919_{i=[1:k+1]}^{j=[0:k-1,k+1 ] } \det\left [ \frac{d_i^{2j}(\alpha_l(x_i)h^l(x_i))}{l^{2j}\alpha_l(x_i)h(x_i)^l}\right]_{i , j=0}^k \\ - \det\left [ \frac{d_i^{2j}(\alpha_l(x_i)h^l(x_i))}{l^{2j}\alpha_l(x_i)h(x_i)^n}\right]_{i=[0:k]}^{j=[0:k-1,k+1]}\det\left [ \frac{d_i^{2j}(\alpha_l(x_i)h^l(x_i))}{l^{2j}\alpha_l(x_i)h(x_i)^l}\right]_{i , j+1=1}^{k+1 } , \end{gathered}\ ] ] where @xmath920 = \{0,1,\ldots , k-1,k+1\}$ ] . the second factors in the two products on the right - hand side above are just @xmath914 evaluated at the corresponding sets of variables . for the first factors , applying the alternate formula for @xmath914 from and using the fact that @xmath921^{j=[0:m-2,m]}_{i=[1:m]},\ ] ] we obtain @xmath922_{i=[1:k+1]}^{j=[0:k-1,k+1]}\\ = \frac{1}{\prod_{i=1}^{k+1 } \alpha_l(x_i)h(x_i)^l } \det\left [ ( d_i^2/l^2 ) ^j \right]_{i=[1:k+1]}^{j=[0:k-1,k+1 ] } \prod_{i=1}^{k+1 } \alpha_l(x_i)h(x_i)^l\\ = \frac{1}{\prod_{i=1}^{k+1 } \alpha_l(x_i)h(x_i)^l } \left(\sum_{i=1}^{k+1 } d_i^2/l^2 \right ) \delta(d_1 ^ 2/l^2,\ldots , d_{k+1}^2/l^2 ) \prod_{i=1}^{k+1 } \alpha_l(x_i)h(x_i)^l\\ = \frac{1}{\prod_{i=1}^{k+1 } \alpha_l(x_i)h(x_i)^l } \left(\sum_{i=1}^{k+1 } d_i^2/l^2 \right)\left [ \left(\prod_{i=1}^{k+1 } \alpha_l(x_i)h(x_i)^l\right ) m_l(x_1,\ldots , x_{k+1 } ) \right ] , \end{gathered}\ ] ] substituting these computations into we get @xmath923}{\prod_{i=1}^{k+1 } \alpha_l(x_i)h(x_i)^l m_l(x_1,\ldots , x_{k+1 } ) } \\ - \frac { ( \sum_{i=0}^k \frac{d_i^2}{l^2})\left[(\prod_{i=0}^k \alpha_l(x_i)h(x_i)^l ) m_l(x_0,\ldots , x_k)\right]}{\prod_{i=0}^k \alpha_l(x_i)h(x_i)^lm_l(x_0,\ldots , x_k)}\\ = \frac{d_{k+1}^2\alpha_l(x_{k+1})h(x_{k+1})^l}{l^2\alpha_l(x_{k+1})h(x_{k+1})^l } - \frac{d_{0}^2\alpha_l(x_{0})h(x_{0})^l}{l^2\alpha_l(x_{0})h(x_{0})^l}\\ + \frac{(\sum_{i=1}^{k+1 } d_i^2)[m_l(x_1,\ldots , x_{k+1})]}{l^2 m_l(x_1,\ldots , x_{k+1})}-\frac{(\sum_{i=0}^k d_i^2)[m_l(x_0,\ldots , x_{k } ) ] } { l^2 m_l(x_0,\ldots , x_{k } ) } \\ + 2\left ( \sum_{i=1}^{k+1 } \frac{d_i[\alpha_l(x_i)h(x_i)^l]}{l \alpha_l(x_i)h(x_i)^l}\frac{d_im_l(x_1,\ldots , x_{k+1})}{l m_l(x_1,\ldots , x_{k+1 } ) } - \sum_{i=0}^{k } \frac{d_i[\alpha_l(x_i)h(x_i)^l]}{l \alpha_l(x_i)h(x_i)^l}\frac { d_im_l(x_0,\ldots , x_{k})}{l m_l(x_0,\ldots , x_{k})}\right)\end{gathered}\ ] ] using the expansion for @xmath914 from equation and the expansion from we see that the only terms contributing to the first two orders of approximation in above are @xmath924 for some function @xmath925 not depending on @xmath142 , so @xmath926 . substituting this result into we arrive at the desired formula . proposition [ proposition_ul_ratio_general ] with @xmath927 , @xmath928 and @xmath929 shows that @xmath930 \bigg)\end{gathered}\ ] ] converges uniformly to @xmath43 and so its derivatives also converge to @xmath43 . proposition [ prop : univariate_dense_loop ] shows that in our case @xmath931 and thus @xmath932 . 
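as a quick sanity check of the lewis carroll ( desnanot jacobi ) identity quoted at the beginning of this argument , the short python snippet below ( an illustration added here , not part of the original proof ) verifies numerically that $\det A \cdot \det A_{\{1,m\},\{1,m\}} = \det A_{\{1\},\{1\}} \det A_{\{m\},\{m\}} - \det A_{\{1\},\{m\}} \det A_{\{m\},\{1\}}$ for a random square matrix $A$ of size $m$ , where the subscripts list the removed rows and columns .

import numpy as np

def drop(a, rows, cols):
    # remove the listed row and column indices from a
    keep_r = [i for i in range(a.shape[0]) if i not in rows]
    keep_c = [j for j in range(a.shape[1]) if j not in cols]
    return a[np.ix_(keep_r, keep_c)]

rng = np.random.default_rng(0)
a = rng.standard_normal((6, 6))
m = a.shape[0] - 1  # 0-based index of the last row / column

lhs = np.linalg.det(a) * np.linalg.det(drop(a, {0, m}, {0, m}))
rhs = (np.linalg.det(drop(a, {0}, {0})) * np.linalg.det(drop(a, {m}, {m}))
       - np.linalg.det(drop(a, {0}, {m})) * np.linalg.det(drop(a, {m}, {0})))

print(np.isclose(lhs, rhs))  # expected: True up to floating-point error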
moreover , the function @xmath880 satisfies the following equation @xmath933 and so we can simplify the function @xmath167 as a sum as follows @xmath934 we thus have that @xmath935 which does not depend on @xmath936 . differentiating we obtain the asymptotics of @xmath937 as @xmath938\\ = { \mathbf i}\frac{\sqrt{3}}{4}(z^3-z^{-3})\end{gathered}\ ] ] for @xmath939 the computations is the same . recall that a character of @xmath18 is given by the function @xmath940 , which is defined on sequences @xmath200 such that @xmath941 for all large enough @xmath238 . also @xmath942 . by theorem [ theorem_voiculescu ] extreme characters of @xmath18 are parameterized by the points @xmath68 of the infinite - dimensional domain @xmath69 where @xmath70 is the set of sextuples @xmath71 such that @xmath72 @xmath73 now let @xmath86 be a signature , we associate two young diagrams @xmath87 and @xmath88 to it : row lengths of @xmath87 are positive of @xmath89 s , while row lengths of @xmath88 are minus negative ones . in this way we get two sets of modified frobenius coordinates : @xmath90 , @xmath91 and @xmath92 , @xmath93 . the following combinatorial identity is known ( see e.g. ( * ? ? ? * ( 5.15 ) ) and references therein ) @xmath951 introduce the following notation : @xmath952 and observe that implies that ( here @xmath577 ) @xmath953 now we can use propositions [ proposition_convergence_strongest ] and [ proposition_convergence_extended ] with the steepest descent contours of example 1 of section [ subsection_complex_points ] . recall that here @xmath577 , @xmath958 , @xmath959 and @xmath960 . we conclude that as @xmath25 @xmath961 substituting @xmath540 , @xmath446 , @xmath962 , using , and simplifying we arrive at @xmath963 the above convergence is uniform over compact subsets of @xmath964 ( here the parameter @xmath965 shrinks to zero as @xmath966 goes to infinity . ) decompose @xmath967 since @xmath968 is a polynomial , only finitely many coefficients @xmath969 are non - zero . the coefficients @xmath969 are _ non - negative _ , see e.g. ( * ? ? ? * chapter i , section 5 ) , also @xmath970 . we claim that @xmath973 . indeed this follows from the integral representations @xmath974 and similarly for @xmath950 . pointwise convergence for all but finitely many points of the unit circle and the fact that @xmath975 for @xmath976 implies that we can send @xmath25 in . now take two positive real numbers @xmath977 such that @xmath978 @xmath979 for @xmath495 satisfying @xmath980 and some positive integer @xmath341 write @xmath981 the third term goes zero as @xmath982 because the series @xmath983 converges for @xmath984 and @xmath985 . the second term goes to zero as @xmath982 because of , and @xmath986 . now for any @xmath987 we can choose @xmath341 such that each of the last two terms in are less than @xmath988 . since @xmath986 , the first term is a less than @xmath988 for large enough @xmath5 . therefore , all the expression is less than @xmath987 and the proof is finished . note that we can prove analogues of theorem [ theorem_u_vk ] for infinite - dimensional symplectic group @xmath992 and orthogonal group @xmath993 in exactly the same way as for @xmath18 . even the computations remain almost the same . this should be compared to the analogy between the argument based on binomial formulas of @xcite for characters of @xmath18 ( and their jack deformation ) and that of @xcite for characters corresponding to other root series . 
in @xcite a @xmath0deformation for the characters of @xmath18 related to the notion of quantum trace for quantum groups was proposed . one point of view on this deformation is that we _ define _ characters of @xmath18 through theorem [ theorem_u_vk ] , i.e. as all possible limits of functions @xmath127 , and then _ deform _ the function @xmath968 keeping the rest of the formulation the same . a `` good '' @xmath0deformation of turns out to be ( see @xcite for the details ) @xmath994 [ prop_q_limit ] suppose that @xmath101 is such that @xmath995 for every @xmath345 . then @xmath996 @xmath997 where the contour of integration @xmath508 consists of two infinite segments of @xmath998 going to the right and vertical segment @xmath999 $ ] with arbitrary @xmath1000 . convergence is uniform over @xmath42 belonging to compact subsets of @xmath1001 . * remark . * note that we can evaluate the integral in the definition of @xmath1002 as the sum of the residues : @xmath1003 the sum in is convergent for any @xmath42 . indeed , the product over @xmath1004 can be bounded from above by @xmath1005 . the product over over @xmath1006 is ( up to the factor bounded by @xmath1007 ) @xmath1008 note that for any fixed @xmath259 , if @xmath1009 then the last product is less than @xmath1010 we conclude that the absolute value of @xmath13th term in is bounded by @xmath1011 choosing large enough @xmath259 and @xmath1009 we conclude that converges . we start from the formula of theorem [ theorem_integral_representation_schur_q ] @xmath1012 where the contour contains only the real poles @xmath1013 , e.g. @xmath45 is the rectangle through @xmath1014 for a sufficiently large @xmath341 . note that for large enough ( compared to @xmath42 ) @xmath5 the integrand rapidly decays as @xmath1017 . therefore , we can deform the contour of integration to be @xmath508 which consists of two infinite segments of @xmath1018 going to the right and vertical segment @xmath1019 $ ] with some @xmath1020 . let us study the convergence of the integral . clearly , the integrand converges to the same integrand in @xmath1002 , thus , it remains only to check the contribution of infinite parts of contours . but note that for @xmath1021 , @xmath515 , we have @xmath1022 now the absolute value of each factor in denominator is greater than @xmath41 and each factor rapidly grows to infinity as @xmath512 . we conclude that the integrand in rapidly and uniformly in @xmath5 decays as @xmath1023 . it remains to deal with the singularities of the prefactors in and at @xmath1024 . but note that pre - limit function is analytic in @xmath42 ( indeed it is a polynomial ) and for the analytic functions uniform convergence on a contour implies the convergence everywhere inside . [ theorem_q_limit_multivar ] suppose that @xmath101 is such that @xmath995 for every @xmath345 . then @xmath1025 @xmath1026_{i , j=1}^k \prod_{i=1}^k f_{\nu}(x_iq^{k-1 } ) ( xq^{k-1};q)_{\infty}\end{gathered}\ ] ] convergence is uniform over each @xmath32 belonging to compact subsets of @xmath1001 . * remark . * the formula should be viewed as a @xmath0analogue of the multiplicativity in the voiculescu edrei theorem on characters of @xmath18 ( theorem [ theorem_voiculescu ] ) . there exist a natural linear transformation , which restores the multiplicitivity for @xmath0characters , see @xcite for the details . @xmath1027_{q^{-1 } } ! 
} { \prod_{i=1}^k\prod_{j=1}^{n - k } ( x_iq^k - q^{-j+1})}\times \\ \frac{(-1)^{\binom{k}{2}}\det\big [ d_{i , q}^{j-1}\big]_{i , j=1}^k}{q^{k \binom{k}{2 } } \delta(x_1,\ldots , x_k ) } \prod_{i=1}^k \frac{s_{\lambda}(x_iq^k;n , q^{-1 } ) \prod_{j=1}^{n-1}(x_iq^k - q^{-j+1})}{[n-1]_{q^{-1}}!}.\end{gathered}\ ] ] in order to simplify this expression we observe that @xmath1028_{q^{-1}}!}{[n-1]_{q^{-1 } } ! } \rightarrow q^ { n(i-1 ) -\binom{i-1}{2 } } $ ] as @xmath467 . also , @xmath1029 . last , we have @xmath1030 substituting all of these into the formula above , we obtain @xmath1031_{i , j=1}^k}{q^{k \binom{k}{2 } } \delta(x_1,\ldots , x_k ) } \prod_{i=1}^k s_{\lambda}(x_iq^k;n , q^{-1 } ) ( -1)^{n-1 } q^{-\binom{n-1}{2 } } ( xq^{k-1};q)_{n-1 } \\ = \frac{1}{q^{2\binom{k}{3 } } \prod_i ( x_iq^{k-1};q)_{\infty } } \frac{(-1)^{\binom{k}{2}}\det\big [ d_{i , q^{-1}}^{j-1}\big]_{i , j=1}^k } { \delta(x_1,\ldots , x_k ) } \prod_{i=1}^k f_{\nu}(x_iq^{k-1 } ) ( xq^{k-1};q)_{\infty}\end{gathered}\ ] ] r. e. behrend , p. di francesco , p. zinn justin , on the weighted enumeration of alternating sign matrices and descending plane partitions , j. of comb . theory , ser . a , 119 , no . 2 ( 2012 ) , 331363 . arxiv:1103.1176 . p. diaconis , d. freedman , partial exchangeability and sufficiency . indian stat . . golden jubilee intl conf . stat . : applications and new directions , j. k. ghosh and j. roy ( eds . ) , indian statistical institute , calcutta ( 1984 ) , pp . 205 - 236 . t. fonseca , p. zinn - justin , on the doubly refined enumeration of alternating sign matrices and totally symmetric self - complementary plane partitions , the electronic journal of combinatorics 15 ( 2008 ) , # r81 . arxiv:0803.1595 t. h. koornwinder , special functions associated with root systems : a first introduction for nonspecialists , in : special functions and differential equations ( k. srinivasa rao et al . , eds . ) , allied publishers , madras , 1998 , pp . 1024 r. koekoek and r. f. swarttouw , the askey scheme of hypergeometric orthogonal polynomials and its @xmath0-analogue , delft university of technology , faculty of information technology and systems , department of technical mathematics and informatics , report no . 98 - 17 , 1998 , http://aw.twi.tudelft.nl/~koekoek/askey/ a. okounkov , g. olshanski , limits of bc - type orthogonal polynomials as the number of variables goes to infinity . in : jack , hall littlewood and macdonald polynomials , american mathematical society contemporary mathematics series 417 ( 2006 ) , pp . arxiv : math/0606085 . g. olshanski , a. vershik , ergodic unitarily invariant measures on the space of infinite hermitian matrices . in : contemporary mathematical physics . f. a. berezin s memorial volume . 175 ( r. l. dobrushin et al . , eds ) , 1996 , pp . 137175 . arxiv : math/9601215 .
we develop a new method for studying the asymptotics of symmetric polynomials of representation - theoretic origin as the number of variables tends to infinity . several applications of our method are presented : we prove a number of theorems concerning characters of the infinite - dimensional unitary group and their @xmath0deformations . we study the behavior of uniformly random lozenge tilings of large polygonal domains and find the gue eigenvalue distribution in the limit . we also investigate similar behavior for alternating sign matrices ( equivalently , the six vertex model with domain wall boundary conditions ) . finally , we compute the asymptotic expansion of certain observables in the @xmath1 dense loop model .
adverse reactions to systemic drug administration can have different clinical patterns such as erythema multiforme minor , major , steven johnsons syndrome , anaphylactic stomatitis , intraoral fixed drug eruptions , lichenoid drug reactions , and pemphigoid - like drug reactions . based on the severity and the number of mucosal sites involved , the disease has been subclassified into em minor and major . steven johnson syndrome is a more severe condition characterized by wide spread small blisters on torso and mucosal ulcerations with atypical skin target lesions triggered by drug intake . typical target skin lesions are necessary along with mucosal ulcerations to consider diagnosing them as em minor and major . many investigators have reported cases of oral mucosal ulcerations and lip lesions typical of em without any skin lesions . it has been reported that even if the primary attacks of oral em are confined to the oral mucosa the subsequent attacks can produce more severe forms of em involving the skin and hence it is important to identify and distinguish them from other ulcerative disorders involving oral cavity for early management and proper follow up . this article reports two cases of oral em highlighting the importance of distinguishing this disorder . a 21-year - old female patient visited the dental opd with the complaint of extensive ulceration of oral cavity and pain and inability to eat for the past 4 days . she gave a history of leg sprain for which she took diclofenac sodium subsequent to which she developed multiple small ulcerations that later transformed into extensive , irregular ulcerations of the oral mucosa . on extra oral examination , both upper and lower lips showed extensive irregular ulcerations , showing cracking and fissuring with blood encrustation . intraoral examination showed extensive , irregular ulcerations with yellow base and erythematous borders on buccal mucosa , palate , dorsal and ventral surfaces of the tongue [ figures 1 and 2 ] . case 1 irregular lip ulcerations with blood encrustations case 1 irregular buccal mucosal and tongue ulcerations with lip lesions the sudden onset , positive drug history , extensive ulcerations of the oral cavity , cracking , and fissuring of lips with bloody crusting lead to the diagnosis of oral erythema multiforme . the patient was advised to stop the diclofenac sodium medication and was treated with topical corticosteroids , mild analgesics , and local application of lignocaine gel to facilitate oral fluid intake . healing was noticed on the third day and the lesions were completely cleared without scarring in 10 days time [ figure 3 ] . patient was advised not to take any drug from the diclofenac group . case 1 after 10 days of treatment the oral mucosal and lip ulcerations are healed a 23-year - old female patient presented to the dental op with the complaint of painful ulcerations of the oral cavity for the past 5 days . she gave a history of bronchial asthma for which she took homeopathy medicine few weeks back , within days she developed oral ulcerations . she gave a history of multiple vesicles of the oral mucosa , buccal , and labial mucosa , which ruptured to form painful ulcerations . patient was unable to eat any hot and spicy food and was on liquid diet for the last 2 days . intraorally multiple ulcerations of the buccal and labial mucosa and palate were seen [ figure 4 ] . tongue showed white coating on the dorsal surface with irregular ulcerations of the right lateral border . 
case 2 palatal ulcerations with irregular blood encrusted lip lesions the patient was advised to discontinue the homeopathic medicine and was treated with corticosteroids ( prednisolone 10 mg ) twice a day for 3 days followed by a tapering dose for 10 days and local application of topical anesthetic gel for pain relief . the patient responded well to the treatment and healing of the lesions occurred within a week [ figure 5 ] . the positive association between the drug intake and the incidence of the lesions , together with the clinical appearance of the lesions , led to the diagnosis of oral erythema multiforme . erythema multiforme ( em ) is an inflammatory disorder that affects the skin or mucous membrane or both .
according to von hebra , who first described the disease in 1866 , the patients with erythema multiforme should have acrally distributed typical target lesions or raised edematous skin papules with or without mucosal involvement . in 1968 , kenneth described an inflammatory oral disorder with oral lesions typical of em but without any skin involvement . when lips are involved the typical blood encrusted lesions were seen . in this series of cases , the typical target skin lesions were seen during the recurrences not in their initial attacks . many investigators have suggested this as a third category of em known as oral em that are characterized by typical oral lesions of em but no target skin lesions . our two cases showed extensive irregular erythematous ulcerations in the buccal mucosa , labial mucosa , tongue , and palate along with blood encrusted lip ulcerations . biopsies are advised only in early vesicular lesions of erythema multiforme not in ulcerated ones since histopathologic appearances are nonspecific and nondiagnostic . our patients reported to us with advanced ulcerated lesions and hence the diagnosis had to be established based on the positive drug history , clinical appearance , and distribution of the lesion and exclusion of other ulcerative lesions . we were able to establish a temporal relationship between the drug intake and occurrence of the oral mucosal lesions . the oral ulcerations in our cases started within a few days of the drug intake and were resolved upon cessation of the drug . erythema multiforme is usually triggered by herpes simplex infections , but rarely by drug intake . when the lesions are confined only to the oral cavity the different differential diagnosis that has to be considered are herpes , autoimmune vescicullobullous lesions such as pemphigus vulgaris or bullous pemphigoid and other patterns of drug reactions . herpetic ulcers are smaller with regular borders than ulcers associated with em . extensive irregular ulcerations in the lining nonkeratinized mucosa as seen in our patients were typical of em and are not a feature of herpes infection . the presence of a temporal relationship between the drug intake and onset of the disease excludes the possibility of any infectious aetiologies . the positive drug histories associated with onset of ulcerations in our cases ruled out the possibility of other autoimmune vescicullobullous lesions like pemphigus vulgaris . unlike pemphigus vulgaris oral em have an acute onset and does not show any desquamative gingivitis . bullous lichen planus lesions that may have similar ulcerations should have wickham 's striae , which were absent in our cases excluding it as the diagnosis . other patterns of drug reactions like lichenoid drug reactions , pemphigoid - like drug reactions that resemble their namesake can be easily differentiated based on the clinical patterns as above mentioned . anaphylactic stomatitis often shows urticarial skin reactions with other signs and symptoms of anaphylaxis which were absent in our cases . in mucosal fixed drug eruptions the lesions are confined to localized areas of oral mucosa but in our cases there were wide spread lesions affecting labial , buccal , palatal , and tongue mucosa along with lip involvement . lesions of em minor are characterized by single mucosal ulcerations and typical target lesions of skin . 
erythema multiforme major is considered to be a more aggressive form characterized by involvement of multiple mucosa accompanied by typical target skin lesions.[57 ] the third category of em , also described by many investigators as oral em has the lesions confined to the oral mucosa and lips with no skin involvement . since our cases were evidently triggered by drug intake and they had typical lesions of em in the oral mucosa and lips with no skin involvement we came to a diagnosis of oral em . the most common drugs that trigger em lesions are long acting sulfa drugs especially sulphonamides , co - trimoxazole , phenytoin , carbamazepine and nonsteroidal antiinflammatory drugs such as diclofenac , ibuprofen , and salicylates . management of oral em involves identification of triggering agent . if it is found to be hsv infection patients have to be put on antiviral medications . if hsv is ruled out as a triggering agent and the culprit is an adverse drug reaction , the drug is immediately stopped . usually lesions of oral em can be treated palliatively with analgesics for oral pain , viscous lidocaine rinses , soothening mouth rinses , bland soft diet , avoidance of acidic and spicy food , systemic and topical antibiotics to prevent secondary infection . lesions of em usually respond to topical steroids , for more severe cases systemic corticosteroids are recommended . even though primary attacks of oral em are confined to the oral mucosa the subsequent attacks can produce more severe forms of em ( em minor and major ) involving the skin . hence , it is important to distinguish oral em for their early diagnosis , prompt management , and proper follow up .
oral erythema multiforme ( em ) is considered a third category of em , distinct from em minor and major . patients present with oral and lip ulcerations typical of em but without any skin target lesions . it has been reported that primary attacks of oral em are confined to the oral mucosa but subsequent attacks can produce more severe forms of em involving the skin . hence , it is important to identify and distinguish oral em from other ulcerative disorders involving the oral cavity for early management . this article reports two cases of oral em that presented with oral and lip ulcerations typical of em without any skin lesions and highlights the importance of early diagnosis and proper management .
let @xmath0 be a holomorphic quadratic differential . the line element @xmath1 induces a flat metric on @xmath2 which has cone - type singularites at the zeroes of @xmath3 where the cone angle is a integral multiple of @xmath4 . a _ saddle connection _ in @xmath2 is a geodesic segment with respect to the flat metric that joins a pair of zeroes of @xmath3 without passing through one in its interior . our main result is a new criterion for the unique ergodicity of the vertical foliation @xmath5 , defined by @xmath6 . * teichmller geodesics . * the complex structure of @xmath2 is uniquely determined by the atlas @xmath7 of natural parameters away from the zeroes of @xmath3 specified by @xmath8 . the evolution of @xmath2 under the teichmller flow is the family of riemann surfaces @xmath9 obtained by post - composing the charts with the @xmath10-linear map @xmath11 . it defines a unit - speed geodesic with respect to the teichmller metric on the moduli space of compact riemann surfaces normalised so that teichmller disks have constant curvature @xmath12 . the teichmller map @xmath13 takes rectangles to rectangles of the same area , stretching in the horizontal direction and contracting in the vertical direction by a factor of @xmath14 . by a _ rectangle _ in @xmath2 we mean a holomorphic map a product of intervals in @xmath15 such that @xmath16 pulls back to @xmath17 . all rectangles are assumed to have horizontal and vertical edges . let @xmath18 denote the length of the shortest saddle connection . let @xmath19 . our main result is the following . [ thm : main ] there is an @xmath20 such that if @xmath21 for some @xmath22 and for all @xmath23 , then @xmath5 is uniquely ergodic . theorem [ thm : main ] was announced in @xcite . in [ s : networks ] we present the main ideas that go into the proof of theorem [ thm : main ] and use them to prove an extension ( see theorem [ thm : recurrent ] of masur s result in @xcite asserting that a teichmller geodesic which accumulates in the moduli space of closed riemann surfaces is necessarily determined by a uniquely ergodic foliation . after briefly discussing the relationship between the various ways of describing rates in [ s : rates ] , we prove theorem [ thm : main ] in [ s : slow ] . then , in [ s : veech ] we sketch the characterisation of the set of nonergodic directions in the double cover of a torus , branched over two points , answering a question of w. veech , ( @xcite , p.32 , question 2 ) . if @xmath24 is a ( normalised ) ergodic invariant measure transverse to the vertical foliation @xmath5 then for any horizontal arc @xmath25 there is a full @xmath24-measure set of points @xmath26 satisfying @xmath27 where @xmath28 represents a vertical segment having @xmath29 as an endpoint . given @xmath25 , the set @xmath30 of points satisfying ( [ eq : erg : ave ] ) for _ some _ ergodic invariant @xmath24 has full lebesgue measure . we refer to the elements of @xmath30 as _ generic points _ and the limit in ( [ eq : erg : ave ] ) as the _ ergodic average _ determined by @xmath29 . to prove unique ergodicity we shall show that the ergodic averages determined by all generic points converge to the same limit . the ideas in this section were motivated by the proof of theorem 1.1 in @xcite . _ convention . _ when passing to a subsequence @xmath31 along the teichmller geodesic @xmath9 we shall suppress the double subscript notation and write @xmath32 instead of @xmath33 . similarly , we write @xmath34 instead of @xmath35 . 
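before stating the lemmas , we record the elementary transformation rule that is used repeatedly below ( a standard fact , written here with the normalization in which the stretch factor @xmath14 equals $e^{t}$ ; the paper 's scaling constant may differ ) : the teichmller map takes a rectangle with horizontal side $w$ and vertical side $h$ in @xmath2 to a rectangle with sides $e^{t}w$ and $e^{-t}h$ in @xmath9 , so areas of rectangles are preserved , and a saddle connection whose horizontal and vertical components are $x$ and $y$ has flat length $\sqrt{e^{2t}x^2 + e^{-2t}y^2}$ on @xmath9 .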
[ lem : rectangle ] let @xmath36 and suppose there is a sequence @xmath31 such that for every @xmath37 the images of @xmath29 and @xmath38 under @xmath34 lie in a rectangle @xmath39 and the sequence of heights @xmath40 satisfy @xmath41 . then @xmath29 and @xmath38 determine the same ergodic averages . one can reduce to the case where @xmath42 and @xmath43 lie at the corners of @xmath44 . let @xmath45 ( resp . @xmath46 ) be the number of times the left ( resp . right ) edge of @xmath47 intersects @xmath25 . observe that @xmath45 and @xmath46 differ by at most one so that since @xmath48 , the ergodic averages for @xmath29 and @xmath38 approach the same limit . ergodic averages taken as @xmath49 are determined by fixed fraction of the tail : for any given @xmath50 @xmath51 this elementary observation is the motivation behind the following . [ def : visible ] a point @xmath29 is @xmath52-_visible _ from a rectangle @xmath53 if the vertical distance from @xmath29 to @xmath53 is at most @xmath52 times the height of @xmath53 . we have the following generalisation of lemma [ lem : rectangle ] . [ lem : visible ] if @xmath36 , @xmath31 and @xmath54 are such that for every @xmath37 the images of @xmath29 and @xmath38 under @xmath34 are @xmath52-visible from some rectangle whose height @xmath40 satisfies @xmath48 , then @xmath29 and @xmath38 determine the same ergodic averages . [ def : reachable ] we say two points are @xmath52-_reachable _ from each other if there is a rectangle @xmath53 from which both are @xmath52-visible . we also say two sets are @xmath52-_reachable _ from each other if every point of one is @xmath52-reachable from every point of the other . [ def : network ] given a collection @xmath55 of subsets of @xmath2 , we define an undirected graph @xmath56 whose vertex set is @xmath55 and whose edge relation is given by @xmath52-reachability . a subset @xmath57 is said to be @xmath52-_fully covered _ by @xmath55 if every @xmath58 is @xmath52-reachable from some element of @xmath55 . we say @xmath55 is a @xmath52-_network _ if @xmath56 is connected and @xmath2 is @xmath52-fully covered by @xmath55 . [ prop : masur ] if @xmath54 , @xmath59 , @xmath60 and @xmath31 are such that for all @xmath37 , there exists a @xmath52-network @xmath61 in @xmath32 consisting of at most @xmath62 squares , each having measure at least @xmath63 , then @xmath5 is uniquely ergodic . suppose @xmath5 is not uniquely ergodic . then we can find a distinct pair of ergodic invariant measures @xmath64 and @xmath65 and a horizontal arc @xmath25 such that @xmath66 . we construct a finite set of generic points as follows . by allowing repetition , we may assume each @xmath61 contains exactly @xmath62 squares , which shall be enumerated by @xmath67 . let @xmath68 be the set of points whose image under @xmath34 belongs to @xmath69 for infinitely many @xmath37 . note that @xmath70 has measure at least @xmath63 because it is a descending intersection of sets of measure at least @xmath63 . hence , @xmath70 contains a generic point ; call it @xmath71 . by passing to a subsequence we can assume the image of @xmath71 lies in @xmath69 for all @xmath37 . by a similar process we can find a generic point @xmath72 whose image belongs to @xmath73 for all @xmath37 . when passing to the subsequence , the generic point @xmath71 retains the property that its image lies in @xmath69 for all @xmath37 . 
continuing in this manner , we obtain a finite set @xmath74 consisting of @xmath62 generic points @xmath75 with the property that the image of @xmath75 under @xmath34 belongs to @xmath76 for all @xmath37 and @xmath77 . given a nonempty proper subset @xmath78 we can always find a pair of points @xmath79 and @xmath80 such that @xmath42 and @xmath43 are @xmath52-reachable from each other for infinitely many @xmath37 . this follows from the fact that @xmath81 is connected . by lemma [ lem : visible ] , the points @xmath29 and @xmath38 determine the same ergodic averages for any horizontal arc @xmath25 . since @xmath74 is finite , the same holds for any pair of points in @xmath74 . now let @xmath82 be a generic point whose ergodic average is @xmath83 , for @xmath84 . since @xmath32 is @xmath52-fully covered by @xmath61 , @xmath82 will have the same ergodic average as some point in @xmath74 , which contradicts @xmath66 . therefore , @xmath5 must be uniquely ergodic . [ def : sep : sys ] let @xmath0 be a holomorphic quadratic differential on a closed riemann surface of genus at least @xmath85 . two saddle connections are said to be _ disjoint _ if the only points they have in common , if any , are their endpoints . we call a collection of pairwise disjoint saddle connections a _ separating system _ if the complement of their union has at least two homotopically nontrivial components . by the _ length of a separating system _ we mean the total length of all its saddle connections . we shall blur the distinction between a separating system and the closed subset formed by the union of its elements . [ def : lengths ] let @xmath2 be a closed riemann surface and @xmath3 a holomorphic quadratic differential on @xmath2 . let @xmath86 denote the length of the shortest saddle connection in @xmath2 . let @xmath87 denote the infimum of the @xmath3-lengths of simple closed curves in @xmath2 that do not bound a disk . let @xmath88 denote the length of the shortest separating system . observe that @xmath89 [ rem : poles ] our arguments can also be applied to the case where @xmath2 is a punctured riemann surface and @xmath3 has a simple pole at each puncture . a saddle connection is a geodesic segment that joins two singularities ( zero or puncture , and not necessarily distinct ) without passing through one in its interior . in the definition of @xmath90 the infimum should be taken over simple closed curves that neither bound a disk nor is homotopic to a puncture . [ prop : network ] let @xmath91 be a stratum of holomorphic quadratic differentials . there are positive constants @xmath92 and @xmath93 such that for any @xmath60 there exists @xmath20 such that for any area one surface @xmath94 satisfying 1 . @xmath95 , and 2 . @xmath0 admits a complete delaunay triangulation @xmath96 with the property that the length of every edge is either less than @xmath97 or at least @xmath63 there exists a @xmath52-network of @xmath62 embedded squares in @xmath0 such that each square has side @xmath63 . by a _ _ long _ ) edge we mean an edge in @xmath96 of length less than @xmath97 ( resp . at least @xmath63 . ) by a _ small _ ( resp . _ large _ ) triangle , we mean a triangle in @xmath96 whose edges are all short ( resp . long . ) assuming @xmath98 , we note that all remaining triangles in @xmath96 have one short and two long edges ; we refer to them as _ medium _ triangles . to each triangle @xmath99 that has a long edge , i.e. any medium or large triangle , we associate an embedded square of side @xmath63 as follows . 
note that the circumscribing disk @xmath100 has diameter at least @xmath63 . let @xmath91 be the largest square concentric with @xmath100 whose interior is embedded and let @xmath101 be the length of its diagonal . if the boundary of @xmath91 contains a singularity then @xmath102 otherwise , there are two segments on the boundary of @xmath91 that map to the same segment in @xmath2 and @xmath100 contains a cylinder core curve has length at most @xmath101 . the boundary of this cylinder forms a separating system of length at most @xmath103 , so that @xmath102 . in any case , there is an embedded square with side @xmath63 at the center of the disk @xmath100 and we refer to this square as the _ central square _ associated to @xmath104 . for each pair @xmath105 where @xmath104 is a medium or large triangle and @xmath106 is a long edge on its boundary , we associate an embedded square @xmath107 of side @xmath63 that contains the midpoint of @xmath106 as follows . the same argument as before ensures that @xmath107 exists . note that @xmath107 is @xmath52-reachable from the central square associated to @xmath104 for any @xmath54 . note also that if @xmath108 is the square associated to @xmath109 where @xmath110 is the other triangle having @xmath106 on its boundary , then the union of the circumscribing disks contains a rectangle that contains @xmath111 , so that @xmath108 is @xmath52-reachable from @xmath107 for any @xmath54 . let @xmath55 the collection of the central squares associated to any medium or large triangles @xmath104 together with all the squares associated with all possible pairs @xmath105 where @xmath106 is a long edge on the boundary of a medium or large traingle @xmath104 . the number of elements in @xmath55 is bounded above by some @xmath93 . [ large ] there is a universal constant @xmath112 such that the area of any large triangle is at least @xmath113 . let @xmath104 be a large triangle , @xmath114 the length of its shortest side and @xmath115 the angle opposite @xmath114 . the circumsribing disk @xmath100 has diameter given by @xmath116 . if @xmath101 is large enough , then @xmath100 contains a maximal cylinder @xmath117 whose height @xmath118 and waist @xmath119 are related by ( @xcite ) @xmath120 since the diameter of each component of @xmath121 is at most @xmath119 , there is a curve of length at most @xmath122 joining two vertices of @xmath104 . hence , @xmath123 . since each side of @xmath104 is at least @xmath63 we have @xmath124 let @xmath125 be the union of all small triangles and short edges in @xmath96 . its topological boundary @xmath126 is the union of all short edges . let @xmath127 be a component in the complement of @xmath125 . if @xmath127 contains a large triangle , then lemma [ large ] implies it is homotopically nontrivial as soon as @xmath128 , which holds if @xmath97 is small enough . otherwise , @xmath127 is a union of medium triangles and is necessarily homeomorphic to an annulus . the core of this annulus can neither bound a disk nor be homotopic to a punture . therefore , each component in the complement of @xmath125 is homotopically nontrivial . assuming @xmath97 is small enough so that @xmath129 , we conclude that the complement of @xmath125 is connected , from which it follows that @xmath56 is connected for any @xmath54 . if @xmath125 has empty interior , then @xmath2 is @xmath52-fully covered by @xmath55 for any @xmath54 and we are done . 
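( for the diameter formula invoked in the proof of the last lemma , recall the law of sines : a euclidean triangle with a side of length $a$ opposite an angle $\alpha$ has circumscribed circle of diameter $a/\sin\alpha$ , so in particular the circumscribing disk is at least as large in diameter as each side of the triangle . this standard fact is recalled here only because the exact expression is hidden behind the placeholder @xmath116 . )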
hence , assume @xmath125 has nonempty interior @xmath130 and note that @xmath130 is homotopically trivial , for otherwise @xmath126 would separate . to show that @xmath2 is @xmath52-fully covered by @xmath131 it is enough to show that for any @xmath132 we can find a vertical segment starting at @xmath29 , of length at most @xmath133 , and having a subsegment of length at least @xmath97 contained in some medium or large triangle . this will be achieved by the next three lemmas . [ ver1 ] the length of any vertical segment contained in @xmath130 is at most @xmath134 where @xmath135 is the number of small triangles . . then there exists a vertical segment @xmath106 of length @xmath134 contained in @xmath130 . since the length of any component of @xmath106 that lies in any small triangle is less than @xmath136 , there exists a small triangle @xmath104 that intersects @xmath106 in at least three subsegments @xmath137 . let @xmath138 be the midpoints of these segments and assume the indices are chosen so that @xmath139 lies on the arc along @xmath106 joining @xmath140 to @xmath141 . if @xmath142 and @xmath143 traverse @xmath104 in the same direction , we can form an essential simple closed curve in @xmath130 by taking the arc along @xmath106 from @xmath140 to @xmath139 and concatenating it with the arc in @xmath104 from @xmath139 back to @xmath140 . since @xmath130 is homotopically trivial , we conclude that @xmath142 and @xmath143 traverse @xmath104 in opposite directions . similarly , @xmath143 and @xmath144 traverse @xmath104 in opposite directions , so that @xmath142 and @xmath144 traverse @xmath104 in the same direction . let @xmath145 be the arc in @xmath104 that joins the midpoints of @xmath142 and @xmath144 . note that @xmath145 can not be disjoint from @xmath143 for otherwise we can form a essential simple closed curve by taking the union of @xmath145 with the arc along @xmath106 joining @xmath140 and @xmath141 . let @xmath146 be the point where @xmath145 intersects @xmath143 and note that we can form an essential simple closed curve by following arc along @xmath106 from @xmath140 to @xmath146 , followed by the arc in @xmath104 from @xmath146 to @xmath141 , followed by the arc along @xmath106 from @xmath141 back to @xmath146 , then back to @xmath140 along the arc in @xmath104 . in any case , we obtain a contradiction to the fact that @xmath130 is homotopically trivial and this contradiction proves the lemma . [ ver2 ] suppose @xmath106 is a vertical segment in the complement of @xmath125 which does not pass through any singularity , has length less than @xmath97 , and has each of its endpoints in the interior of some short edge in @xmath126 . then there is a finite collection of triangles such that @xmath106 is contained in the interior of their union and each triangle is formed by three saddle connections of lengths less than @xmath147 . let @xmath145 and @xmath148 be the short edges in @xmath126 that contain the endpoints of @xmath106 , respectively . let @xmath115 be a curve joining one endpoint of @xmath145 to the endpoint of @xmath148 on the same side of @xmath106 by following an arc along @xmath145 , then @xmath106 , and then another arc along @xmath148 . let @xmath149 be the curve formed using the remaining arc of @xmath145 followed by @xmath106 and then the remaining arc of @xmath148 . let @xmath150 ( resp . @xmath151 ) be the geodesic representatives in the homotopy class of @xmath115 ( resp . @xmath149 ) relative to its endpoints . 
both @xmath150 and @xmath151 is a finite union of saddle connections whose total length is less than @xmath152 . the union @xmath153 bounds a closed set @xmath117 whose interior can be triangulated using saddle connections , each of which joins a singularity on @xmath150 to a singularity on @xmath151 . the length of each such interior saddle connection is less than @xmath147 . the union of @xmath117 with the small triangles having @xmath145 and @xmath148 on their boundary contains @xmath106 in its interior . [ ver3 ] there is a @xmath154 such that any vertical segment of length @xmath155 intersects the complement of @xmath125 in a subsegment of length @xmath97 . let @xmath156 be the set of singularities of @xmath0 . by lemma [ ver1 ] , there is some @xmath157 such that the length of any vertical segment contained in @xmath158 is less than @xmath155 . let @xmath159 where @xmath160 is the total number of edges . suppose there exists a vertical segment @xmath106 of length @xmath155 such that each component in the complement of @xmath125 has length less than @xmath97 . then there are two subsegments of @xmath106 in the complement of @xmath125 that join the same pair of short edges in @xmath126 . let @xmath161 be the complex generated by saddle connections of length less than @xmath147 . ( see @xcite . ) its area is @xmath162 and its boundary is @xmath163 where the implicit constants depending only on @xmath91 . by lemma [ ver2 ] implies the interior of @xmath161 is homotopically nontrivial , implying that @xmath164 forms a separating system . if @xmath97 is sufficiently small , this contradicts @xmath95 . this is a contradiction proves the lemma . let @xmath165 be the total number of triangles . given @xmath132 we may form a vertical segment @xmath106 of length @xmath155 with one endpoint at @xmath29 . by lemma [ ver3 ] , there is a component of @xmath106 is the complement of @xmath125 whose length is at least @xmath97 . this component is a union of at most @xmath135 segments , each of which contained in some medium or large triangle . the longest such segment has length at least @xmath166 . hence , @xmath29 is @xmath52-reachable from the central square associated to the medium or large triangle that contains this segment , where @xmath167 . this complete the proof of proposition [ prop : network ] . boshernitzan s criterion @xcite is a consequence of masur s theorem @xcite by the first inequality . masur s theorem is a consequence of the following by the second inequality . [ thm : recurrent ] if @xmath168 then @xmath5 is uniquely ergodic . fix @xmath31 and @xmath169 such that @xmath170 for all @xmath37 . let @xmath171 be a complete delaunay triangulation of @xmath32 and let @xmath172 be the length of the @xmath77th shortest edge . by convention , we set @xmath173 for all @xmath37 . let @xmath174 be the unique index determined by @xmath175 by passing to a subsequence and re - indexing , we may assume there is a @xmath176 such that @xmath177 assume @xmath37 is large enough so that @xmath178 where @xmath97 is small enough as required by proposition [ prop : network ] with @xmath179 . the theorem now follows from propositions [ prop : masur ] . in this section we discuss the various notions of divergence and the rates of divergence . the hypothesis of theorem [ thm : main ] can be formulated in terms of the flat metric on @xmath2 without appealing to the forward evolution of the surface . 
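to orient the reader before the precise statements , here is the elementary computation behind such reformulations ( a sketch in our own notation , with the stretch factor normalized to $e^{t}$ ; the constants appearing in the equivalences below are hidden by the placeholders ) . writing $x$ and $y$ for the horizontal and vertical components of a saddle connection ( defined formally just below ) , its flat length at time $t$ is $\sqrt{e^{2t}x^2 + e^{-2t}y^2}$ ; when $x,y>0$ this is minimized at $e^{2t}=y/x$ , where it equals $\sqrt{2xy}$ . hence lower bounds on the length of the shortest saddle connection along the teichmller geodesic translate into lower bounds on the products $xy$ over saddle connections , and conversely , which is the mechanism behind the equivalence that follows .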
let @xmath180 and @xmath181 denote the horizontal and vertical components of a saddle connection @xmath106 , which are defined by @xmath182 it is not hard to show that the following statements are equivalent . 1 . there is a @xmath22 such that for all @xmath23 , @xmath183 2 . there is a @xmath112 such that for all @xmath23 , @xmath184 3 . there are constants @xmath185 and @xmath186 such that for all saddle connections @xmath106 satisfying @xmath187 , @xmath188 for any @xmath189 there are translation surfaces with nonergodic @xmath5 whose teichmller geodesic @xmath9 satisfies the sublinear slow rate of divergence @xmath190 . see @xcite . our main result asserts a logarithmic slow rate of divergence is enough to ensure unique ergodicity of @xmath5 . our interest lies in the case where @xmath191 as @xmath192 . to prove of theorem [ thm : main ] we shall need an analog of proposition [ prop : masur ] that applies to a continuous family of networks @xmath193 whose squares have dimensions going to zero . we also need to show that the slow rate of divergence gives us some control on the rate at which the small squares approach zero . a crucial assumption in the proof of theorem [ thm : recurrent ] is that the squares in the networks have area bounded away from zero . this allowed us to find generic points that persist in the squares of the networks along a subsequence @xmath31 . if the area of the squares tend to zero slowly enough as @xmath192 , one can still expect to find persistent generic points , with the help of the following result from probability theory . [ lem : pz ] * ( paley - zygmund @xcite ) * if @xmath194 be a sequence of measureable subsets of a probability space satisfying 1 . @xmath195 for all @xmath196 , and 2 . @xmath197 then @xmath198 . [ def : buffer ] we say a rectangle is @xmath115-_buffered _ if it can be extended in the vertical direction to a larger rectangle of area at least @xmath115 that overlaps itself at most once . ( by the area of the rectangle , we mean the product of its sides . ) [ prop : subseq ] suppose that for every @xmath23 we have an @xmath115-buffered square @xmath199 embedded in @xmath9 with side @xmath200 , @xmath201 . then there exists @xmath31 and @xmath202 such that @xmath203 satisfies @xmath195 for all @xmath196 and @xmath204 . let @xmath205 be any sequence satisfying the recurrence relation @xmath206 note that the function @xmath207 is increasing for @xmath208 and has inverse @xmath209 is increasing for @xmath210 , from which it follows that @xmath205 is increasing . we have @xmath211 let @xmath212 be a rectangle in @xmath32 that has the same width as @xmath213 and area at least @xmath115 . since @xmath214 overlaps itself at most once , @xmath215 . therefore , the height of @xmath214 is @xmath216 , which is less than @xmath217 times the height of the rectangle @xmath218 , by the choice of @xmath219 . let @xmath220 be the smallest rectangle containing @xmath221 that has horizontal edges disjoint from the interior of @xmath214 . its height is at most @xmath222 times that of @xmath221 . for each component @xmath25 of @xmath223 there is a corresponding component @xmath224 of @xmath225 ( see figure [ fig : buffer ] ) so that @xmath226 choose @xmath227 large enough so that @xmath228 for all @xmath37 and suppose that for some @xmath22 and @xmath229 we have @xmath230 . then @xmath231 so that @xmath204 . the condition @xmath232 ( corresponding to @xmath233 above ) holds for almost every direction in _ every _ teichmller disk . 
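for reference , the classical paley zygmund inequality , from which set - theoretic statements of the kind in lemma [ lem : pz ] are usually derived , reads : for a nonnegative random variable $Z$ with finite second moment and any $0\le\theta\le 1$ , \[ \mathbb{P}\left( Z \ge \theta\,\mathbb{E}Z \right) \ge (1-\theta)^2 \, \frac{(\mathbb{E}Z)^2}{\mathbb{E}[Z^2]} . \] it is typically applied to partial sums of indicator functions of the sets in question ; we recall it here only as background , since the exact hypotheses ( 1 ) ( 2 ) of the lemma are hidden behind the placeholders .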
@xcite where @xmath234 is the unique time when @xmath214 maps to a square under the composition @xmath235 of teichmller maps . ] [ def : strip ] assume the vertical foliation of @xmath0 is minimal . given a saddle connection @xmath106 , we may extend each critical leaf until the first time it meets @xmath106 . let @xmath236 be the union of these critical segments with @xmath106 . by a _ vertical strip _ we mean any component in the complement of @xmath236 . we refer to any segment along a vertical edge on the boundary of a vertical strip that joins a singularity to a point in the interior of @xmath106 as a _ zipper_. each vertical strip has a pair of edges contained in @xmath106 as well as a pair of vertical edges , each containing exactly one singularity . the number @xmath237 of vertical strips determined by a saddle connection depends only on the stratum of @xmath0 . thus , any rectangle containing the vertical strip with most area has area is a @xmath238-buffer for any square of the same width contained in it . see figure [ fig : strip ] . the condition ( [ strong : diophantine ] ) prevents the slopes of saddle connections from being too close to vertical . this allows for some control on the widths of vertical strips . [ prop : strip ] let @xmath186 , @xmath112 and @xmath20 be the constants of the diophantine condition satisfied by @xmath0 . let @xmath237 be the number of vertical strips determined by any saddle connection . for any @xmath239 and @xmath240 there exists a @xmath241 such that for any @xmath242 and any saddle connection @xmath106 in @xmath9 whose length is at most @xmath243 , the width of any vertical strip determined by @xmath106 is at least @xmath244 . without loss of generality we may assume @xmath245 . the value of @xmath227 is chosen large enough to satisfying various conditions that will appear in the course of the proof . in particular , we require @xmath227 be large enough so that for @xmath246 and any @xmath242 we have @xmath247 first , observe that for any saddle connection @xmath248 in @xmath9 @xmath249 indeed , if @xmath250 we can apply the diophantine condition to the saddle connection @xmath251 in @xmath2 that corresponds to @xmath248 to conclude @xmath252 we shall argue by contradiction and suppose that there is a vertical strip @xmath253 supported on @xmath106 whose width is @xmath254 . let @xmath142 be the saddle connection joining the singularities on its vertical edges . if @xmath255 , then ( [ ieq : dio ] ) implies @xmath256 . if @xmath257 we get the same conclusion by choosing @xmath227 large enough we shall consider only zippers that protrude from a fixed side of @xmath106 . using ( [ ieq : cj ] ) with @xmath258 we see that the height @xmath259 of the longer zipper on the boundary of @xmath253 satisfies @xmath260 suppose we have a contraption @xmath261 of vertical strips joined along zippers of height @xmath262 and such that on the boundary of the contraption there is a zipper of height @xmath263 satisfying @xmath264 if the total width @xmath265 of the contraption is less than that of @xmath106 , we can adjoin a vertical strip @xmath266 along the zipper of height @xmath263 . the new contraption @xmath267 contains an embedded parallelogram with a pair of vertical sides of length greater than the rhs of ( [ ieq : height ] ) and whose width equals the total width @xmath268 of the new contraption . therefore , @xmath269 let @xmath270 be the height of the longer zipper on the boundary of the new contraption . 
* claim : * @xmath271 if not , we can find a saddle connection @xmath272 that crosses from one vertical boundary of the union to the other , with vertical component satisfying @xmath273 by virtue of ( [ ieq : cj ] ) , assuming @xmath274 . but then ( [ ieq : dio ] ) implies @xmath275 which contradicts ( [ ieq : width ] ) , and thus establishes the claim . as soon as the total width of the new contraption equals that of @xmath106 , both zippers on the boundary are degenerate or have height zero , contradicting the claim . this contradiction implies the width of @xmath253 is at least @xmath244 . [ thm : slow : div ] there exists @xmath20 depending only on the stratum of @xmath0 such that @xmath276 implies @xmath277 is uniquely ergodic . let @xmath278 be a complete delaunay triangulation of @xmath9 . if a triangle @xmath279 has an edge @xmath106 of length @xmath280 then the circumscribing disk contains a maximal cylinder @xmath117 that is crossed by @xmath106 and such that @xmath281 where @xmath118 and @xmath282 are the height and circumference of the cylinder @xmath117 ( @xcite ) . since @xmath283 a long delaunay edge will cross a cylinder of large modulus . let @xmath243 and @xmath284 be chosen so that a delaunay edge of length @xmath285 crosses a cylinder of modulus @xmath286 and assume @xmath284 is large enough so that the cylinder crossed by the delaunay edge is uniquely determined . for each delaunay edge @xmath106 of length at most @xmath243 , let @xmath104 and @xmath287 be the delaunay triangles that have @xmath106 on their boundary , and let @xmath100 and @xmath288 be the respective circumscribing disks . applying proposition [ prop : strip ] we can find a vertical strip of area at least @xmath63 ( @xmath289 ) which is contained in some immersed rectangle that contains a square @xmath91 of side @xmath290 centered at some point on the equator of @xmath100 as well as a square @xmath107 of the same size centered at some point on the equator of @xmath288 . both @xmath91 and @xmath107 are @xmath63-buffered and @xmath52-reachable from each other for any @xmath54 . call an edge in @xmath278 _ long _ if it crosses a cylinder of modulus @xmath286 . call a triangle in @xmath278 _ thin _ if it has two long edges . we note that if a triangle in @xmath278 has any long edges on its boundary , then it has exactly two such edges . each cylinder @xmath117 of modulus @xmath286 determines a collection of long edges and thin triangles whose union contains @xmath117 . moreover , the intersection of the disks circumscribing the associated thin triangles contains a @xmath291-neighborhood of the core curve of @xmath117 . for each such @xmath117 , choose a saddle connection on its boundary and apply proposition [ prop : strip ] to construct a @xmath63-buffered square @xmath108 of side @xmath292 centered at some point on the core curve of @xmath117 . let @xmath193 be the collection of all squares @xmath91 and @xmath107 associated with edges in @xmath278 of length at most @xmath243 together with all the squares @xmath108 associated with cylinders of modulus @xmath286 . it is easy to see that @xmath193 is a @xmath52-network for any @xmath54 and the number of elements in @xmath193 is bounded above by some constant @xmath62 that depends only on the stratum . choose @xmath97 so that @xmath293 and let @xmath31 be the sequence given by proposition [ prop : subseq ] , and note that @xmath294 by the choice of @xmath97 . let @xmath295 enumerate the elements of @xmath61 , using repetition if necessary .
applying lemma [ lem : pz ] to the subsets @xmath296 of the probability space @xmath297 , we obtain an @xmath62-tuple of generic points @xmath298 with the property that for infinitely many @xmath37 , @xmath299 for all @xmath77 . by passing to a subsequence , we may assume this holds for every @xmath37 . let @xmath24 be the ergodic component that contains @xmath71 . since @xmath81 is connected , we can find for each @xmath37 an @xmath300 such that @xmath301 is @xmath52-reachable from @xmath302 . after re - indexing , if necessary , we may assume @xmath303 is @xmath52-reachable from @xmath302 for infinitely many @xmath37 , and by further passing to a subsequence , we may assume this holds for every @xmath37 . by lemma [ lem : visible ] it follows that @xmath72 belongs to the ergodic component @xmath24 . given @xmath304 we can find for each @xmath37 an @xmath305 such that @xmath306 is @xmath52-reachable from @xmath307 . after re - indexing and passing to a subsequence , we may assume that @xmath308 is @xmath52-reachable from @xmath307 . proceeding inductively , we deduce that each @xmath75 belongs to the ergodic component @xmath24 . since @xmath32 is @xmath52-fully covered by @xmath61 , given any generic point @xmath309 , we can find for each @xmath37 an @xmath75 such that @xmath310 is @xmath52-visible from @xmath301 . for some @xmath77 this holds for infinitely many @xmath37 , so that by lemma [ lem : visible ] , @xmath309 belongs to the same ergodic component as @xmath75 , i.e. @xmath24 . this shows that @xmath5 is uniquely ergodic . given any function @xmath311 as @xmath192 there exists a teichmüller geodesic @xmath9 determined by a nonergodic vertical foliation such that @xmath312 . see @xcite . in the case of double covers of the torus branched over two points , it is possible to give a complete characterisation of the set of nonergodic directions . this allows us to obtain an affirmative answer to a question of w. veech ( @xcite , p.32 , question 2 ) . we briefly sketch the main ideas of this argument . let @xmath0 be the double of the flat torus @xmath313,dz^2)$ ] along a horizontal slit of length @xmath314 . let @xmath315 be the endpoints of the slit . the surface @xmath0 is a branched double cover of @xmath316 , branched over the points @xmath317 and @xmath318 . assume @xmath319 . ( if @xmath320 , the surface is square - tiled , hence veech ; in this case , a direction is uniquely ergodic iff its slope is irrational . ) let @xmath321 be the set of holonomy vectors of saddle connections in @xmath2 . then @xmath322 where @xmath161 is the set of holonomy vectors of simple closed curves in @xmath316 and @xmath323 is the set of holonomy vectors of the form @xmath324 where @xmath325 . we refer to saddle connections with holonomy in @xmath323 as _ slits _ and those with holonomy in @xmath161 as _ loops_. given any slit @xmath326 , there is a segment in @xmath316 joining @xmath317 to @xmath318 whose holonomy vector is @xmath119 . the double of @xmath316 along this segment is a branched cover that is biholomorphically equivalent to @xmath0 if and only if @xmath327 ( see @xcite . ) in this case , the segment lifts to a pair of slits that are interchanged by the covering transformation @xmath328 and the complement of their union is a pair of slit tori also interchanged by the involution @xmath145 . a slit is called _ separating _ if its holonomy lies in @xmath329 ; otherwise , it is _ non - separating_. fix a direction @xmath330 and let @xmath331 be the foliation in direction @xmath330 .
let @xmath9 be the teichmüller geodesic determined by @xmath331 . let @xmath18 denote the length of the shortest saddle connection measured with respect to the sup norm . assume @xmath332 , for otherwise the length of the shortest separating system is bounded away from zero along some sequence @xmath31 and theorem [ thm : recurrent ] implies @xmath331 is uniquely ergodic . note that the shortest saddle connection can always be realised by either a slit or a loop , and if @xmath234 is sufficiently large , we may also choose it to have holonomy vector with positive imaginary part . note also that @xmath333 is a piecewise linear function of slopes @xmath334 . let @xmath335 be the sequence of slits or loops that realise the local minima of @xmath336 as @xmath192 . we assume @xmath330 is minimal so that this is an infinite sequence . if @xmath335 is a loop , then @xmath337 must be a slit since two loops cannot be simultaneously short . if @xmath335 and @xmath337 are both slits , then the length of the vector @xmath338 at the time when @xmath335 and @xmath337 have the same length ( with respect to the sup norm ) is less than twice the common length . it follows that @xmath339 so that either @xmath335 or @xmath337 is non - separating . note that @xmath340 is the shortest loop at this time . if @xmath335 ( resp . @xmath337 ) is non - separating , then at the first ( resp . last ) time when @xmath340 is the shortest loop , there is another loop @xmath341 that forms an integral basis for @xmath342 $ ] together with @xmath340 . since the common length of these loops is at least one , @xmath335 ( resp . @xmath337 ) is the unique slit or loop of length less than one and it follows that the length of the shortest separating system is at least one . if there exists an infinite sequence of pairs of consecutive slits , then we may apply theorem [ thm : recurrent ] to conclude that @xmath331 is uniquely ergodic . it remains to consider the case when the sequence of shortest vectors alternates between separating slits and loops @xmath343 with increasing imaginary parts . note that @xmath344 is an even positive multiple of @xmath335 , say @xmath345 . the surface @xmath9 at the time @xmath346 when @xmath335 is shortest ( slope @xmath334 ) can be described quite explicitly . the slit @xmath265 is almost horizontal while @xmath268 is almost vertical . the area exchange between the partitions determined by @xmath265 and @xmath268 is approximately @xmath347 . the surface can be represented as a sum of tori slit along @xmath268 , each containing a cylinder having @xmath335 as its core curve and occupying most of the area of the slit torus . using this representation , we can find a single buffered square @xmath348 with side @xmath349 which , together with its image under @xmath145 , forms a @xmath52-network ( for any @xmath54 ) . a straightforward calculation shows that the sequence @xmath350 satisfies @xmath351 for all large enough @xmath352 . lemma [ lem : pz ] now implies @xmath331 is uniquely ergodic if @xmath353 .
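the statement of lemma [ lem : pz ] above survives only through placeholder symbols , so the short python sketch below is offered merely as a toy illustration of the second - moment ( paley - zygmund type ) inequality that such arguments rest on : for a nonnegative random variable x , the probability that x is positive is at least ( e x )^2 / e ( x^2 ) . the interval family , the value of delta , and the monte carlo parameters are arbitrary choices made for illustration and do not come from the paper .

```python
import random

# Toy illustration of the second-moment (Paley-Zygmund type) bound underlying
# lemma [lem:pz]: for a nonnegative random variable X,
#     P(X > 0) >= (E[X])^2 / E[X^2].
# Here X counts how many of the sets A_1, ..., A_n contain a uniformly random
# point of [0, 1). The sets are arbitrary intervals chosen for illustration.

random.seed(0)
n, delta = 50, 0.01                      # 50 intervals, each of length 0.01
starts = [random.random() * (1 - delta) for _ in range(n)]
intervals = [(a, a + delta) for a in starts]

def count_hits(x):
    """Number of intervals A_i that contain the point x."""
    return sum(1 for (a, b) in intervals if a <= x < b)

trials = 50_000
samples = [count_hits(random.random()) for _ in range(trials)]

mean = sum(samples) / trials
second_moment = sum(s * s for s in samples) / trials
prob_positive = sum(1 for s in samples if s > 0) / trials
bound = mean ** 2 / second_moment        # Paley-Zygmund lower bound at level 0

print(f"P(X > 0)         ~ {prob_positive:.3f}")
print(f"(E X)^2 / E[X^2] ~ {bound:.3f}   (lower bound)")
# The inequality also holds exactly for the empirical distribution (Cauchy-Schwarz).
assert prob_positive >= bound - 1e-12
```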
in @xcite masur showed that a teichmüller geodesic that is recurrent in the moduli space of closed riemann surfaces is necessarily determined by a quadratic differential with a uniquely ergodic vertical foliation . in this paper , we show that a divergent teichmüller geodesic satisfying a certain slow rate of divergence is also necessarily determined by a quadratic differential with a uniquely ergodic vertical foliation . as an application , we sketch a proof of a complete characterization of the set of nonergodic directions in any double cover of the flat torus branched over two points .
Confusion over similar-sounding flight numbers is under investigation as the cause of a close call between two airliners taking off on intersecting runways at Midway Airport, an aviation source said Wednesday. A Southwest Airlines plane, Flight 3828, was cleared for takeoff on runway 31 Center on Tuesday night, according to the Federal Aviation Administration. As Flight 3828 began its takeoff roll, Delta Air Lines Flight 1328 also began rolling on runway 4 Right without clearance from the Midway tower, according to the preliminary FAA investigation. Both planes were operating at full takeoff power, which is standard procedure on the Southwest Side airport's relatively short runways, the source said. The controller directing both planes spotted the Delta plane's movement and ordered the Delta pilots to stop immediately, the FAA said. The Southwest pilots also hit the brakes. Each aircraft traveled about one-third down its respective runway, stopping 2,000 to 3,000 feet short of the runway intersection between 31 Center and 4 Right, the source said. Before the incident, a Midway ground controller had notified both pilots about the similar and potentially confusing flight numbers of the two planes waiting to depart, and he advised the pilots to listen carefully to radio calls, according to tapes of the incident, which are on http://www.LiveATC.net. On the transmission, the tower controller is heard clearing Southwest Flight 3828 for takeoff. But when the Southwest pilot radios back confirmation, his voice is obscured, or "stepped on," by a dual transmission, apparently from Delta Flight 1328. During a dual transmission, each pilot hears only some of what is being communicated. (Photo: Passengers wait to travel at Midway Airport in July 2012. A Southwest Airlines plane, cleared for takeoff on Midway runway 31 Center on June 16, 2015, came close to colliding with a Delta Air Lines flight that had not been cleared for takeoff. Zbigniew Bzdak, Chicago Tribune) A second dual transmission then occurs, and seconds later, as the Southwest plane is accelerating down runway 31 Center, the Delta plane is also on a takeoff roll on intersecting runway 4 Right, sources said. A controller abruptly yells, "Stop, stop, stop, stop!" A pilot says, "Aborting," and another pilot announces, "SWA stopping." After both planes safely make an emergency stop short of the intersection, the Southwest pilot is heard on the tape asking the control tower whether he mishandled his takeoff clearance. "Were we the ones cleared for takeoff?" The controller responds, "Yes, sir, you were. You were doing what you were supposed to be doing." "Delta took our —. Delta was rolling also?" the pilot asks. "He took your call signs," the controller says. "Somebody kept stepping on you. I couldn't figure out who it was, that's why I reiterated that it was you that I was calling for takeoff." The controller is also heard providing an FAA phone number and advising the Delta captain to report a pilot deviation. The Southwest plane taxied back to the terminal for a safety check because of overheated brakes from the emergency stop, the source said. The plane ended up leaving for Tulsa, Okla.
where it landed safely, according to a Southwest spokesman. The Delta crew also contacted its company, but the plane quickly got back into the departure line and flew out of Midway to Atlanta, officials said. Delta released a statement saying that it was "fully cooperating with the FAA's investigation." The Delta plane, a Boeing 717-200, is designed to carry 110 passengers. It was not immediately known how many passengers were on the plane. The Southwest plane was a Boeing 737, and had 139 passengers and five crew members on board, according to Southwest officials. The National Transportation Safety Board, which has investigated runway incursions at O'Hare International Airport in recent years, is expected to assign investigators to examine the incident at Midway, officials said. Runway incursions involving planes taxiing on the airfield, taking off or landing are a top safety concern — much more so than the risk of a midair collision — and the FAA has worked with airports and airlines to reduce the danger at airports, officials said. The FAA, as part of its NextGen air traffic modernization program, is developing a program to provide data communications between pilots and air traffic controllers in place of voice communications for many routine functions. Data communications will improve safety by reducing communications errors that can occur during verbal exchanges, the FAA said, particularly involving pilots for whom English is a second language, as well as cut the time spent exchanging information and reconfirming it. The new data system, which is likely years off, would also reduce delays, fuel burn and pollution, according to the agency. Chicago Tribune's Carlos Sadovi contributed. jhilkevitch@tribpub.com Twitter @jhilkevitch ||||| (CNN) Two commercial planes almost collided at Chicago Midway International Airport Tuesday night, the Federal Aviation Administration confirmed. Southwest flight 3828 was cleared for takeoff and started on the runway, but simultaneously Delta Air Lines 1328 also began rolling on an intersecting runway without proper clearance, according to a statement released by the FAA. It caused a potentially harrowing situation, as two planes should never be rolling for takeoff on intersecting runways. The on-duty air traffic controller noticed the two planes headed toward the intersection, immediately warning the Delta flight. The controller can be heard yelling, "stop, stop, stop!" in the audio recording of the incident, which occurred at about 7:40 p.m. on Tuesday. Both flights stopped 2,000 feet from the runway intersection; later, the Southwest flight went on to Tulsa with no additional problems.
– Two commercial planes flirted with disaster tonight after beginning their takeoffs on intersecting runways at Chicago Midway International Airport, the Chicago Tribune reports. "Stop, stop, stop, stop!" an air controller can be heard shouting on a recording of the incident posted at LiveATC.net. The two pilots—one on Southwest flight 3828, the other on Delta Air Lines 1328—were each going at full takeoff power when they hit the brakes 2,000 to 3,000 feet short of the runway intersection. It happened at about 7:40pm, CNN reports. "Were we the ones cleared for takeoff?" asks the Southwest pilot. Air controller: "Yes, sir, you were. You were doing what you were supposed to be doing." Southwest Pilot: "Delta took our —. Delta was rolling also?" Controller: "He took your call signs. Somebody kept stepping on you." From the transmission, it sounds like the planes' similar numbers created confusion. The controller had even warned the pilots about this ahead of time, but when the Southwest pilot radioed back to confirm his takeoff, his voice was "stepped on" or obscured by a second transmission that may have come from the Delta flight. The controller can later be heard telling the Delta captain to report his mistake. "Just knowing that there were two aircraft of that size headed to the same intersection at the same time … this could have been catastrophic," a commercial pilot tells CBS Chicago. The National Transportation Safety Board will likely investigate.
The United States is one of the world’s most prosperous economies, with a gross domestic product that exceeded that of any other country last year. However, a vibrant economy alone does not ensure all residents are well off. In a recent study from the Organisation for Economic Co-operation and Development (OECD), U.S. states underperformed their regional counterparts in other countries in a number of important metrics that gauge well-being. The OECD’s newly released study, “How’s Life in Your Region?: Measuring Regional and Local Well-Being for Policy Making,” compares nine important factors that contribute to well-being. Applying an equal weight to each of these factors, 24/7 Wall St. rated New Hampshire as the best state for quality of life. Monica Brezzi, author of the report and head of regional statistics at the OECD, told 24/7 Wall St. that considering different dimensions of well-being at the regional level provides a way to identify “where are the major needs where policies can intervene.” Brezzi said that, in some cases, correcting one truly deficient measure can, in turn, lead to better results in others. In order to review well-being at the regional level, the OECD used only objective data in its report, rather than existing survey data. Brezzi noted that current international studies that ask people for their opinion on important measures of well-being often do not have enough data to be broken down by region. For example, one of the nine measures, health, is based on the mortality rate and life expectancy in each region, rather than on asking people if they feel well. Similarly, another determinant of well-being, safety, is measured by the homicide rate rather than personal responses as to whether people feel safe where they live. Based on her analysis, Brezzi identified one area where American states are exceptionally strong. “All the American states rank in the top 20% of OECD regions in income,” Brezzi said. Massachusetts — one of 24/7 Wall St.’s highest-rated states — had the second-highest per capita disposable household income in the nation, at $38,620. This also placed the state among the top 4% of regions in all OECD countries. However, the 50 states are also deficient in a number of key metrics for well-being. “With the exception of Hawaii, none of the American states are in the top 20% for health or for safety across the OECD regions,” Brezzi said. Minnesota, for instance, was rated as the third best state for health, with a mortality rate of 7.5 deaths per 1,000 residents and a life expectancy of 81.1 years. However, this only barely placed Minnesota among the top third of all regions in the OECD. Similarly, New Hampshire — which was rated as the safest state in the country, and was 24/7 Wall St.’s top state for quality of life — was outside the top third of all regions for safety. Across most metrics the 50 states have improved considerably over time. Only one of the nine determinants of well-being, jobs, had worsened in most states between 2000 and 2013. Brezzi added that not only was the national unemployment rate higher in 2013 than in 2000, but “this worsening of unemployment has also come together with an increase in the disparities across states.” Based on the OECD’s study, “How’s Life in Your Region?: Measuring Regional and Local Well-being for Policy Making,” 24/7 Wall St. identified the 10 states with the best quality of life.
We applied an equal weight to each of the nine determinants of well-being — education, jobs, income, safety, health, environment, civic engagement, accessibility to services and housing. Each determinant is constituted by one or more variables. Additional data on state GDP are from the Bureau of Economic Analysis (BEA), and are current as of 2013. Further figures on industry composition, poverty, income inequality and health insurance coverage are from the U.S. Census Bureau’s 2013 American Community Survey. Data on energy production come from the Energy Information Administration (EIA) and represent 2012 totals. These are the 10 states with the best quality of life. ||||| It’s official – Australia is the best country in the world to live…while Canberra is once again the best city. Australia was voted the best country to live in the world, while the nation's capital, Canberra, is the best place to live. Oz scored 76.5 out of 90 for the OECD's nine well-being measures, and the country received full marks for civic engagement. The ACT scored a top mark of 86.2, well above Australia's average, with 10-out-of-10 for Safety, Income and Civic Engagement. The OECD ranked indicators from more than 360 other regions in the world; the Northern Territory fared as Australia's worst region on safety and health. What's the best country to live in the world? Well, it's none other than Australia; as for the best place to live, that gong goes to the country's capital, Canberra. The OECD, the Organisation for Economic Co-operation and Development, has ranked the 362 regions of its 34 member nations according to nine measures of well-being, which include income and education. Each was given a score out of ten. Australia led the world ranking with a tidy 76.5 out of 90, followed by Norway at 72.3, Canada, Sweden and then the US. The country that didn't fare so well was Mexico, which ranked as the worst country to live in with a measly 15.1 out of 90, scoring zero for safety, housing and accessibility to services. Poland, Hungary and Turkey were also judged as some of the hardest places in the world to live. AUSTRALIA'S SCORECARD All well-being scores out of 10 *Education - 6.6 *Jobs - 8.4 *Income - 7.3 *Safety - 9.8 *Health - 9 *Environment - 9.5 *Civic Engagement - 10 *Accessibility to services - 7.2 *Housing - 8.7 An earlier version of the report was released in June but since then it has been updated to include one more measure of well-being, which is access to affordable and quality housing. That got a commendable score of 8.7 out of 10, while civic engagement took out the honours with full marks of 10; following closely behind were safety, environment and health. Even the low performing regions in Australia fare better than the OECD average in all of the well-being measures. But surprisingly Australia will have to work on its education, which received a 6.6, well behind countries such as Canada, the Czech Republic and Israel, which recorded a high 9. The share of the workforce with at least a secondary degree in the bottom 20 per cent of regions in Australia is 13 percentage points lower than the OECD average.
The OECD has pointed out that large regional disparities in education, health, jobs and key services can damage economic growth, which in turn will lower well-being outcomes at a national level. While some would argue that Sydney has the best beaches and harbour while Melbourne's pubs and cafes are a talking point, the nation's best place to live and work is our often mocked national capital, Canberra. The territory received three top 10 scores, more than any other region included on the list, for safety, income and civic engagement. Even its lowest score, for the environment, was still a massive 9.1. Across all nine indicators, Canberra's total score was 86.2 out of a possible 90, thrashing the country's well-being average. As the political heartland of the country, Canberra, along with six other of our states and territories, received a top score of 10 for its civic engagement - meaning Australia is more in tune with its political system than just about anywhere else. The high voter engagement figure, about 95 per cent, was taken from the 2013 Federal Election and largely reflects the fact that voting in Australia is mandatory. Australia had the highest average voter turnout of any region based on the previous national election. Voter turnout is defined as the ratio of the number of voters to the number of persons with voting rights. The statistics also confirmed the capital has a higher average income than the 360 regions identified, and in terms of its murder rate - that's practically non-existent. Canberra has the lowest murder rate across Australia, averaging just one in 100,000 people. It's commonly assumed that Sydney, and not Canberra, is Australia's capital city. Despite its knockers, Canberra seems to be making its place on the world stage. A New York Times article recently told its readers that while it does not compare to the 'glitzier city of Sydney' and 'there are no beaches or iconic opera houses' it does have 'big-sky beauty, breezy civic pride and a decidedly hipster underbelly.' The best way to enjoy Canberra, the Times said, is 'with deep intakes of mountain air and an ear tuned to the calls of sulphur-crested cockatoos and crimson rosellas.'
SOME OF THE LESS KNOWN ATTRACTIONS ON CANBERRA'S TO-DO LIST: The Canberra Centre, a three-storey shopping complex that is Civic's main shopping precinct; Glebe Park, a picturesque park near the city centre with elm trees and oaks from early European settlement; The Jolimont Centre, a bus terminal for Greyhound Australia and Murrays bus services; Garema Place and City Walk, open areas of Civic for pedestrian traffic with many outdoor cafes (one of the longest running cafes in Civic is Gus's Cafe on Bunda Street); Westfield Woden, a large shopping centre located in the Woden Town Centre in Canberra's suburb of Phillip; and Manuka Oval, one of two of Canberra's sporting stadiums, with a capacity to seat 13,550. Unlike most cities, Canberra was purposefully built as the nation's capital, and the road network is built with an incredibly high number of roundabouts. HOW CANBERRA ROSE TO THE TOP: Income 10/10, Safety 10/10, Civic Engagement 10/10, Health 9.9/10, Accessibility to services 9.6/10, Jobs 9.6/10, Environment 9.5/10, Education 9.1/10, Housing 8.5/10. The Times also recommended going for a bicycle ride that loops one of the capital's premier attractions, its lake. But it isn't just any lake: it was named after Chicago man Walter Burley Griffin, who designed the capital, built between Melbourne and Sydney. The war museum in Canberra, it was revealed recently, is Australia's number one tourist attraction. Tasmania's score for education is 5.6, putting it in the bottom 27 per cent of the OECD regions. The ACT is high on the list, in the top 19 per cent, but when it comes to the country's natural environment, look no further than Tasmania, NSW and QLD, which all score a perfect 10 out of 10. The Northern Territory doesn't fare quite as well as its neighbouring states and territories, scoring just 4.1 for health and 1.4 for safety. Its health score is in the OECD's bottom 29 per cent, while its safety score is in the bottom 13 per cent.
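The scorecards quoted above make the OECD arithmetic easy to verify: nine well-being measures, each scored out of 10, are combined with equal weight into a total out of 90. The short Python sketch below is only an illustration of that calculation, with the subscores transcribed from the Australia and Canberra scorecards printed in the article; the variable names and the rounding are illustrative choices, not part of the OECD report.

```python
# Quick arithmetic check of the equal-weight composite scores quoted above:
# nine well-being measures, each out of 10, summed for a total out of 90.
# The numbers are transcribed from the scorecards in the article.

australia = {
    "education": 6.6, "jobs": 8.4, "income": 7.3,
    "safety": 9.8, "health": 9.0, "environment": 9.5,
    "civic engagement": 10.0, "accessibility to services": 7.2, "housing": 8.7,
}

canberra_act = {
    "income": 10.0, "safety": 10.0, "civic engagement": 10.0,
    "health": 9.9, "accessibility to services": 9.6, "jobs": 9.6,
    "environment": 9.5, "education": 9.1, "housing": 8.5,
}

def composite(scores):
    """Equal-weight composite: sum of the nine 0-10 scores (maximum 90)."""
    assert len(scores) == 9
    return round(sum(scores.values()), 1)

print("Australia :", composite(australia), "/ 90")     # 76.5, as reported
print("Canberra  :", composite(canberra_act), "/ 90")  # 86.2, as reported
```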
– An international economic group has named the best region on the planet to live, and it's one whose reputation hasn't always been sterling, the BBC reports: Canberra, Australia. The Organization for Economic Cooperation and Development, which has 34 member countries, calls the Australian Capital Territory the best in the world, though Australia's own Herald Sun says it's "more like capital punishment." Still, the top country overall is also Australia. The Regional Well-Being report is based on factors like local income, environment, and safety, the BBC notes. Among its other winners and losers: Among countries, Norway comes in second, while Canada, Sweden, and the US round out the top five, the Daily Mail reports. All 10 of the lowest-ranked regions are Mexican states, while the countries at the bottom of the list are Mexico, Turkey, Hungary, Poland, and Slovakia. Data from the study puts New Hampshire as the US state with the highest quality of life, followed by Minnesota, Vermont, Iowa, and North Dakota, 24/7 Wall St. reports. Check out your region's score via the OECD's site, or find out the world's happiest country.
The head of the Sicilian mafia sent a hit man to America to arrange the murder of Rudy Giuliani in the 1980s, Italian news media reported. “Boss of bosses” Salvatore Riina targeted Giuliani because of his friendship with an anti-mafia judge in Sicily, Giovanni Falcone. The plot to kill Giuliani, while he was US Attorney for the Southern District of New York, came to light during a trial in Sicily in which former Italian government officials are accused of agreeing to a peace pact with the mob in the 1990s. During the trial in Palermo, it was disclosed that a former mafioso, Rosario Naimo, moved to the United States in 1965 and became the Sicilian mob’s senior representative in the US. Naimo told investigators that a bloodthirsty mobster named Benedetto “The Beast” Villico arrived from Italy with a letter from Riina that ordered the assassination of Giuliani, the news agency ANSA reported. But Riina dropped the plan when he realized that US law enforcement could retaliate by destroying the Sicilian mob, the website Palermo Today said. Falcone wasn’t as lucky. He was assassinated by Riina’s men in 1992. Giuliani has said in the past that he was targeted while he was mayor. “They offered $800,000 to kill me,” he said last year in an interview with Oprah Winfrey. “Then toward the end of the time I was mayor, a particular mafia guy, who was convicted and put in jail for 100 years, put out a contract to kill me for $400,000,” he said. “I kind of felt bad that I went down in value.” He appeared to be referring to Riina, who is serving multiple life sentences. ||||| The Sicilian mafia wanted Rudy Giuliani to sleep with the fishes. Crime boss Salvatore “Toto” Riina ordered the hit on Giuliani back in the 1980s when the future New York City mayor was a mob-busting federal prosecutor, Italian newspapers are reporting. The source of that story is mafia turncoat Rosario Naimo, who spilled his secrets this week to Palermo-based prosecutors. Naimo said Riina was alarmed that Giuliani was joining forces with Sicily-based prosecutors like the legendary Giovanni Falcone, the papers reported. So Riina dispatched Naimo to New York City and told him to get the okay from the Gambino crime family to rub out Giuliani. No way, said the alarmed Gambinos, who warned Naimo that if Giuliani was murdered the U.S. government would “annihilate the mafia,” the papers reported. Giuliani was out of the country Friday and not immediately reachable for comment about the reports, a spokesman said. But back in November, the failed Republican presidential candidate told Oprah Winfrey the mafia took out a contract on him during his first year as mayor. “They offered $800,000 to kill me,” he said. “Then, toward the end of the time I was the mayor, a particular mafia guy who we convicted and put in jail for 100 years put out a contract to kill me for $400,000.” Then Giuliani joked, “I kind of felt bad that I went down in value.” Giuliani might want to count his blessings. Falcone, his wife and three cops were killed in 1992 by a massive roadside bomb outside Palermo. Riina, who was the “boss of bosses” of Cosa Nostra, was imprisoned for multiple murders and other crimes a year later. Short of stature and temper, Riina was born and raised in Corleone — a town made famous by the Godfather movies — and nicknamed “The Beast” by fellow mobsters. csiemaszko@nydailynews.com
– The Sicilian mob hated Rudy Giuliani so much when he worked as a federal prosecutor in New York back in the 1980s that it sent a hit man to the US to get rid of him, reports the New York Post. Similar reports have surfaced before, but a mafia turncoat fleshed out the details this week during a trial in Italy. It seems that the head of the Sicilian mafia, Salvatore "Toto" Riina, personally ordered the hit and sent a lackey from Italy to the US with a letter ordering it to take place. Luckily for Giuliani, the Gambino crime family vetoed the idea, knowing that the feds would strike back hard if one of their own were assassinated, reports the Daily News. Giuliani drew the ire of the mob not only for his own crackdown on organized crime but for his close association with an anti-mafia judge in Sicily named Giovanni Falcone. The judge, incidentally, was murdered by mob hitmen in 1992. Giuliani himself has spoken of such plots before, telling Oprah Winfrey last year that he knew of two contracts against him as mayor.
SECTION 1. EMPLOYEE SURVEYS. (a) In General.--Chapter 14 of title 5, United States Code, is amended by adding at the end the following: ``Sec. 1403. Employee surveys ``(a) In General.--Each agency shall conduct an annual survey of its employees (including survey questions unique to the agency and questions prescribed under subsection (b)) to assess-- ``(1) leadership and management practices that contribute to agency performance; and ``(2) employee satisfaction with-- ``(A) leadership policies and practices; ``(B) work environment; ``(C) rewards and recognition for professional accomplishment and personal contributions to achieving organizational mission; ``(D) opportunity for professional development and growth; and ``(E) opportunity to contribute to achieving organizational mission. ``(b) Regulations; Notice.-- ``(1) In general.--The Director of the Office of Personnel Management shall issue regulations prescribing survey questions that should appear on all agency surveys under subsection (a) in order to allow a comparison across agencies. ``(2) Notice of change to regulations.-- ``(A) In general.--The Director of the Office of Personnel Management may not issue a regulation under this section until the date that is 60 days after the date on which the Director submits such regulation to the Committee on Oversight and Government Reform of the House of Representatives and the Committee on Homeland Security and Governmental Affairs of the Senate unless the Director submitted such regulation to those committees not later than the day after the date on which the notice of proposed rulemaking is published in the Federal Register. ``(B) Applicability.--Subparagraph (A) shall apply with respect to any regulation promulgated on or after the date of enactment of this paragraph. ``(3) Notice of change to survey questions.--Not later than 60 days before finalizing any change, addition, or removal to any survey question in the annual employee survey administered by the Office pursuant to this section, the Director shall-- ``(A) make the proposed change, addition, or removal and the proposed final text, if applicable, of any such question publicly available on the agency's website; and ``(B) provide to the Committee on Oversight and Government Reform of the House of Representatives and the Committee on Homeland Security and Governmental Affairs of the Senate-- ``(i) the proposed change, addition, or removal and the proposed final text, if applicable, of any such question; ``(ii) a justification for the proposed change, addition, or removal; and ``(iii) an analysis of whether the change, addition, or removal will affect the ability to compare results from surveys taken after the change, addition, or removal is implemented with results from surveys taken before the change, addition, or removal is implemented. ``(c) Occupational Data.--To the extent practicable, the Director of the Office of Personnel Management shall, in publishing agency survey data collected under subsection (a), include responses to such surveys by occupation. In carrying out this subsection the Director shall ensure the confidentiality of any agency survey respondent. 
``(d) Survey Incentives.--In conjunction with each annual survey required under subsection (a), the head of each agency shall submit to the Director of the Office of Personnel Management information on any monetary, in-kind, leave-related, or other incentive offered to employees in exchange for participation in the survey, including a description of the type of each such incentive offered and the quantity of each such incentive provided to employees. ``(e) Availability of Results.--The results of the agency surveys under subsection (a) shall be made available to the public and posted on the website of the agency involved, unless the head of such agency determines that doing so would jeopardize or negatively impact national security. ``(f) Agency Defined.--In this section, the term `agency' has the meaning given the term Executive agency in section 105.''. (b) Applicability.-- (1) The requirements of section 1403 of title 5, United States Code (as added by this Act) shall apply with respect to any annual survey initiated on or after the date of enactment of this Act. (2) Any annual survey authorized by, and meeting the requirements of, section 1128 of the National Defense Authorization Act for Fiscal Year 2004 (Public Law 108-136; 5 U.S.C. 7101 note) that is in progress on the date of enactment of this Act (or, if no such survey is in progress, was most recently completed prior to the date of enactment of this Act) shall be considered to be a survey authorized by, and that meets the requirements of, section 1403(a) of title 5, United States Code, (as added by this Act) including for purposes of requiring the Office of Personnel Management to give notice of subsequent changes, additions, or removals of survey questions under section 1403(b)(3) of such title. (c) Technical and Conforming Amendments.-- (1) Repeal.--Section 1128 of the National Defense Authorization Act for Fiscal Year 2004 (Public Law 108-136; 5 U.S.C. 7101 note), and the item relating to such section in the table of sections, is repealed. (2) Table of sections.--The table of sections for chapter 14 of title 5, United States Code, is amended by inserting after the item relating to section 1402 the following new item: ``1403. Employee surveys.''. (3) Table of chapters.--The item relating to chapter 14 in the table of chapters for part II of title 5, United States Code, is amended to read as follows: ``14. Agency Chief Human Capital Officers; Employee Surveys. 1401''. (4) Chapter heading.--The heading for chapter 14 of title 5, United States Code, is amended to read as follows: ``CHAPTER 14--AGENCY CHIEF HUMAN CAPITAL OFFICERS; EMPLOYEE SURVEYS''. SEC. 2. GAO STUDY ON ANNUAL SURVEY INCENTIVES. The Comptroller General of the United States shall conduct a study on the types of incentives offered by agencies to employees in exchange for participation in surveys required by section 1128 of the National Defense Authorization Act for Fiscal Year 2004 (Public Law 108-136; 5 U.S.C. 7101 note) or section 1403 of title 5, United States Code, that includes an evaluation of the impact of such incentives on employee survey responses and response rates, and any recommendations regarding such incentives the Comptroller General considers necessary.
This bill requires a federal agency to conduct an annual survey of its employees to assess leadership practices and employee satisfaction. Unless doing so would jeopardize national security, the results of the survey must be posted on the website of the agency.
(Reuters) - IBM plans to move U.S. retirees off its company-sponsored health plan and shift them into new private insurance exchanges as a way of lowering costs for retirees. IBM had selected Extend Health, which is owned by Towers Watson & Co, to provide retirees with new health options for medical, prescription drug, dental and vision coverage, the company said in a statement on Friday. The plan, it said, offered IBM retirees more choice and better value than the company could provide through existing group plans. IBM also said it was hosting meetings with groups of retirees across the country to inform them about the move to the country's largest private Medicare Exchange. While some retirees may be skeptical, studies showed that the majority of people have a more positive outlook once they were presented with the concept and understood the options available to them through these exchanges, IBM said. Moving retirees to an exchange allows companies to reduce rising health care costs. "IBM didn't make this change to save money - it does not reduce our costs," a spokesman said. Projections indicate that healthcare costs under IBM's current plans for Medicare-eligible retirees would triple by 2020, largely impacting retiree premiums and out-of-pocket costs for retirees, he said. With this move, he added, risks are spread across a much larger group in the private marketplace. According to the website Alliance@IBM, an employee group, the plan will come into effect starting January 1, 2014. IBM, the world's largest technology-services company, has been reining in costs to ensure stable profits amid slowing demand for hardware. At the end of last month most of its staff in its services and technology group was asked to take a week furlough at one-third of normal pay, according to Alliance@IBM. The company took a $1 billion restructuring charge related to job cuts in its second quarter. The cuts were taken mainly outside of the United States, a spokesman said at the time, adding about 60 percent were from IBM's services division and 20 percent each from its hardware and software segments. (The story corrects to say private, not public, exchanges and clarifies cost savings is for retirees. Adds quote from IBM spokesman.) (Reporting by Nicola Leske in New York; Editing by Lisa Shumaker)
– As of the first of the new year, IBM retirees will no longer be on the company health plan—IBM is transferring them to a health-insurance exchange, reports Reuters. The company will give the retirees an annual payment, and they'll use it pick their own plan from a privately run Medicare exchange called Extend Health. It's similar in concept to the public exchanges being set up under ObamaCare and a "sign that even big, well-capitalized employers aren't likely to keep providing the once-common benefits as medical costs continue to rise," reports the Wall Street Journal. IBM says it's doing so because premiums would rise to unmanageable levels for retirees if it remained under the current system. The theory behind the exchanges is that multiple insurers will compete to offer the best plans and thus keep costs down. Other big companies such as DuPont and Caterpillar also have made the switch to exchanges for retirees, and some, including Sears, have done so for current employees as well, notes the Journal.
the advent of semiconducting two - dimensional ( 2d ) materials such as mos@xmath0 @xcite , phosphorene @xcite , and sns@xmath0 @xcite has attracted immense interest in the scientific community because of their potential applications in electronic , magnetic , optoelectronic , and sensing devices . modifying their electronic properties in a controlled manner is the key to the success of technologies based on these materials . some popular techniques used in modifying their properties are electric field , heterostructuring , functionalization , and application of strain . due to the experimental feasibility of strain , it has been a very effective way of transforming and tuning electronic @xcite , mechanical @xcite as well as magnetic @xcite properties of these materials . recent studies have shown its applicability in modifying some of the fundamental properties of bilayer phosphorene and transition metal dichalcogenides ( tmds ) where a complete semiconductor to metal ( s - m ) transition was observed under the applied strain @xcite . the effect of strain has been studied not only in two - dimensional materials , but also in materials of other dimensionalities . some examples include direct to indirect band gap transition in multilayered wse@xmath0 @xcite and drastic changes in the electronic properties of few layers to bulk mos@xmath0 @xcite . its application is not limited only to electronic properties but it can also engineer the vibrational properties @xcite of these materials . among all the layered 2d materials , semiconducting sns@xmath0 is more earth - abundant and environment - friendly and demonstrates a wide spectrum of applications . for example , lithium- @xcite and sodium - ion batteries @xcite with anodes constructed from sns@xmath0 show high capacity , enhanced cyclability , and excellent reversibility . in addition , sns@xmath0 shows good photocatalytic activity @xcite . nanosheets made from sns@xmath0 are considered to be very good for hydrogen generation by photocatalytic water splitting @xcite . the visible - light photocatalytic activity of reduced graphene oxide has been shown to be enhanced by doping cu in sns@xmath0 sheets @xcite . further usage of sns@xmath0 in photonics has been demonstrated through fabrication of fast - response sns@xmath0 photodetectors having an order of magnitude faster photocurrent response times @xcite than those reported for other layered materials . this makes sns@xmath0 a suitable candidate for sustainable and green optoelectronics applications . however , most of these applications rely on the inherent band gap of sns@xmath0 and therefore , its functionality could be further enhanced by tuning its band gap . here , we explore this possibility for layered sns@xmath0 as a function of ( i ) number of layers and ( ii ) application of different strains such as biaxial tensile ( bt ) , biaxial compressive ( bc ) , and normal compressive ( nc ) strains . band gap tuning in layered materials such as mos@xmath0 @xcite and phosphorene @xcite has been achieved using these strategies . interestingly , at zero strain , on increasing the layer number in sns@xmath0 , the band gap was found to be indirect and insensitive to the number of layers due to the weaker interlayer coupling in sns@xmath0 compared to mos@xmath0 . in addition , applying strain does not change the nature of the band gap . on the contrary , irrespective of the type of applied strain , a reversible semiconductor to metal ( s - m ) transition was observed .
the s - m transition in bilayer ( 2l ) sns@xmath0 was achieved at strain values of 0.17 , @xmath10.26 , and @xmath10.24 , for bt , bc , and nc strains , respectively . the strain values required to achieve this s - m transition in sns@xmath0 are higher in magnitude compared to mos@xmath0 , which is attributed to the weak interlayer coupling in sns@xmath0 . the calculations were performed using first - principles density functional theory ( dft ) @xcite as implemented in the vienna _ ab initio _ simulation package ( vasp ) @xcite . projector augmented wave ( paw ) @xcite pseudopotentials were used to represent the electron - ion interactions . in order to obtain an accurate band gap , the heyd - scuseria - ernzerhof ( hse06 ) hybrid functional @xcite was used , which models the short - range exchange energy of the electrons by fractions of fock and perdew - burke - ernzerhof ( pbe ) exchange @xcite . the addition of the fock exchange partially removes the pbe self - interaction , which resolves the band gap underestimation problem . sufficient vacuum was added along the @xmath6-axis to avoid spurious interactions between the sheet and its periodic images . to model the van der waals interactions between the sns@xmath0 layers , we incorporated a semi - empirical dispersion potential ( @xmath7 ) into the conventional kohn - sham dft energy through a pair - wise force field , using grimme 's dft - d3 method @xcite . the brillouin zone was sampled by a well - converged 9@xmath89@xmath81 monkhorst - pack @xcite * k*-grid . structural relaxation was performed using the conjugate - gradient method until the absolute values of the components of the hellman - feynman forces were converged to within 0.005 ev / å . sns@xmath0 belongs to the group - iv dichalcogenide mx@xmath0 family @xcite , where m and x are group iv and chalcogen elements , respectively . the hexagonal sns@xmath0 monolayer has a cdi@xmath0 layered crystal structure with a space group symmetry of @xmath9 @xcite unlike mos@xmath0 , which is a transition metal dichalcogenide with a space group symmetry of @xmath10 . it consists of three atomic sublayers with sn atoms in the middle layer covalently bonded to s atoms located at the top and bottom layers ( fig . [ fig:1](a ) ) . in multilayered sns@xmath0 , the interlayer interactions are van der waals type @xcite . the optimized lattice parameters of bulk and monolayer sns@xmath0 are @xmath11 3.689 , @xmath12 5.98 , and @xmath13 3.69 , respectively , which are in good agreement with the experimental values @xcite . sns@xmath0 prefers aa stacking in the multilayered structure ( fig . [ fig:1](b ) ) and the in - plane lattice parameter @xmath14 and interlayer distances remain the same as in bulk . bulk sns@xmath0 is an indirect band gap semiconductor with an hse06 band gap of 2.27 ev . the valence band maximum ( vbm ) is in between the @xmath15 and m points , while the conduction band minimum ( cbm ) is at the l point . the band structures of 1 , 2 , 3 , 4 , and 5 layered sns@xmath0 are calculated and are shown for 1 - 3 layers in fig . [ fig:1](c ) . the band structures of all these multilayers show that the number of bands forming the vbm and cbm increases with layer number and is equal to the number of layers , as shown in fig . [ fig:1](c ) . this gives an indication of weak interlayer interaction . like bulk , all these multilayered sns@xmath0 structures are indirect band gap semiconductors .
the hse06 band gap values of 1 , 2 , 3 , 4 , and 5 layers are 2.47 ev , 2.46 ev , 2.40 ev , 2.35 ev , and 2.34 ev , respectively . the overall variation in the band gap from monolayer to bulk is very small ( 0.20 ev ) , demonstrating the insensitivity of the electronic structure to the number of layers . it is well - known that experimental growth of these layered materials with a precise control over ( a ) the number of layers and ( b ) the quantity of material of a pre - decided thickness is a difficult task . therefore , although this insensitivity will limit its application in optical and sensing devices , it can still be beneficial from the nanoelectronics perspective . the devices fabricated from sns@xmath0 may have minimal noise or error in terms of the change in ( a ) the contact resistance and ( b ) carrier mobilities , both arising from the difference in the number of layers , unlike for mos@xmath0 field effect transistors ( fets ) @xcite . having shown the insensitivity of the band gap towards the number of layers , we now explore the possibility of electronic structure tuning by applying strain . for any strain value , the nature of the band gap ( i.e. indirect band gap ) was found to be independent of the number of layers . therefore , here , we will discuss only the results of bilayer ( 2l ) sns@xmath0 in detail . first , we study the effect of in - plane uniform bc and bt strains on the electronic structure of 2l - sns@xmath0 . the in - plane strain ( @xmath16 ) is given by @xmath17 , where @xmath14 and @xmath18 are the lattice parameters with and without strain , respectively . for the unstrained 2l - sns@xmath0 , the vbm is in between the @xmath15 and m points , while the cbm is at the m point . the conduction bands cbm and cbm@xmath11 are dispersive at the m point , whereas the valence bands vbm and vbm@xmath11 are not very dispersive . hence , the overall curvature as well as the mobilities of these valence bands are expected to be quite low . with increasing bt strain , the vbm and vbm@xmath11 remain flat and are still located in the @xmath15-m region as shown in fig . [ fig2](a ) . at a strain value of 0.06 , the cbm shifts to the @xmath15 point while still preserving the indirect nature of the band gap . in addition , the curvature of the conduction bands at the m ( @xmath15 ) point decreases ( increases ) , implying that , locally , the mobility of @xmath19-type carriers at this point may decrease ( increase ) . however , the overall mobility of the @xmath19-type carriers may not change significantly , owing to the compensation of these local changes in the mobilities . also , irrespective of the strain , the shape of the conduction as well as the valence bands does not change . at a critical strain value of 0.17 , the cbm at @xmath15 crosses the fermi level ( fig . [ fig2](a ) ) , rendering 2l - sns@xmath0 metallic ( fig . [ fig2](b ) ) . under bc strain , similar changes in the band dispersion of 2l - sns@xmath0 are observed . surprisingly , unlike the bt strain , at a strain value of @xmath10.06 , there is a significant shift in the cbm away from the fermi level as compared to the vbm , resulting in an increase in the band gap . however , irrespective of strain , the cbm remains at the m point . the vbm remains at the @xmath15-m location up to a strain of @xmath10.06 , beyond which it moves to the @xmath15 point .
in addition , both , the valence as well as conduction bands become more dispersive with strain , indicating that the overall mobilities of @xmath20- and @xmath19-type carriers may increase . similar to bt strain , the movement of vbm and cbm towards the fermi level leads to the band gap reduction , while still preserving an indirect band gap , as shown in fig . [ fig2](c ) . at a critical strain of @xmath10.26 , the s - m transition occurs ( fig . [ fig2](d ) ) , with the vbm crossing the fermi level ( fig . [ fig2](c ) ) . we next study the changes in band structure of sns@xmath0 under normal compressive ( nc ) strain ( @xmath21 ) defined as @xmath22 , where @xmath23 and @xmath24 ( fig . [ fig:1](b ) ) are the layer thickness at applied and zero strains . the minimum required number of layers for nc strain is two . the nc strain was applied perpendicular to the plane of the multi - layers of sns@xmath0 . constrained relaxation mechanism was incorporated wherein the atoms in upper and lower layers were restricted to move along the normal direction at each strain value . with increasing strain , the band gap reduces smoothly and becomes metallic at @xmath10.24 , as shown in figs . [ fig2](e ) and ( f ) . in the strain range of @xmath10.08 to @xmath10.17 , the vbm remains in between the @xmath15 and m points , whereas there is a drastic shift in the position of cbm from m to in between @xmath15 and m points . similar to bc strain , the bands become more dispersive with increasing nc strain , indicating that the overall mobilities of @xmath20- and @xmath19-type carriers can increase . at @xmath10.24 strain , a band crossing occurs at the fermi level rendering it metallic . next , we discuss the mechanism of s - m transition under bt , bc , and nc strains . we calculated the angular momentum projected density of states ( ldos ) and band - decomposed charge density of 2l - sns@xmath0 as a function of these strains . in unstrained sns@xmath0 , the valence and conduction bands originate predominantly from s-@xmath4/@xmath5/@xmath2 , and the direction - independent sn-@xmath3 orbitals , respectively , as shown in fig . [ fig:3](a ) . with increasing bt strain up to 0.06 , s-@xmath2 crosses the s-@xmath4 and s-@xmath5 orbitals , and becomes the frontier orbital . the band gap begins to reduce due to the movement of these frontier orbitals ( sn-@xmath3 and s-@xmath2 ) towards the fermi level . for strains beyond 0.06 , the intralayer interactions between these orbitals begin to increase . this is also confirmed by the band - decomposed charge density plots ( fig . [ fig:3](b ) ) . at a critical strain of 0.17 , there is a strong hybridization between these orbitals , as is also observed in the cbm band - decomposed charge density ( fig . [ fig:3](c ) ) , which finally leads to the semiconductor to metal transition . in the case of bc strain , the s-@xmath2 orbital crosses the tails of the s-@xmath4 and s-@xmath5 orbitals and moves away from the fermi level , as shown in fig . [ fig:3](d ) . hence , the s-@xmath4/@xmath5 and sn-@xmath3 orbitals contribute predominantly to the valence and the conduction bands , respectively . with a strain upto @xmath10.06 , the band gap initially increases , as observed from the band structure in fig . [ fig2](d ) . this increase is due to the slight upward ( downward ) shift in the sn-@xmath3 ( s-@xmath4/@xmath5 ) orbitals along with the crossing of the s-@xmath2 orbital , causing the conduction ( valence ) bands to move up ( down ) as shown in fig . [ fig:3](d ) . 
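The critical strain itself is read off from where the computed gap closes. The sketch below illustrates that bookkeeping for the NC case with placeholder gap values (not the paper's data); only the qualitative behaviour, a smooth closure near −0.24, follows the text.

```python
import numpy as np

# Locating a critical strain from a computed gap-versus-strain curve.  The NC
# strain is eps_n = (d - d0)/d0 with d the bilayer thickness.  The gap values
# below are placeholders (NOT the paper's data); only the trend -- a smooth
# closure near eps_n ~ -0.24 -- follows the text.

eps_n = np.array([0.00, -0.08, -0.12, -0.17, -0.21, -0.24])
gap   = np.array([2.46,  1.90,  1.45,  0.95,  0.40,  0.00])  # eV, hypothetical

# interpolate strain as a function of gap and evaluate at gap = 0
eps_critical = np.interp(0.0, gap[::-1], eps_n[::-1])
print(f"estimated critical NC strain: {eps_critical:.2f}")
```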
a similar band gap increase at an intermediate bc strain is observed in mos@xmath0 @xcite . as shown in the vbm band - decomposed charge density in fig . [ fig:3](d ) , the wavefunction spread between the s atoms ( above and below sn ) in a layer decreases compared to the unstrained case ( fig . [ fig:3](a ) ) due to the shifting of the valence orbitals @xmath4 , @xmath5 , and @xmath2 of the s atom . this localization causes the increase in the band gap . as we go beyond @xmath10.06 , the band gap begins to decrease because of the movement of the s-@xmath4/@xmath5 ( in - plane orbitals ) and sn-@xmath3 orbitals towards the fermi level . at a critical strain of @xmath10.26 , this movement results in s - m transition . here , the in - plane orbitals contribute to s - m transition , whereas in bt strain , in addition to the in - plane orbitals , out - of - plane orbitals also play a role in triggering s - m transition . with increasing nc strain up to @xmath10.17 , the sn-@xmath3 and s-@xmath4/@xmath5/@xmath2 orbitals begin to move towards the fermi level , as shown in figs . [ fig:4](a ) and ( b ) . for strains beyond @xmath10.17 , these orbitals start overlapping because of the reduction in the interlayer distance . at a critical strain of @xmath10.24 , interlayer interactions become enhanced and as a consequence , results in strong hybridization between s-@xmath2 and sn-@xmath3 orbitals ( fig . [ fig:4](c ) ) . these orbitals cross the fermi level and lead to the s - m transition . from the band - decomposed charge analysis , at this critical strain , the presence of out - of - plane lobes ( @xmath2 ) in the vbm and @xmath3 orbital in cbm are mainly responsible for s - m transition , as shown in fig . [ fig:4](c ) . among the applied strains , bt strain results in the fastest s - m transition . although the critical strain required in the case of bt strain is the lowest ( @xmath25 0.17 compared to @xmath25 0.26 and 0.24 in the case of bc and nc strains , respectively ) , the nc strain is more feasible compared to the biaxial strains . this is because the intralayer bonding between the atoms is much stronger compared to the weaker interlayer van der waals interactions . having explored band gap tuning using bt , bc , and nc strains , it is interesting to perform a comparative study of the band gap variation as a function of ( a ) interlayer coupling between the layers and ( b ) applied nc strain , in both 2l - sns@xmath0 and 2l - mos@xmath0 . here , we use the generalized gradient approximation ( gga ) of the pbe form to model the exchange - correlation . fig . [ fig:5](a ) shows the band gap as a function of the number of layers for the unstrained structures of mos@xmath0 and sns@xmath0 . as we go from 1l to 5l , the change in the band gap is small for sns@xmath0 ( @xmath26 0.24 ev ) compared to that in mos@xmath0 ( @xmath26 0.59 ev ) , indicating that the band gap of sns@xmath0 is less sensitive to the number of layers . the reason for this insensitivity is attributed to the weak interlayer coupling in sns@xmath0 compared to mos@xmath0 . [ fig:5](b ) shows the interlayer coupling energy as a function of interlayer distance for 2l - sns@xmath0 and 2l - mos@xmath0 . here , the adjacent layers in sns@xmath0 are weakly coupled to each other , with a coupling energy of 235 mev / unit cell , relative to that of mos@xmath0 ( 430 mev / unit cell ) . 
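The LDOS analysis described above amounts to asking which atomic-orbital characters dominate near the band edges at each strain. A hedged sketch of how this could be extracted from a VASP run with pymatgen is given below; the file name and the assumption that orbital projections were written are illustrative, the calls are standard pymatgen ones rather than anything taken from the paper, and the spd decomposition shown here resolves only s/p/d character (separating p_x/p_y from p_z would require the orbital-resolved projections).

```python
# Sketch of an orbital-projected DOS check: which S and Sn orbital characters
# carry weight near the band edges.  Assumes a vasprun.xml written with
# orbital projections enabled (LORBIT); all names here are illustrative.

from pymatgen.core import Element
from pymatgen.io.vasp.outputs import Vasprun

vr = Vasprun("vasprun.xml", parse_projected_eigen=True)
cdos = vr.complete_dos
efermi = cdos.efermi

window = 0.5  # eV around the Fermi level to inspect
for el in ("S", "Sn"):
    spd = cdos.get_element_spd_dos(Element(el))  # {OrbitalType: Dos}
    for orb, dos in spd.items():
        mask = abs(dos.energies - efermi) < window
        weight = dos.get_densities()[mask].sum()  # summed DOS in the window (arb. units)
        print(f"{el}-{orb.name}: weight within +/-{window} eV of E_F = {weight:.2f}")
```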
the weaker interlayer coupling in sns@xmath0 can be explained based on the higher sn electron affinity ( @xmath27 107.3 kj / mol ) @xcite compared to the mo electron affinity ( @xmath28 72.3 kj / mol ) @xcite . a higher @xmath29 value indicates a lower tendency to interact with the neighboring atoms . therefore , the polarizability of the sn atom is much lower than that of mo , causing a weaker interlayer coupling in sns@xmath0 . the above observation is also confirmed by the charge accumulation and depletion calculations of 1l - sns@xmath0 and 1l - mos@xmath0 shown in figs . [ fig:6](a)-(d ) , wherein the lower accumulation of charge around the s atoms and the lower depletion of charge around the sn atoms indicate weaker polarizability . as a consequence , the out - of - plane linear elastic modulus ( @xmath30 ) , i.e. @xmath31 , where @xmath6 is the interlayer distance and @xmath32 is the interlayer coupling energy , of sns@xmath0 ( @xmath33 11.1 gpa ) is nearly half of that of mos@xmath0 ( @xmath33 20.6 gpa ) . as a function of nc strain , in bilayer mos@xmath0 , due to the stronger interlayer coupling , the strain dependence of the band gap is more prominent ( 0.11 ev / unit strain ) than in sns@xmath0 ( 0.026 ev / unit strain ) , as shown in fig . [ fig:5](c ) . therefore , the overlap of wave functions within the adjacent layers is much stronger in mos@xmath0 than in sns@xmath0 , and hence a small change in the interlayer distance can lead to a band renormalization in mos@xmath0 . for the same reason , the semiconductor to metal transition under nc strain is observed at a relatively high strain of 23@xmath34 for 2l - sns@xmath0 compared to 2l - mos@xmath0 ( 12@xmath34 ) ( fig . [ fig:5](c ) ) . therefore , the insensitivity of the indirect band gap of sns@xmath0 towards the number of layers and its weaker dependence on strain can be attributed to the weak coupling between the layers . in conclusion , using density functional theory based calculations , we have shown the tuning of the band gap as well as the reversibility of the s - m transition for multilayers of sns@xmath0 under different strains ( bt , bc , and nc ) . with increasing strain , there is a smooth reduction in the band gap for all the strains , with a small increase observed at @xmath10.06 bc strain . the critical strain at which the s - m transition is achieved decreases significantly as we go from mono to five layers for all the strains . moreover , in order to understand the s - m transition under different strains , we analyzed the contribution from different molecular orbitals by performing ldos and band - decomposed charge density calculations . for bt , bc , and nc strains , the interaction between the s-@xmath2 and sn-@xmath3 , s-@xmath4/@xmath5 and sn-@xmath3 , and s-@xmath2 and sn-@xmath3 orbitals , respectively , triggers the s - m transition . for comparison , we studied the band gap trends in 2l - sns@xmath0 and 2l - mos@xmath0 . at zero strain , the change in the band gap of sns@xmath0 while going from monolayer to five layers is @xmath26 0.24 ev , which is less than half of that in mos@xmath0 ( @xmath26 0.59 ev ) . moreover , the band gap in sns@xmath0 remains indirect irrespective of the number of layers as well as the applied strain . the insensitivity of the band gap in sns@xmath0 to the number of layers is attributed to its weaker interlayer coupling ( 235 mev / unit cell ) compared to mos@xmath0 ( 430 mev / unit cell ) , which originates from the lower polarizability .
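The out-of-plane linear elastic modulus quoted above is obtained from the curvature of the interlayer binding-energy curve. The snippet below shows one way to evaluate such a modulus by finite differences from sampled E(d) points; the sampled energies are placeholders, and the normalisation used (equilibrium spacing times curvature per unit in-plane area) is an assumed convention standing in for the paper's exact definition, which sits behind the placeholder symbols.

```python
import numpy as np

# Finite-difference estimate of an out-of-plane linear elastic modulus from an
# interlayer binding-energy curve E(d).  The E(d) samples are placeholders
# (NOT the paper's data); the normalisation C = d0 * (1/A) * d2E/dd2 is an
# assumed convention.

EV_PER_A3_TO_GPA = 160.2177  # 1 eV/A^3 = 160.2 GPa

a = 3.69                           # A, in-plane lattice constant (from the text)
area = np.sqrt(3.0) / 2.0 * a**2   # A^2, hexagonal in-plane cell area

d = np.array([5.7, 5.8, 5.9, 6.0, 6.1, 6.2, 6.3])  # A, interlayer distances
e = np.array([0.0080, 0.0030, 0.0007, 0.0000, 0.0008, 0.0028, 0.0065])  # eV, hypothetical E(d)-E(d0)

i0 = np.argmin(e)                  # index of the equilibrium spacing d0
h = d[1] - d[0]
curvature = (e[i0 - 1] - 2.0 * e[i0] + e[i0 + 1]) / h**2  # eV/A^2

c_out = d[i0] * curvature / area * EV_PER_A3_TO_GPA
print(f"d0 = {d[i0]:.2f} A, estimated out-of-plane modulus ~ {c_out:.1f} GPa")
```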
in terms of band gap dependence under nc strain , sns@xmath0 is less sensitive to strain ( 0.026 ev per unit strain ) than mos@xmath0 ( 0.11 ev per unit strain ) . the authors thank tribhuwan pandey for fruitful discussions , and the supercomputer education and research centre , iisc , for providing the required computational facilities for the above work . the authors acknowledge dst nanomission for financial support . [ 1]#1 @ifundefined stephenson , t. ; li , z. ; olsen , b. ; mitlin , d. lithium ion battery applications of molybdenum disulfide ( mos@xmath35 ) nanocomposites . _ energy environ . sci . _ * 2014 * , _ 7 _ , 209231 wang , q. h. ; kalantar - zadeh , k. ; kis , a. ; coleman , j. n. ; strano , m. s. electronics and optoelectronics of two - dimensional transition metal dichalcogenides . nanotechnol . _ * 2012 * , _ 7 _ , 699712 zhao , y. ; zhang , y. ; yang , z. ; yan , y. ; sun , k. synthesis of mos@xmath35 and moo@xmath35 for their applications in h@xmath35 generation and lithium ion batteries : a review . tech . adv . mater . _ * 2013 * , _ 14 _ , 043501 tongay , s. ; zhou , j. ; ataca , c. ; lo , k. ; matthews , t. s. ; li , j. ; grossman , j. c. ; wu , j. thermally driven crossover from indirect toward direct bandgap in 2d semiconductors : mose@xmath35 versus mos@xmath35 . _ nano lett . _ * 2012 * , _ 12 _ , 55765580 fei , r. ; yang , l. strain - engineering the anisotropic electrical conductance of few - layer black phosphorus . _ nano lett . _ * 2014 * , _ 14 _ , 28842889 qu , b. ; ma , c. ; ji , g. ; xu , c. ; xu , j. ; meng , y. s. ; wang , t. ; lee , j. y. layered sns@xmath35-reduced graphene oxide composite a high - capacity , high - rate , and long - cycle life sodium - ion battery anode material . _ adv . mater . _ * 2014 * , _ 26 _ , 38543859 wei , r. ; hu , j. ; zhou , t. ; zhou , x. ; liu , j. ; li , j. ultrathin sns@xmath35 nanosheets with exposed \{001 } facets and enhanced photocatalytic properties . _ acta mater . _ * 2014 * , _ 66 _ , 163 171 zhou , m. ; lou , x. w. d. ; xie , y. two - dimensional nanosheets for photoelectrochemical water splitting : possibilities and opportunities . _ nano today _ * 2013 * , _ 8 _ , 598 618 manjanath , a. ; samanta , a. ; pandey , t. ; singh , a. k. semiconductor to metal transition in bilayer phosphorene under normal compressive strain . _ nanotechnology _ * 2015 * , _ 26 _ , 075701 bhattacharyya , s. ; singh , a. k. semiconductor - metal transition in semiconducting bilayer sheets of transition - metal dichalcogenides . b _ * 2012 * , _ 86 _ , 075454 hinsche , n. f. ; yavorsky , b. y. ; mertig , i. ; zahn , p. influence of strain on anisotropic thermoelectric transport in bi@xmath36te@xmath37 and sb@xmath36te@xmath37 . _ phys . rev . b _ * 2011 * , _ 84 _ , 165214 luo , x. ; sullivan , m. b. ; quek , s. y. first - principles investigations of the atomic , electronic , and thermoelectric properties of equilibrium and strained bi@xmath35se@xmath38 and bi@xmath35te@xmath38 including van der waals interactions . b _ * 2012 * , _ 86 _ , 184111 nayak , a. p. ; pandey , t. ; voiry , d. ; liu , j. ; moran , s. t. ; sharma , a. ; tan , c. ; chen , c .- h . ; li , l .- j . ; chhowalla , m. pressure - dependent optical and vibrational properties of monolayer molybdenum disulfide . _ nano lett . _ * 2015 * , _ 15 _ , 346353 guo , h. ; yang , t. ; tao , p. ; wang , y. ; zhang , z. high pressure effect on structure , electronic structure , and thermoelectric properties of mos@xmath35 . _ j. appl . phys . 
_ * 2013 * , _ 113 _ , 013709 nayak , a. p. ; bhattacharyya , s. ; zhu , j. ; liu , j. ; wu , x. ; pandey , t. ; jin , c. ; singh , a. k. ; akinwande , d. ; lin , j .- f . pressure - induced semiconducting to metallic transition in multilayered molybdenum disulphide . commun . _ * 2014 * , _ 5 _ , 4731 kang , j. ; sahin , h. ; peeters , f. m. tuning carrier confinement in the mos@xmath39/ws@xmath39 lateral heterostructure . _ j. phys . * 2015 * , _ 119 _ , 95809586 pereira , v. m. ; castro neto , a. h. strain engineering of graphene s electronic structure . lett . _ * 2009 * , _ 103 _ , 046801 casillas , g. ; santiago , u. ; barrn , h. ; alducin , d. ; ponce , a. ; jos - yacamn , m. elasticity of mos@xmath39 sheets by mechanical deformation observed by in situ electron microscopy . _ j. phys . c _ * 2014 * , _ 119 _ , 710715 yun , w. s. ; lee , j. strain - induced magnetism in single - layer mos@xmath39 : origin and manipulation . _ j. phys . c _ * 2015 * , _ 119 _ , 28222827 desai , s. b. ; seol , g. ; kang , j. s. ; fang , h. ; battaglia , c. ; kapadia , r. ; ager , j. w. ; guo , j. ; javey , a. strain - induced indirect to direct bandgap transition in multilayer wse@xmath35 . _ nano lett . _ * 2014 * , _ 14 _ , 45924597 bhattacharyya , s. ; pandey , t. ; singh , a. k. effect of strain on electronic and thermoelectric properties of few layers to bulk mos@xmath35 . _ nanotechnology _ * 2014 * , _ 25 _ , 465701 zhang , l. ; zunger , a. evolution of electronic structure as a function of layer thickness in group - vib transition metal dichalcogenides : emergence of localization prototypes . _ nano lett . _ * 2015 * , _ 15 _ , 949957 li , j. ; wu , p. ; lou , f. ; zhang , p. ; tang , y. ; zhou , y. ; lu , t. mesoporous carbon anchored with sns@xmath35 nanosheets as an advanced anode for lithium - ion batteries . _ electrochim . acta _ * 2013 * , _ 111 _ , 862 868 sathish , m. ; mitani , s. ; tomai , t. ; honma , i. ultrathin sns@xmath35 nanoparticles on graphene nanosheets : synthesis , characterization , and li - ion storage applications . _ c _ * 2012 * , _ 116 _ , 1247512481 chang , k. ; wang , z. ; huang , g. ; li , h. ; chen , w. ; lee , j. y. few - layer sns@xmath35/graphene hybrid with exceptional electrochemical performance as lithium - ion battery anode . _ j. power sources _ * 2012 * , _ 201 _ , 259 266 zhong , h. ; yang , g. ; song , h. ; liao , q. ; cui , h. ; shen , p. ; wang , c .- x . vertically aligned graphene - like sns@xmath39 ultrathin nanosheet arrays : excellent energy storage , catalysis , photoconduction , and field - emitting performances . _ j. phys . c _ * 2012 * , _ 116 _ , 93199326 zhang , y. ; zhu , p. ; huang , l. ; xie , j. ; zhang , s. ; cao , g. ; zhao , x. few - layered sns@xmath35 on few - layered reduced graphene oxide as na - ion battery anode with ultralong cycle life and superior rate capability . funct . mater . _ * 2015 * , _ 25 _ , 481489 yu , h. ; ren , y. ; xiao , d. ; guo , s. ; zhu , y. ; qian , y. ; gu , l. ; zhou , h. an ultrastable anode for long - life room - temperature sodium - ion batteries . _ angew . chem . int . ed . _ * 2014 * , _ 126 _ , 91099115 lei , y. ; song , s. ; fan , w. ; xing , y. ; zhang , h. facile synthesis and assemblies of flowerlike sns@xmath39 and in@xmath40-doped sns@xmath39 : hierarchical structures and their enhanced photocatalytic property . _ j. phys . c _ * 2009 * , _ 113 _ , 12801285 yu , j. ; xu , c .- y . ; ma , f .- x . ; hu , s .- p . ; zhang , y .- w . ; zhen , l. 
monodisperse sns@xmath35 nanosheets for high - performance photocatalytic hydrogen generation . _ acs appl . inter . _ * 2014 * , _ 6 _ , 2237022377 xiaoqiang , a. ; c. , y. j. ; junwang , t. biomolecule - assisted fabrication of copper doped sns@xmath35 nanosheet - reduced graphene oxide junctions with enhanced visible - light photocatalytic activity . _ j. mater . chem . * 2014 * , _ 2 _ , 10001005 su , g. ; hadjiev , v. g. ; loya , p. e. ; zhang , j. ; lei , s. ; maharjan , s. ; dong , p. ; m. ajayan , p. ; lou , j. ; peng , h. chemical vapor deposition of thin crystals of layered semiconductor sns@xmath39 for fast photodetection application . _ nano lett . _ * 2014 * , _ 15 _ , 506513 han , s. ; kwon , h. ; kim , s. k. ; ryu , s. ; yun , w. s. ; kim , d. ; hwang , j. ; kang , j .- s . ; baik , j. ; shin , h. band - gap transition induced by interlayer van der waals interaction in mos@xmath39 . b _ * 2011 * , _ 84 _ , 045409 tran , v. ; soklaski , r. ; liang , y. ; yang , l. layer - controlled band gap and anisotropic excitons in few - layer black phosphorus . b _ * 2014 * , _ 89 _ , 235319 kohn , w. ; sham , l. j. self - consistent equations including exchange and correlation effects . rev . _ * 1965 * , _ 140 _ , 11331138 kresse , g. ; furthmller , j. efficiency of ab - initio total energy calculations for metals and semiconductors using a plane - wave basis set . sci . _ * 1996 * , _ 6 _ , 1550 kresse , g. ; furthmller , j. efficient iterative schemes for _ ab initio _ total - energy calculations using a plane - wave basis set . b _ * 1996 * , _ 54 _ , 1116911186 blchl , p. e. projector augmented - wave method . b _ * 1994 * , _ 50 _ , 1795317979 kresse , g. ; joubert , d. from ultrasoft pseudopotentials to the projector augmented - wave method . _ phys . rev . b _ * 1999 * , _ 59 _ , 17581775 heyd , j. ; peralta , j. e. ; scuseria , g. e. ; martin , r. l. energy band gaps and lattice parameters evaluated with the heyd - scuseria - ernzerhof screened hybrid functional . _ j. chem . phys . _ * 2005 * , _ 123 _ , 174101 janesko , b. g. ; henderson , t. m. ; scuseria , g. e. screened hybrid density functionals for solid - state chemistry and physics . _ phys . _ * 2009 * , _ 11 _ , 443454 ellis , j. k. ; lucero , m. j. ; scuseria , g. e. the indirect to direct band gap transition in multilayered mos@xmath35 as predicted by screened hybrid density functional theory . lett . _ * 2011 * , _ 99 _ , 261908 grimme , s. semiempirical gga - type density functional constructed with a long - range dispersion correction . _ j. comput . _ * 2006 * , _ 27 _ , 17871799 monkhorst , h. j. ; pack , j. d. special points for brillouin - zone integrations . _ b _ * 1976 * , _ 13 _ , 51885192 lokhande , c. d. a chemical method for tin disulphide thin film deposition . _ d. _ * 1990 * , _ 23 _ , 1703 robert , m. ; finger , w. the crystal structures and compressibilities of layer minerals at high pressure . i. sns@xmath39 , berndtite . mineral . _ * 1978 * , _ 63 _ , 289292 patil , s. g. ; tredgold , r. h. electrical and photoconductive properties of sns@xmath35 crystals . _ j. phys . d. _ * 1971 * , _ 4 _ , 718 whitehouse , c. ; balchin , a. polytypism in tin disulphide . _ j. crys . growth _ * 1979 * , _ 47 _ , 203 212 mitchell , r. ; fujiki , y. ; ishizawa , y. structural polytypism of tin disulfide : its relationship to environments of formation . . growth _ * 1982 * , _ 57 _ , 273 279 al - alamy , f. ; balchin , a. 
the growth by iodine vapour transport and the crystal structures of layer compounds in the series sns@xmath41se@xmath42 ( 0 @xmath43 2 ) , sn@xmath41zr@xmath44se@xmath39 ( 0 @xmath45 1 ) , and tas@xmath41se@xmath42 ( 0 @xmath45 2 ) . . growth _ * 1977 * , _ 38 _ , 221 232 sun , y. ; cheng , h. ; gao , s. ; sun , z. ; liu , q. ; liu , q. ; lei , f. ; yao , t. ; he , j. ; wei , s. freestanding tin disulfide single - layers realizing efficient visible - light water splitting . ed . _ * 2012 * , _ 51 _ , 87278731 toh , m. ; tan , k. ; wei , f. ; zhang , k. ; jiang , h. ; kloc , c. intercalation of organic molecules into @xmath46 single crystals . _ j. solid state chem . _ * 2013 * , _ 198 _ , 224230 krasnozhon , d. ; lembke , d. ; nyffeler , c. ; leblebici , y. ; kis , a. mos@xmath39 transistors operating at gigahertz frequencies . _ nano lett . _ * 2014 * , _ 14 _ , 59055911 vandevraye , m. ; drag , c. ; blondel , c. electron affinity of tin measured by photodetachment microscopy . _ j. phys . b _ * 2013 * , _ 46 _ , 125002 bilodeau , r. c. ; scheer , m. ; haugen , h. k. infrared laser photodetachment of transition metal negative ions : studies on cr@xmath47 , mo@xmath47 , cu@xmath47 , and ag@xmath47 . _ j. phys . b _ * 1998 * , _ 31 _ , 3885
controlled variation of the electronic properties of 2d materials by applying strain has emerged as a promising way to design materials for customized applications . using first principles density functional theory calculations , we show that while the electronic structure and indirect band gap of sns@xmath0 do not change significantly with the number of layers , they can be reversibly tuned by applying biaxial tensile ( bt ) , biaxial compressive ( bc ) , and normal compressive ( nc ) strains . mono to multilayered sns@xmath0 exhibit a reversible semiconductor to metal transition ( s - m ) at strain values of 0.17 , @xmath10.26 , and @xmath10.24 under bt , bc , and nc strains , respectively . due to weaker interlayer coupling , the critical strain value required to achieve s - m transition in sns@xmath0 under nc strain is much higher than for mos@xmath0 . the s - m transition for bt , bc , and nc strains is caused by the interaction between the s-@xmath2 and sn-@xmath3 , s-@xmath4/@xmath5 and sn-@xmath3 , and s-@xmath2 and sn-@xmath3 orbitals , respectively .
primary human myometrial cells were isolated using a mixture of collagenases { 1 mg / ml of collagenase 1a and 1 mg / ml of collagenase xi ( sigma ) } and cultured in dmem medium containing phenol red , 7.5% foetal calf serum , l - glutamine , 100 mu / ml penicillin and 100 μg / ml streptomycin in an atmosphere of 5% co2 : 95% air at 37 °c . cells were exposed to il-1 or p4 , either alone or in combination , for 6 h ; ethanol was used as the vehicle control . total rna was extracted and purified from myometrial cells grown in 6-well culture plates using the rneasy mini kit ( qiagen ) . cdna was generated from 2 μg of total rna using the genechip expression 3′-amplification one - cycle cdna synthesis kit , in conjunction with the genechip eukaryotic polya rna control kit ( affymetrix , inc . ) . the cdna was cleaned up using the genechip sample cleanup module and subsequently processed to generate biotin - labelled crna using the genechip expression 3′-amplification ivt labelling kit ( affymetrix , inc . ) . 25 μg of labelled crna was fragmented using 5× fragmentation buffer and rnase - free water at 94 °c for 35 min . 15 μg of the fragmented , biotin - labelled crna was made up in a hybridization cocktail and hybridised to the hgu133 plus 2.0 array at 45 °c for 16 h. following hybridization , the arrays were washed and stained using the affymetrix fluidics station 450 . all steps of the process were quality controlled by measuring yield ( μg ) , concentration ( μg / μl ) and 260:280 ratios via spectrophotometry using the nanodrop nd-1000 , and sample integrity using the agilent 2100 bioanalyser ( agilent technologies , inc . ) . after general array quality control and data export , the data were normalised using robust multichip average and imported into partek genomics suite . array data were then inspected for potential outliers and overall grouping / separation using principal component analysis . the gene lists were generated by a sequential filtering approach based on the fold - change ( normalisation to the control group ) and the confidence ( fold - change p value ) . the less - stringent gene list resulted from a fold - change greater than 1.5 when comparing the treatment group to the control group . the stringent gene list was obtained by further filtering with a fold - change p value < 0.05 . this work was supported by grants from action medical research , beneficentia stiftung and the chelsea and westminster health charity , borne . it was also supported by the national institute for health research ( nihr ) biomedical research centre . the views expressed are those of the author(s ) and not necessarily those of the nhs , the nihr or the department of health . is supported by the biomedical research unit , a joint initiative between university hospitals coventry and warwickshire and the university of warwick .
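The sequential gene-list filtering described above (a less-stringent list from a 1.5-fold change versus control, and a stringent list that additionally requires a fold-change p-value below 0.05) is straightforward to script. The sketch below assumes a per-probe table exported from the array analysis with "fold_change" and "p_value" columns; both the file name and the column names are illustrative rather than taken from the original study.

```python
import pandas as pd

# Sketch of the sequential gene-list filtering described above.  The input is
# assumed to be a per-probe table exported from the array analysis; the file
# and column names are illustrative, not from the original study.

df = pd.read_csv("differential_expression.csv")

# less-stringent list: |fold change| > 1.5 relative to the control group
less_stringent = df[df["fold_change"].abs() > 1.5]

# stringent list: additionally require a fold-change p-value < 0.05
stringent = less_stringent[less_stringent["p_value"] < 0.05]

print(f"less-stringent list: {len(less_stringent)} probes")
print(f"stringent list:      {len(stringent)} probes")
```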
inflammation plays a central role in many human diseases . human parturition also resembles an inflammatory reaction , in which progesterone ( p4 ) and progesterone receptors ( prs ) have already been demonstrated to suppress contraction - associated gene expression . in our previous studies , we found that progesterone actions , including progesterone - induced gene expression and progesterone 's anti - inflammatory effect , are mediated by pr , gr or both . in this study , we used microarrays ( gse68171 ) to identify genes responsive to p4 and to il-1 , as well as il-1 - responsive genes that were repressed by p4 . these data may provide a broader view of the gene networks and cellular functions regulated by p4 and il-1 in human myometrial cells . they will also help us understand the role of pr and gr in human parturition .
SECTION 1. SHORT TITLE. This Act may be cited as the ``Jumpstart VA Construction Act''. SEC. 2. FINDINGS. Congress makes the following findings: (1) The buildings of the Department of Veterans Affairs have an average age of 60 years. (2) Since 2004, use of Department facilities has grown from 80 percent to 120 percent, while the condition of these facilities has eroded from 81 percent to 71 percent over that same period of time. (3) The Department currently manages and maintains more than 5,600 buildings and almost 34,000 acres of land. (4) More than 3,900 infrastructure gaps remain that will cost between $54,000,000,000 and $66,000,000,000 to close, including $10,000,000,000 in activation costs. (5) The Veterans Health Administration has 21 major construction projects dating to 2007 that have been only partially funded. (6) The total unobligated amount for all currently budgeted major construction projects exceeds $2,900,000,000. (7) To finish existing projects and to close current and future gaps, the Department will need to invest at least $23,200,000,000 over the next 10 years. (8) At current requested funding levels, it will take more than 67 years to complete the 10-year capital investment plan of the Department. SEC. 3. PILOT PROGRAM FOR THE CONSTRUCTION OF DEPARTMENT OF VETERANS AFFAIRS MAJOR MEDICAL FACILITY PROJECTS BY NON-FEDERAL ENTITIES UNDER PARTNERSHIP AGREEMENTS. (a) In General.--The Secretary of Veterans Affairs shall carry out a 10-year pilot program under which the Secretary shall enter into partnership agreements on a competitive basis with appropriate non- Federal entities for the construction of major construction projects authorized by law. (b) Selection of Projects.-- (1) In general.--The Secretary shall select 10 major construction projects for completion by non-Federal entities under the pilot program. Each project selected shall be a major medical facility project authorized by law for the construction of a new facility for which-- (A) Congress has appropriated any funds; (B) the design and development phase is complete; and (C) construction has not begun, as of the date of the enactment of this Act. (2) Type of projects.--In selecting major construction projects under paragraph (1), the Secretary shall select-- (A) four seismic-related projects; (B) four community based outpatient clinic-related projects; and (C) two other projects. (c) Agreements.--Each partnership agreement for a construction project under the pilot program shall provide that-- (1) the non-Federal entity shall obtain any permits required pursuant to Federal and State laws before beginning to carry out construction; and (2) if requested by the non-Federal entity, the Secretary shall provide technical assistance for obtaining any necessary permits for the construction project. 
(d) Responsibilities of Secretary.--The Secretary shall-- (1) appoint a non-Department of Veterans Affairs entity as the project manager of each major construction project for which the Secretary enters into a partnership agreement under the pilot program; (2) ensure that the project manager appointed under paragraph (1) develops and implements a project management plan to ensure concise and consistent communication of all parties involved in the project; (3) work in cooperation with each non-Federal entity with which the Secretary enters into a partnership agreement to minimize multiple change orders; (4) develop metrics to monitor change order process times, with the intent of expediting any change order; and (5) monitor any construction project carried out by a non- Federal entity under the pilot program to ensure that such construction is in compliance with the Federal Acquisition Regulations and Department of Veterans Affairs acquisition regulations and that the costs are reasonable. (e) Reimbursement.-- (1) In general.--The Secretary shall reimburse, without interest, a non-Federal entity that carries out work pursuant to a partnership agreement under the pilot program in an amount equal to the estimated Federal share of the cost of such work. Any costs that exceed the amount originally agreed upon between the Secretary and the non-Federal entity shall be paid by the non-Federal entity. The Secretary may commence making payments to a non-Federal entity under this subsection upon entering into a partnership agreement with the entity under this section. (2) Limitation.--The Secretary may not make any reimbursement payment under this subsection until the Secretary determines that the work for which the reimbursement is requested has been performed in accordance with applicable permits and approved plans. (3) Budget requests.--The Secretary shall budget for reimbursement under this section on a schedule that is consistent with the budgeting process of the Department and the ongoing Strategic Capital Investment Planning priorities list. (f) Comptroller General Report.--The Comptroller General of the United States shall submit to Congress a biennial report on the partnership agreements entered into under the pilot program. (g) Deadline for Implementation.--The Secretary shall begin implementing the pilot program under this section by not later than 180 days after the date of the enactment of this Act.
Jumpstart VA Construction Act - Requires the Secretary of Veterans Affairs to carry out a 10-year pilot program of entering into partnership agreements on a competitive basis with appropriate non-federal entities for major authorized construction projects. Directs the Secretary to select 10 major medical facility projects authorized for the construction of a new facility for which: (1) Congress has appropriated funds, (2) the design and development phase is complete, and (3) construction has not begun as of the date of enactment of this Act. Requires four of such projects to be seismic-related projects and four to be community based outpatient clinic-related projects. Directs the Secretary to: (1) appoint a non-Department of Veterans Affairs (VA) entity as the project manager of each project; (2) ensure that the project manager implements a project management plan to ensure concise and consistent communication of all parties involved; (3) work in cooperation with each participating non-federal entity to minimize multiple change orders; (4) develop metrics to monitor change order process times, with the intent of expediting any change order; and (5) monitor construction to ensure that it is in compliance with the Federal Acquisition Regulations and VA acquisition regulations and that the costs are reasonable.
at present a number of powerful techniques based on operator product expansion ( ope ) and effective field theories have been developed . these tools allow one consistently to include into consideration various nonperturbative contributions , written in terms of a few number of universal quantities . the coefficients ( wilson coefficients ) in front of these operators are generally expanded in series over the qcd coupling constant , inverse heavy quark mass and/or relative velocity of heavy quarks inside the hadron . the accuracy , obtained in such calculations , can be systematically improved , and it is limited only by the convergence properties of the mentioned series . the described approach have been already widely used for making the precise predictions in the heavy quark sector of standard model ( sm ) , such as decays , distributions and partial width asymmetries involving the cp violation for the heavy hadrons . the sensitivity of wilson coefficients to virtual corrections caused by some higher - scale interactions makes this approach to be invaluable in searching for a `` new '' physics at forthcoming experiments . the approach under discussion has been successfully used in the description of weak decays of the hadrons containing a single heavy quark , as carried out in the framework of heavy quark effective theory ( hqet ) @xcite , in the annihilation and radiative decays of heavy quarkonia @xmath4 , where one used the framework of non - relativistic qcd ( nrqcd ) @xcite , and in the weak decays of long - lived heavy quarkonium with mixed flavours @xmath5 @xcite .- meson was recently reported by the cdf collaboration @xcite ; see ref.@xcite for a theoretical review of @xmath6-meson physics before the observation . ] the experimental data on the weak decays of heavy hadrons can be used for the determination of basic properties of weak interactions at a fundamental level , in particular , for the extraction of ckm matrix elements . the same approach is also valid for the baryons containing two heavy quarks . in addition to the information extracted from the analysis of hadrons with a single heavy flavor , the baryons with two heavy quarks , @xmath7 , provide a way to explore the nonspectator effects , where their importance is increased . here we would like to note , that in the case of systems with two heavy quarks , the hypothesis on the quark - hadron duality is more justified , and , so , the results of ope - based approach turn out to be more reliable . for these baryons we can apply a method , based on the combined hqet - nrqcd techniques @xcite , if we use the quark - diquark picture for the bound states . the expansion in the inverse heavy quark mass for the heavy diquark @xmath8 is a straightforward generalization of these techniques in the mesonic decays of @xmath6 @xcite , with the difference that , instead of the color - singlet systems , we deal with the color anti - triplet ones , with the appropriate account for the interaction with the light quark . first estimates of the lifetimes for the doubly heavy baryons @xmath9 and @xmath10 were recently performed in @xcite . using the same approach , but different values of parameters a repetition of our results for the case of doubly charmed baryons was done in @xcite . the spectroscopic characteristics of baryons with two heavy quarks and the mechanisms of their production in different interactions were discussed in refs . @xcite and @xcite , respectively . 
in this paper , we present the calculation of lifetimes for the doubly heavy baryons as well as reconsider the previous estimates with a use of slightly different set of parameters adjusted in the consideration of lifetime data for the observed heavy hadrons and improved spectroscopic inputs . as we made in the description of inclusive decays of the @xmath9 and @xmath10-baryons , we follow the papers @xcite , where all necessary generalizations to the case of hadrons with two heavy quarks and other corrections are discussed . we note , that in the leading order of ope expansion , the inclusive widths are determined by the mechanism of spectator decays involving free quarks , wherein the corrections in the perturbative qcd are taken into account . the introduction of subleading terms in the expansion over the inverse heavy quark masses - correction is absent , and the corrections begin with the @xmath11-terms . ] allows one to take into account the corrections due to the quark confinement inside the hadron . here , an essential role is played by both the motion of heavy quark inside the hadron and chromomagnetic interactions of quarks . the important ingredient of such corrections in the baryons with two heavy quarks is the presence of a compact heavy diquark , which implies that the square of heavy quark momentum is enhanced in comparison with the corresponding value for the hadrons with a single heavy quark . the next characteristic feature of baryons with two heavy quarks is the significant numerical impact on the lifetimes by the quark contents of hadrons , since in the third order over the inverse heavy quark mass , @xmath12 , the four - quark correlations in the total width are enforced in the effective lagrangian due to the two - particle phase space of intermediate states ( see the discussion in @xcite ) . in this situation , we have to add the effects of pauli interference between the products of heavy quark decays and the quarks in the initial state as well as the weak scattering involving the quarks composing the hadron . due to such terms we introduce the corrections depending on spectators and involving the masses of light and strange quarks in the framework of non - relativistic models with the constituent quarks , because they determine the effective physical phase spaces , strongly deviating from the naive estimates in th decays of charmed quarks . we take into account the corrections to the effective weak lagrangian due to the evolution of wilson coefficients from the scale of the order of heavy quark mass to the energy , characterizing the binding of quarks inside the hadron @xcite . the paper is organized as follows . in agreement with the general picture given above , we describe the scheme for the construction of ope for the total width of baryons containing two heavy quarks with account of corrections to the spectator widths in section 2 . the procedure for the estimation of non - perturbative matrix elements of operators in the doubly heavy baryons is considered in section 3 in terms of non - relativistic heavy quarks . section 4 is devoted to the numerical evaluation and discussion of parameter dependence of lifetimes of doubly heavy baryons . we conclude in section 5 by summarizing our results . 
in this section we describe the approach used for the calculation of total lifetimes for the doubly heavy baryons , originally formulated in @xcite , together with some new formulae , required for the evaluation of nonspectator effects in the decays of other baryons in the family of doubly heavy baryons , not considered previously . the optical theorem along with the hypothesis of integral quark - hadron duality , leads us to a relation between the total decay width of heavy quark and the imaginary part of its forward scattering amplitude . this relationship , applied to the @xmath13-baryon total decay width @xmath14 , can be written down as : @xmath15 where the @xmath13 state in eq . ( [ 1 ] ) has the ordinary relativistic normalization , @xmath16 , and the transition operator @xmath17 is determined by the expression @xmath18 where @xmath19 is the standard effective hamiltonian , describing the low energy interactions of initial quarks with the decays products , so that @xmath20 + h.c.\ ] ] where @xmath21[\bar q_{3\gamma}\gamma^{\nu}(1-\gamma_5)q_{4\delta}](\delta_{\alpha\beta}\delta _ { \gamma\delta}\pm\delta_{\alpha\delta}\delta_{\gamma\beta}),\ ] ] and @xmath22^{\frac{6}{33 - 2f } } , \quad c_- = \left [ \frac{\alpha_s(m_w)}{\alpha_s(\mu)}\right ] ^{\frac{-12}{33 - 2f}},\\\ ] ] where f is the number of flavors , @xmath23 run over the color indeces . under an assumption , that the energy release in the heavy quark decay is large , we can perform the operator product expansion for the transition operator @xmath17 in eq.([1 ] ) . in this way we obtain series of local operators with increasing dimensions over the energy scale , wherein the contributions to @xmath14 are suppressed by the increasing inverse powers of the heavy quark masses . this formalism has already been applied to calculate the total decay rates for the hadrons , containing a single heavy quark @xcite ( for the most early work , having used similar methods , see also @xcite ) and hadrons , containing two heavy quarks @xcite . as was already pointed in @xcite , the expansion , applied here , is simultaneously in the powers of both inverse heavy quark masses and the relative velocity of heavy quarks inside the hadron . thus , this fact shows the difference between the description for the doubly heavy baryons and the consideration of both the heavy - light mesons ( the expansion in powers of @xmath24 ) and the heavy - heavy mesons @xcite ( the expansion in powers of relative velocity of heavy quarks inside the hadron , where one can apply the scaling rules of nonrelativistic qcd @xcite ) . the operator product expansion explored has the form : @xmath25 the leading contribution in eq.([4 ] ) is determined by the operators @xmath26 , corresponding to the spectator decay of @xmath27-quarks . the use of motion equation for the heavy quark fields allows one to eliminate some redundant operators , so that no operators of dimension four contribute . there is a single operator of dimension five , @xmath28 . as we will show below , significant contributions come from the operators of dimension six @xmath29 , representing the effects of pauli interference and weak scattering for doubly heavy baryons . furthermore , there are also other operators of dimension six @xmath30 and @xmath31 , which are suppressed in comparison with @xmath32 @xcite . 
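To make the leading ingredients above concrete, the sketch below evaluates the LO Wilson coefficients with the exponents quoted in the text, c+ = [αs(MW)/αs(μ)]^{6/(33−2f)} and c− = [αs(MW)/αs(μ)]^{−12/(33−2f)}, together with the free-quark (spectator) semileptonic width with its standard phase-space factor, and converts a width into a lifetime. One-loop running of αs and all numerical inputs (αs(MZ), quark masses, |Vcb|, the scale μ) are illustrative assumptions; the QCD and 1/m corrections discussed in the text are not included.

```python
import math

# Numerical sketch of the leading ingredients: LO Wilson coefficients c+, c-
# with the exponents quoted in the text, and the free (spectator) semileptonic
# width with the standard phase-space factor.  One-loop running and all
# numerical inputs are illustrative assumptions.

def alpha_s(mu, nf, alpha_ref=0.118, mu_ref=91.19):
    """One-loop running of the strong coupling from mu_ref to mu."""
    b0 = 11.0 - 2.0 * nf / 3.0
    return alpha_ref / (1.0 + alpha_ref * b0 / (2.0 * math.pi) * math.log(mu / mu_ref))

def wilson_c(mu, nf, m_w=80.4):
    ratio = alpha_s(m_w, nf) / alpha_s(mu, nf)
    c_plus = ratio ** (6.0 / (33.0 - 2.0 * nf))
    c_minus = ratio ** (-12.0 / (33.0 - 2.0 * nf))
    return c_plus, c_minus

def spectator_width(m_q, v_ckm, m_final):
    """Tree-level Gamma = G_F^2 m_Q^5 / (192 pi^3) |V|^2 f(x), x = (m_f/m_Q)^2."""
    g_f = 1.16637e-5  # GeV^-2
    x = (m_final / m_q) ** 2
    f = 1.0 - 8.0 * x + 8.0 * x**3 - x**4 - 12.0 * x**2 * math.log(x)
    return g_f**2 * m_q**5 / (192.0 * math.pi**3) * v_ckm**2 * f

c_p, c_m = wilson_c(mu=1.3, nf=4)  # evolve down to a charm-like scale
print(f"c+ = {c_p:.2f}, c- = {c_m:.2f}")

gamma = spectator_width(m_q=4.8, v_ckm=0.041, m_final=1.4)  # b -> c l nu, one lepton channel
print(f"Gamma(b -> c l nu) ~ {gamma:.2e} GeV per channel")

HBAR_GEV_S = 6.582e-25  # hbar in GeV*s
print(f"a total width of this size alone would give tau ~ {HBAR_GEV_S / gamma:.1e} s")
```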
in what follows , we do not calculate the corresponding coefficient functions for the latter two operators , so that the expansion is certainly complete up to the second order of @xmath33 , only . further , the different contributions to ope are given by the following : @xmath34 where the @xmath35-labelled terms account for the operators of dimension three @xmath36 and five @xmath37 , the @xmath38-marked terms correspond to the effects of pauli interference and weak scattering . the explicit formulae for these contributions have the following form : @xmath39o_{gb } , \label{5}\ ] ] @xmath40o_{gc } , \label{6}\ ] ] where @xmath41 with @xmath42 , and @xmath43 denotes the spectator width ( see @xcite ) : @xmath44 @xmath45 + 12r^2y^2\ln\frac{(1-r - y+\sqrt{1 - 2(r+y)+(r - y)^2})^2}{4ry}\end{aligned}\ ] ] @xmath46 @xmath47 where @xmath48 and @xmath49 . the functions @xmath50 can be obtained from @xmath51 by the substitution @xmath52 . in the @xmath53-quark decays , we neglect the value @xmath54 and suppose @xmath55 . the calculation of both the pauli interference effect for the products of heavy quark decays with the quarks in the initial state and the weak scattering of quarks , composing the hadron , depends on the quark contents of baryons and results in : @xmath56 so that @xmath57 \label{16 } \\ & & [ ( c_{+ } - c_{-})^2 + \frac{1}{3}(1-k^{\frac{1}{2}})(5c_{+}^2+c_{-}^2 + 6c_{-}c_{+})]+ \nonumber\\ & & [ ( \frac{(1-z_{-})^2}{2 } - \frac{(1-z_{-})^3}{4})(\bar b_i\gamma_{\alpha}(1-\gamma_5)b_j)(\bar c_j\gamma^{\alpha}(1-\gamma_5)c_i ) + \nonumber\\ & & ( \frac{(1-z_{-})^2}{2 } - \frac{(1-z_{-})^3}{3})(\bar b_i\gamma_{\alpha}\gamma_5b_j)(\bar c_j\gamma^{\alpha}(1-\gamma_5)c_i)]k^{\frac{1}{2}}(5c_{+}^2+c_{-}^2 + 6c_{-}c_{+ } ) ) , \nonumber\\ { \cal t}_{pi,\tau\bar\nu_{\tau}}^b & = & -\frac{g_f^2|v_{cb}|^2}{\pi}m_b^2(1-\frac{m_c}{m_b})^2\nonumber\\ & & [ ( \frac{(1-z_{\tau})^2}{2 } - \frac{(1-z_{\tau})^3}{4})(\bar b_i\gamma_{\alpha}(1-\gamma_5)b_j)(\bar c_j\gamma^{\alpha}(1-\gamma_5)c_i ) + \label{17}\\ & & ( \frac{(1-z_{\tau})^2}{2 } - \frac{(1-z_{\tau})^3}{3})(\bar b_i\gamma_{\alpha}\gamma_5b_j)(\bar c_j\gamma^{\alpha}(1-\gamma_5)c_i)],\nonumber\\ { \cal t}_{pi , d\bar u}^{b ' } & = & -\frac{g_f^2|v_{cb}|^2}{4\pi}m_b^2(1-\frac{m_d}{m_b})^2\nonumber\\ & & ( [ ( \frac{(1-z_{-})^2}{2}- \frac{(1-z_{-})^3}{4 } ) ( \bar b_i\gamma_{\alpha}(1-\gamma_5)b_i)(\bar d_j\gamma^{\alpha}(1-\gamma_5)d_j ) + \nonumber\\ & & ( \frac{(1-z_{-})^2}{2 } - \frac{(1-z_{-})^3}{3})(\bar b_i\gamma_{\alpha}\gamma_5 b_i)(\bar d_j\gamma^{\alpha}(1-\gamma_5)d_j ) ] \label{18 } \\ & & [ ( c_{+ } + c_{-})^2 + \frac{1}{3}(1-k^{\frac{1}{2}})(5c_{+}^2+c_{-}^2 - 6c_{-}c_{+})]+ \nonumber\\ & & [ ( \frac{(1-z_{-})^2}{2 } - \frac{(1-z_{-})^3}{4})(\bar b_i\gamma_{\alpha}(1-\gamma_5)b_j)(\bar d_j\gamma^{\alpha}(1-\gamma_5)d_i ) + \nonumber\\ & & ( \frac{(1-z_{-})^2}{2 } - \frac{(1-z_{-})^3}{3})(\bar b_i\gamma_{\alpha}\gamma_5b_j)(\bar d_j\gamma^{\alpha}(1-\gamma_5)d_i)]k^{\frac{1}{2}}(5c_{+}^2+c_{-}^2 - 6c_{-}c_{+ } ) ) , \nonumber\\ { \cal t}_{pi , s\bar c}^{b ' } & = & -\frac{g_f^2|v_{cb}|^2}{16\pi}m_b^2(1-\frac{m_s}{m_b})^2\sqrt{(1 - 4z_{- } ) } \nonumber\\ & & ( [ ( 1-z_{-})(\bar b_i\gamma_{\alpha}(1-\gamma_5)b_i)(\bar s_j\gamma^{\alpha}(1-\gamma_5)s_j ) + \nonumber\\ & & \frac{2}{3}(1 + 2z_{-})(\bar b_i\gamma_{\alpha}\gamma_5 b_i)(\bar s_j\gamma^{\alpha}(1-\gamma_5)s_j ) ] \label{181 } \\ & & [ ( c_{+ } + c_{-})^2 + \frac{1}{3}(1-k^{\frac{1}{2}})(5c_{+}^2+c_{-}^2 - 6c_{-}c_{+})]+ \nonumber\\ & & [ ( 1-z_{-})(\bar 
b_i\gamma_{\alpha}(1-\gamma_5)b_j)(\bar s_j\gamma^{\alpha}(1-\gamma_5)s_i ) + \nonumber\\ & & \frac{2}{3}(1 + 2z_{-})(\bar b_i\gamma_{\alpha}\gamma_5b_j)(\bar s_j\gamma^{\alpha}(1-\gamma_5)s_i)]k^{\frac{1}{2}}(5c_{+}^2+c_{-}^2 - 6c_{-}c_{+ } ) ) , \nonumber\\ { \cal t}_{pi , u\bar d}^c & = & -\frac{g_f^2}{4\pi}m_c^2(1-\frac{m_u}{m_c})^2\nonumber\\ & & ( [ ( \frac{(1-z_{-})^2}{2}- \frac{(1-z_{-})^3}{4 } ) ( \bar c_i\gamma_{\alpha}(1-\gamma_5)c_i)(\bar u_j\gamma^{\alpha}(1-\gamma_5)u_j ) + \nonumber\\ & & ( \frac{(1-z_{-})^2}{2 } - \frac{(1-z_{-})^3}{3})(\bar c_i\gamma_{\alpha}\gamma_5 c_i)(\bar u_j\gamma^{\alpha}(1-\gamma_5)u_j ) ] \label{19 } \\ & & [ ( c_{+ } + c_{-})^2 + \frac{1}{3}(1-k^{\frac{1}{2}})(5c_{+}^2+c_{-}^2 - 6c_{-}c_{+})]+ \nonumber\\ & & [ ( \frac{(1-z_{-})^2}{2 } - \frac{(1-z_{-})^3}{4})(\bar c_i\gamma_{\alpha}(1-\gamma_5)c_j)(\bar u_j\gamma^{\alpha}(1-\gamma_5)u_i ) + \nonumber\\ & & ( \frac{(1-z_{-})^2}{2 } - \frac{(1-z_{-})^3}{3})(\bar c_i\gamma_{\alpha}\gamma_5c_j)(\bar u_j\gamma^{\alpha}(1-\gamma_5)u_i)]k^{\frac{1}{2}}(5c_{+}^2+c_{-}^2 - 6c_{-}c_{+ } ) ) , \nonumber\\ { \cal t}_{pi , u\bar d}^{c ' } & = & -\frac{g_f^2}{4\pi}m_c^2(1-\frac{m_s}{m_c})^2\nonumber\\ & & ( [ \frac{1}{4}(\bar c_i\gamma_{\alpha}(1-\gamma_5)c_i)(\bar s_j\gamma^{\alpha}(1-\gamma_5)s_j ) + \frac{1}{6}(\bar c_i\gamma_{\alpha}\gamma_5 c_i)(\bar s_j\gamma^{\alpha}(1-\gamma_5)s_j ) ] \label{191 } \\ & & [ ( c_{+ } - c_{-})^2 + \frac{1}{3}(1-k^{\frac{1}{2}})(5c_{+}^2+c_{-}^2 + 6c_{-}c_{+})]+ \nonumber\\ & & [ \frac{1}{4}(\bar c_i\gamma_{\alpha}(1-\gamma_5)c_j)(\bar s_j\gamma^{\alpha}(1-\gamma_5)s_i ) + \nonumber\\ & & \frac{1}{6}(\bar c_i\gamma_{\alpha}\gamma_5c_j)(\bar s_j\gamma^{\alpha}(1-\gamma_5)s_i)]k^{\frac{1}{2}}(5c_{+}^2+c_{-}^2 + 6c_{-}c_{+ } ) ) , \nonumber\\ { \cal t}_{pi,\nu_{\tau}\bar\tau}^c & = & -\frac{g_f^2}{\pi}m_c^2(1-\frac{m_s}{m_c})^2\nonumber\\ & & [ ( \frac{(1-z_{\tau})^2}{2 } - \frac{(1-z_{\tau})^3}{4})(\bar c_i\gamma_{\alpha}(1-\gamma_5)c_j)(\bar s_j\gamma^{\alpha}(1-\gamma_5)s_i ) + \label{192}\\ & & ( \frac{(1-z_{\tau})^2}{2 } - \frac{(1-z_{\tau})^3}{3})(\bar c_i\gamma_{\alpha}\gamma_5c_j)(\bar s_j\gamma^{\alpha}(1-\gamma_5)s_i)],\nonumber\\ { \cal t}_{ws , bc } & = & \frac{g_f^2|v_{cb}|^2}{4\pi}m_b^2(1+\frac{m_c}{m_b})^2(1-z_{+})^2[(c_{+}^2 + c_{-}^2 + \frac{1}{3}(1 - k^{\frac{1}{2}})(c_{+}^2 - c_{-}^2))\nonumber\\ & & ( \bar b_i\gamma_{\alpha}(1 - \gamma_5)b_i)(\bar c_j\gamma^{\alpha}(1 - \gamma_5)c_j ) + \label{20}\\ & & k^{\frac{1}{2}}(c_{+}^2 - c_{-}^2)(\bar b_i\gamma_{\alpha}(1 - \gamma_5)b_j ) ( \bar c_j\gamma^{\alpha}(1 - \gamma_5)c_i)],\nonumber\\ { \cal t}_{ws , bu } & = & \frac{g_f^2|v_{cb}|^2}{4\pi}m_b^2(1+\frac{m_u}{m_b})^2(1-z_{+})^2[(c_{+}^2 + c_{-}^2 + \frac{1}{3}(1 - k^{\frac{1}{2}})(c_{+}^2 - c_{-}^2))\nonumber\\ & & ( \bar b_i\gamma_{\alpha}(1 - \gamma_5)b_i)(\bar u_j\gamma^{\alpha}(1 - \gamma_5)u_j ) + \label{21}\\ & & k^{\frac{1}{2}}(c_{+}^2 - c_{-}^2)(\bar b_i\gamma_{\alpha}(1 - \gamma_5)b_j ) ( \bar u_j\gamma^{\alpha}(1 - \gamma_5)u_i)],\nonumber\\ { \cal t}_{ws , cd } & = & \frac{g_f^2}{4\pi}m_c^2(1+\frac{m_d}{m_c})^2(1-z_{+})^2[(c_{+}^2 + c_{-}^2 + \frac{1}{3}(1 - k^{\frac{1}{2}})(c_{+}^2 - c_{-}^2))\nonumber\\ & & ( \bar c_i\gamma_{\alpha}(1 - \gamma_5)c_i)(\bar d_j\gamma^{\alpha}(1 - \gamma_5)d_j ) + \label{22}\\ & & k^{\frac{1}{2}}(c_{+}^2 - c_{-}^2)(\bar c_i\gamma_{\alpha}(1 - \gamma_5)c_j ) ( \bar d_j\gamma^{\alpha}(1 - \gamma_5)d_i)],\nonumber\end{aligned}\ ] ] @xmath58 where the following notation has been used : @xmath59 in the evolution of 
coefficients @xmath60 and @xmath61 , we have taken into account the threshold effects , connected to the heavy quark masses . in expressions ( [ 5 ] ) and ( [ 6 ] ) , the scale @xmath62 has been taken approximately equal to @xmath63 . in the pauli interference term , we suggest that the scale can be determined on the basis of the agreement of the experimentally known difference between the lifetimes of @xmath64 , @xmath65 and @xmath66 with the theoretical predictions in the framework described above . in any case , the choice of the normalization scale leads to uncertainties in the final results . theoretical accuracy can be improved by the calculation of next - order corrections in the powers of qcd coupling constant . the coefficients of leading terms , represented by operators @xmath67 and @xmath68 , are equivalent to the widths fot the decays of free quarks and are known in the next - to - leading logarithmic approximation of qcd @xcite , including the mass corrections in the final state with the charmed quark and @xmath69-lepton @xcite in the decays of @xmath53-quark and with the strange quark mass for the decays of @xmath70-quark . in the numerical estimates , we include these corrections and mass effects , but we neglect the decay modes suppressed by the cabibbo angle , and also the strange quark mass effects in @xmath53 decays . the expressions for the contribution of operator @xmath71 are known in the leading logarithmic approximation . the expressions for the contributions of operators with the dimension 6 have been calculated by us with account of masses in the final states together with the logarithmic renormalization of the effective lagrangian for the non - relativistic heavy quarks at energies less than the heavy quark masses . thus , the calculation of lifetimes for the baryons @xmath72 is reduced to the problem of evaluating the matrix elements of operators , which is the subject of next section . by using the equations of motion , the matrix element of operator @xmath73 can be expanded in series over the powers of @xmath74 : @xmath75q^j|\xi_{qq'}^{\diamond}\rangle_{norm}}{2m_{q^j}^2 } + o(\frac{1}{m_{q^j}^3}).\end{aligned}\ ] ] thus , we have to estimate the matrix elements of operators from the following list only : @xmath76 the meaning of each term in the above list , was already discussed by us in the previous papers on the decays of doubly heavy baryons @xcite , so we omit it here . further , employing the nrqcd expansion of operators @xmath77 and @xmath78 , we have @xmath79 here the factorization at scale @xmath62 ( @xmath80 ) is supposed . we have omitted the term of @xmath81 , corresponding to the spin - orbital interactions , which are not essential for the basic state of baryons under consideration . the field @xmath82 has the standard non - relativistic normalization . now we would like to make some comments on the difference between the descriptions of interactions of the heavy quark with the light and heavy heavy ones . as well known , in the doubly heavy subsystem there is an additional parameter which is the relative velocity of quarks . it introduces the energy scale equal to @xmath83 . therefore , the darwin term ( @xmath84 ) in the heavy subsystem stands in the same order of inverse heavy quark mass in comparison with the chromomagnetic term ( @xmath85 ) ( they have the same power in the velocity @xmath86 ) . 
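A rough numerical sketch of the leading nonperturbative correction just described is given below: the matrix element of the leading operator is expanded as ⟨O_Q⟩ ≈ 1 − ⟨p_Q²⟩/(2m_Q²) + …, so the spectator width is multiplied by roughly that factor. The ⟨p²⟩ values used are assumptions standing in for the diquark and light quark-diquark contributions estimated in the text (the actual numbers sit behind the placeholder symbols), and the chromomagnetic term, whose coefficient is channel dependent, is not reproduced here.

```python
# Leading kinetic-energy correction to the spectator width,
# <O_Q> ~ 1 - <p_Q^2>/(2 m_Q^2).  The <p^2> inputs below are assumptions,
# not the paper's values; the chromomagnetic piece is omitted.

def kinetic_correction(p2_gev2, m_q):
    """Multiplicative correction 1 - <p^2>/(2 m_Q^2) to the free-quark width."""
    return 1.0 - p2_gev2 / (2.0 * m_q**2)

p2_diquark      = 0.8  # GeV^2, assumed heavy-quark momentum squared inside the diquark
p2_light_system = 0.4  # GeV^2, assumed contribution from the light-quark--diquark motion
p2_total = p2_diquark + p2_light_system

for label, m_q in (("c quark", 1.6), ("b quark", 4.8)):
    print(f"{label}: width correction factor ~ {kinetic_correction(p2_total, m_q):.3f}")
```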
this statement becomes evident if we apply the scaling rules of nrqcd @xcite : @xmath87 for the interaction of the heavy quark with the light one , there is no such small velocity parameter , so that the darwin term is suppressed by the additional factor of @xmath88 . further , the phenomenological experience with the potential quark models shows that the kinetic energy of quarks practically does not depend on the quark contents of the system , and it is determined by the color structure of the state . so , we suppose that the kinetic energy is equal to @xmath89 for the quark - diquark system , and it is @xmath90 in the diquark ( the color factor of 1/2 ) . then @xmath91 @xmath92 where the diquark terms certainly dominate . applying the quark - diquark approximation and relating the matrix element of the chromomagnetic interaction of the diquark with the light quark to the mass difference between the excited and ground states @xmath93 , we have @xmath94 the numerical values of parameters used in the calculations above are given in the next section . our presentation here is less detailed than in previous papers @xcite . however , we hope that the interested reader can find there all needed details . analogous expressions may be obtained for the matrix elements of operator @xmath95 @xmath96 the permutations of quark masses lead to the required expressions for the operators @xmath67 and @xmath97 . for the four - quark operators , determining the pauli interference and the weak scattering , we use the estimates in the framework of the non - relativistic potential model @xcite : @xmath98 where @xmath99 is an arbitrary spinor matrix . performing the numerical calculations of lifetimes for the doubly heavy baryons , we have used the following set of parameters : @xmath100 the numerical values of the diquark wavefunctions at the origin for the baryons under consideration are collected in table [ wf ] . the masses of doubly heavy baryons may be found in table [ bmasses ] . [ table [ wf ] : the values of the diquark wavefunctions for the doubly heavy baryons at the origin . ] a small comment concerns the corrections to the spectator decays of heavy quarks , caused by the motion of heavy quarks inside the hadron and interactions with the light degrees of freedom . the corrections due to the quark - gluon operators of dimension 5 are numerically small @xcite . the most important terms come from the kinetic energy of heavy quarks . in figs . [ ccu]-[bbs ] we have shown the dependence of the baryon lifetimes on the values of the light quark - diquark wavefunctions at the origin . we see quite a different behaviour with the increase of the @xmath101-parameter . here , we would like to note that in this paper we do not give a detailed discussion of nonspectator effects on the total lifetimes and semileptonic branching ratios of doubly heavy baryons and promise to fill this gap in one of our subsequent papers @xcite . finally , concerning the uncertainties of the presented estimates , we note that they are mainly determined by the following : 1 ) the @xmath70-quark mass is poorly known , but , constrained by the fits to the experimental data discussed above , it can lead to the uncertainty @xmath102 in the case of doubly charmed baryons and @xmath103 in the case of @xmath104 - baryons . 2 ) the uncertainties in the values of the diquark and light quark - diquark wavefunctions lead to @xmath105 in the case of doubly charmed baryons and @xmath102 for the @xmath104 - baryons .
thus , the estimated uncertainty in predictions for the lifetimes of doubly heavy baryons is close to @xmath106 in the case of @xmath107 - baryons , of the order of @xmath108 in the case of doubly charmed baryons and less than @xmath109 in the case of @xmath110 - baryons . ( figures [ ccu]-[bbs ] , showing the dependence of the baryon lifetimes on the light quark - diquark wavefunction at the origin , appear here . ) in the present paper we have performed a detailed investigation and numerical estimates of the lifetimes of doubly heavy baryons . the approach used is based on the ope expansion of the total widths of the corresponding hadrons , combined with the formalism of effective field theories developed previously . in this way , we have accounted for both the perturbative qcd and the mass corrections to the wilson coefficients of the operators . the nonspectator effects , represented by the pauli interference and the weak scattering , and their influence on the total lifetimes are considered . the obtained results show the significant role played by these effects in the description of the lifetimes of doubly heavy baryons . this work is in part supported by the russian foundation for basic research , grants 96 - 02 - 18216 and 96 - 15 - 96575 . the work of a.i . onishchenko was supported by the international center of fundamental physics in moscow and the soros science foundation . * * , nucl . . suppl . * 54a * , 297 ( 1997 ) . ( 1994 ) 259 . d51 ( 1995 ) 1125 , + phys . d55 ( 1997 ) 5853 . d53 ( 1996 ) 4991 . ( 1998 ) 2432 , phys . d58 ( 1998 ) 112004 . , preprint ihep 98 - 22 [ hep - ph/9803433 ] ; + _ s.s.gershtein et al . _ , uspekhi fiz . ( 1995 ) 3 . , phys.rev . * d60 * , ( 1999 ) 014007 , hep - ph/9807354 . , hep - ph/9901224 . , hep - ph/9912424 , eur.phys.j . * c9 * , ( 1999 ) 213 , hep - ph/9901323 , hep - ph/9911241 . , mod.phys.lett . * a14 * , ( 1999 ) 135 , hep - ph/9807375 , heavy ion phys . * 9 * , ( 1999 ) 133 , hep - ph/9811212 , z. phys . c76 ( 1997 ) 111 ; + _ j.g.k@xmath111rner , m.kr@xmath112mer , d.pirjol_ , prog . 33 ( 1994 ) 787 ; + _ r.roncaglia , d.b.lichtenberg , e.predazzi_ , phys . d52 ( 1995 ) 1722 ; + _ e.bagan , m.chabab , s.narison_ , phys . b306 ( 1993 ) 350 ; + _ m.j.savage and m.b.wise_ , phys . b248 ( 1990 ) 117 ; + _ m.j.savage and r.p.springer_ , int . phys . a6 ( 1991 ) 1701 ; + _ s.fleck and j.m.richard_ , part . world 1 ( 1989 ) 760 , prog . 82 ( 1989 ) 760 ; + _ d.b.lichtenberg , r. roncaglia , e. predazzi _ , phys . d53 ( 1996 ) 6678 ; + _ m.l.stong_ , hep - ph/9505217 ; + _ j.m.richard_ , phys . 212 ( 1992 ) 1 . , phys.lett.*b306 * , ( 1993 ) 350 ; + _ e. bagan et al . , _ * c64 * , ( 1994 ) , 57 ; + _ v.v . kiselev , a.i . onishchenko _ , hep - ph/9909337 . b332 ( 1994 ) 411 ; + _ a.falk et al . d49 ( 1994 ) 555 ; + _ a.v . berezhnoi , v.v . kiselev , a.k . likhoded _ , phys . atom . ( 1996 ) 870 [ yad . 59 ( 1996 ) 909 ] ; + _ m.a . doncheski , j. steegborn , m.l . d53 ( 1996 ) 1247 ; + _ s.p.baranov_ , phys . d56 ( 1997 ) 3046 ; + _ a.v.berezhnoy , v.v.kiselev , a.k.likhoded , a.i.onishchenko_ , phys . d57 ( 1997 ) 4385 ; + _ v.v.kiselev , a.e.kovalsky_ , hep - ph/9908321 . , yad . 41 ( 1985 ) 187 ; + _ m.b.voloshin , m.a.shifman_ , zh . fiz . 64 ( 1986 ) 698 ; + _ m.b.voloshin_ , preprint tip - minn-96/4-t , umn - th-1425 - 96 . 1996 . [ hep - th/9604335 ] . , `` b decays '' , second edition , ed . s. stone ( world scientific , singapore , 1994 ) , z.phys . * c33 * , ( 1986 ) 297 . d46 ( 1992 ) 4052 . b293 ( 1992 ) 430 , phys .
b297 ( 1993 ) 477 ; + _ b.blok , m.shifman_ , nucl . b399 ( 1993 ) 441 , 459 ; + _ i.bigi et al . b323 ( 1994 ) 408 . , d49 ( 1994)1310 . b326 ( 1994 ) 145 ; + _ l.koyrakh_ , phys . d49 ( 1994 ) 3379 . b187 ( 1981 ) 461 . b333 ( 1990 ) 66 . b391 ( 1993 ) 501 . b122 ( 1983 ) 297 , ann . ( 1984 ) 202 . b432 ( 1994 ) 3 , phys . b342 ( 1995)362 ; + _ e.bagan et al . b351 ( 1995 ) 546 . , phys . rev . * d12 * , ( 1975 ) 147 , phys.rev . * d24 * , ( 1981 ) 2982 . , in preparation
we perform a detailed investigation of the total lifetimes of the doubly heavy baryons @xmath0 , @xmath1 in the framework of the operator product expansion in the inverse heavy quark mass , while the matrix elements of the operators obtained in the ope are estimated in the approximations of nonrelativistic qcd . * lifetimes of doubly heavy baryons * + a.k . likhoded@xmath2 , a.i . onishchenko@xmath3 + + _ protvino , moscow region , 142284 russia _ + b ) institute for theoretical and experimental physics + _ moscow , b. cheremushkinskaja , 25 , 117259 russia + fax : 7 ( 095 ) 123 - 65 - 84 _
consider a minimal - delay space - time coded rayleigh quasi - static flat fading mimo channel with full channel state information at the receiver ( csir ) . the input output relation for such a system is given by @xmath0 where @xmath1 is the channel matrix and @xmath2 is the additive noise . both @xmath3 and @xmath4 have entries that are i.i.d . complex - gaussian with zero mean and variance 1 and @xmath5 respectively . the transmitted codeword is @xmath6 and @xmath7 is the received matrix . the ml decoding metric to minimize over all possible values of the codeword @xmath8 is @xmath9 [ ld_stbc_def ] @xcite : a linear stbc @xmath10 over a real ( 1-dimensional ) signal set @xmath11 , is a finite set of @xmath12 matrices , where any codeword matrix belonging to the code @xmath10 is obtained from , @xmath13 by letting the real variables @xmath14 take values from a real signal set @xmath15 where @xmath16 are fixed @xmath12 complex matrices defining the code , known as the weight matrices . the rate of this code is @xmath17 complex symbols per channel use . we are interested in linear stbcs , since they admit sphere decoding ( sd ) @xcite and other qr decomposition based decoding techniques such as the qrdm decoder @xcite which are fast ways of decoding for the variables . designing stbcs with low decoding complexity has been studied widely in the literature . orthogonal designs with single symbol decodability were proposed in @xcite , @xcite , @xcite . for stbcs with more than two transmit antennas , these came at a cost of reduced transmission rates . to increase the rate at the cost of higher decoding complexity , multi - group decodable stbcs were introduced in @xcite , @xcite , @xcite . another set of low decoding complexity codes known as the fast decodable codes were studied in @xcite . fast decodable codes have reduced sd complexity owing to the fact that a few of the variables can be decoded as single symbols or in groups if we condition them with respect to the other variables . fast decodable codes for asymmetric systems using division algebras have been reported @xcite . the properties of fast decodable codes and multi - group decodable codes were combined and a new class of codes called fast group decodable codes were studied in @xcite . a new code property called the _ block - orthogonal _ property was studied in @xcite which can be exploited by the qr - decomposition based decoders to achieve significant decoding complexity reduction without performance loss . this property was exploited in @xcite to reduce to the average ml decoding complexity of the golden code @xcite and also in @xcite to reduce the worst - case complexity of the golden code with a small performance loss . while the other low decoding complexity stbcs use the zero entries in the upper left portion of the upper triangular matrix after the qr decomposition , these decoders utilize the zeroes in the lower right portion to reduce the complexity further . the contributions of this paper are as follows : * we generalize the set of sufficient conditions for an stbc to be block orthogonal provided in @xcite for sub - block sizes greater than 1 . * we provide analytical proofs that the codes obtained from the sum of clifford unitary weight designs ( cuwds ) @xcite exhibit the block orthogonal property when we choose the right ordering and the right number of matrices . 
* we provide new methods of construction of bostbcs using coordinate interleaved orthogonal designs ( ciods ) @xcite , cyclic division algebras ( cdas ) @xcite and crossed product algebras ( cpas ) @xcite along with the analytical proofs of their block orthogonality . * we show that the ordering of variables of the stbc used for the qr decomposition dictates the block orthogonal structure and its parameters . * we show how the block orthogonal property of the stbcs can be exploited to reduce the decoding complexity of a sphere decoder which uses a depth first search approach . * we provide bounds on the maximum possible reduction in the euclidean metrics ( em ) calculation during sphere decoding of bostbcs . * simulation results show that we can reduce the decoding complexity of existing stbcs by upto 30% by utilizing the block orthogonal property . the remaining part of the paper is organized as follows : in section [ sec2 ] the system model and some known classes of low decoding complexity codes are reviewed . in section [ sec3 ] , we derive a set of sufficient conditions for an stbc to be block orthogonal and also the effect of ordering of matrices on it . in section [ sec4 ] , we present proofs of block orthogonal structure of various existing codes and also discuss some new methods of constructions of the same . in section [ sec5 ] , we discuss a method to reduce the number of em calculations while decoding a bostbc using a depth first search based sphere decoder and also derive bounds for the same . simulation results for the decoding complexity of various bostbcs are presented in section [ sec6 ] . concluding remarks constitute section [ sec7 ] . _ notations : _ throughout the paper , bold lower - case letters are used to denote vectors and bold upper - case letters to denote matrices . for a complex variable @xmath18 , denote the real and imaginary part of @xmath18 by @xmath19 and @xmath20 respectively . the sets of all integers , all real and complex numbers are denoted by @xmath21 and @xmath22 , respectively . the operation of stacking the columns of @xmath23 one below the other is denoted by @xmath24 . the kronecker product is denoted by @xmath25 , @xmath26 and @xmath27 denote the @xmath28 identity matrix and the null matrix , respectively . for a complex variable @xmath18 , the @xmath29 operator acting on @xmath18 is defined as follows @xmath30.\ ] ] the @xmath29 operator can similarly be applied to any matrix @xmath31 by replacing each entry @xmath32 by @xmath33 , @xmath34 , @xmath35 , resulting in a matrix denoted by @xmath36 . given a complex vector @xmath37^{t}$ ] , @xmath38 is defined as @xmath39^{t}$ ] . for any linear stbc with variables @xmath40 given by ( [ ld_stbc ] ) , the generator matrix @xmath41 @xcite is defined by @xmath42 where @xmath43^{t}$ ] . in terms of the weight matrices , the generator matrix can be written as @xmath44.\ ] ] hence , for any stbc , can be written as @xmath45 where @xmath46 is given by @xmath47 and @xmath43,$ ] with each @xmath48 drawn from a 1-dimensional ( pam ) constellation . using the above equivalent system model , the ml decoding metric can be written as @xmath49 using @xmath50 decomposition of @xmath51 , we get @xmath52 where @xmath53 is an orthonormal matrix and @xmath54 is an upper triangular matrix . 
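The construction of the equivalent real-valued channel matrix and its QR decomposition described above can be prototyped in a few lines. The sketch below assumes the model Y = HX + N with real symbols x_i, stacks real and imaginary parts in one common ordering (not necessarily the paper's exact generator-matrix convention), and uses the Alamouti weight matrices purely as a familiar test case, for which R comes out diagonal.

```python
import numpy as np

def real_equivalent_channel(H, weights):
    """Columns are [Re(vec(H A_i)); Im(vec(H A_i))] for each real symbol x_i,
    so that the real-stacked received vector equals H_eq x + noise when
    X = sum_i x_i A_i and Y = H X + N."""
    cols = []
    for A in weights:
        v = (H @ A).reshape(-1, order="F")        # vec(): column-major stacking
        cols.append(np.concatenate([v.real, v.imag]))
    return np.column_stack(cols)

# Alamouti weight matrices for X = sum_i x_i A_i with real x_i
# (s1 = x1 + j x2, s2 = x3 + j x4):
A = [np.array([[1, 0], [0, 1]]),
     np.array([[1j, 0], [0, -1j]]),
     np.array([[0, -1], [1, 0]]),
     np.array([[0, 1j], [1j, 0]])]

rng = np.random.default_rng(0)
H = (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))) / np.sqrt(2)

H_eq = real_equivalent_channel(H, A)
Q, R = np.linalg.qr(H_eq)

# Zero pattern of R ('.' marks near-zero entries).  For an orthogonal design
# such as Alamouti the columns of H_eq are orthogonal for every H, so R is
# diagonal, i.e. single-real-symbol decodable.
print("\n".join(" ".join("." if abs(r) < 1e-10 else "x" for r in row) for row in R))

# ML metric equivalence: ||y - H_eq x||^2 = ||Q^T y - R x||^2 + constant.
x = rng.choice([-3.0, -1.0, 1.0, 3.0], size=4)            # 4-PAM real symbols
y = H_eq @ x + 0.1 * rng.standard_normal(H_eq.shape[0])
print(np.linalg.norm(y - H_eq @ x) ** 2,
      np.linalg.norm(Q.T @ y - R @ x) ** 2
      + np.linalg.norm(y) ** 2 - np.linalg.norm(Q.T @ y) ** 2)
```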
using this , the ml decoding metric now changes to @xmath55 if we have @xmath56 , $ ] where @xmath57 are column vectors , then the @xmath58 and @xmath59 matrices have the following form obtained by the gram - schmidt orthogonalization : @xmath60 , \ ] ] where @xmath61 are column vectors , and @xmath62,\ ] ] where @xmath63 and for @xmath64 @xmath65 a brief overview of the known low decoding complexity codes is given in this section . the codes that will be described are multi - group decodable codes , fast decodable codes and fast group decodable codes . in case of a multi - group decodable stbc , the variables can be partitioned into groups such that the ml decoding metric is decoupled into submetrics such that only the members of the same group need to be decoded jointly . it can be formally defined as @xcite , @xcite , @xcite : [ multi_group_decodability ] an stbc is said to be @xmath66-group decodable if there exists a partition of @xmath67 into @xmath66 non - empty subsets @xmath68 such that the following condition is satisfied : @xmath69 whenever @xmath70 and @xmath71 and @xmath72 . if we group all the variables of the same group together in , then the @xmath59 matrix for the sd @xcite , @xcite in case of multi - group decodable codes will be of the following form : @xmath73,\ ] ] where @xmath74 is a square upper triangular matrix . now , consider the standard sd of an stbc . suppose the @xmath59 matrix as defined in turns out to be such that when we fix values for a set of symbols , the rest of the symbols become group decodable , then the code is said to be fast decodable . formally , it is defined as follows : [ fast_decodability ] an stbc is said to be fast sd if there exists a partition of @xmath75 where @xmath76 into @xmath66 non - empty subsets @xmath68 such that the following condition is satisfied for all @xmath77 @xmath78 whenever @xmath79 and @xmath80 and @xmath81 where @xmath82 and @xmath83 are obtained from the @xmath50 decomposition of the equivalent channel matrix @xmath56 = \textbf{q}\textbf{r}$ ] with @xmath57 as column vectors and @xmath84 $ ] with @xmath61 as column vectors as defined in . hence , by conditioning @xmath85 variables , the code becomes @xmath66-group decodable . as a special case , when no conditioning is needed , i.e. , @xmath86 , then the code is @xmath66-group decodable . the @xmath59 matrix for fast decodable codes will have the following form : @xmath87,\ ] ] where @xmath88 is an @xmath89 block diagonal , upper triangular matrix , @xmath90 is a square upper triangular matrix and @xmath91 is a rectangular matrix . fast group decodable codes were introduced in @xcite . these codes combine the properties of multi - group decodable codes and the fast decodable codes . these codes allow each of the groups in the multi - group decodable codes to be fast decoded . the @xmath59 matrix for a fast group decodable code will have the following form : @xmath92,\ ] ] where each @xmath93 will have the following form : @xmath94,\ ] ] where @xmath95 is an @xmath96 block diagonal , upper triangular matrix , @xmath97 is a square upper triangular matrix and @xmath98 is a rectangular matrix . block orthogonal codes introduced in @xcite are a sub - class of fast decodable / fast group decodable codes . they impose an additional structure on the variables conditioned in these codes . 
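All of the decodability classes above are stated through pairwise conditions on the weight matrices. Assuming the condition in question is the usual Hurwitz-Radon criterion (A_i A_j^H + A_j A_i^H = 0 for the Y = HX convention used in the earlier sketch; for Y = XH it becomes A_i^H A_j + A_j^H A_i = 0), the grouping of real symbols can be found mechanically, as sketched below.

```python
import numpy as np
from itertools import combinations

def hr_orthogonal(Ai, Aj, tol=1e-12):
    """Hurwitz-Radon-type orthogonality test (assumed convention: Y = H X,
    real symbols), which makes columns i and j of the equivalent real channel
    orthogonal for every channel realisation."""
    return np.max(np.abs(Ai @ Aj.conj().T + Aj @ Ai.conj().T)) < tol

def decoding_groups(weights):
    """Union-find grouping: symbols i and j land in the same group iff they
    are linked by a chain of pairs that fail the HR-orthogonality test."""
    n = len(weights)
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i, j in combinations(range(n), 2):
        if not hr_orthogonal(weights[i], weights[j]):
            parent[find(i)] = find(j)
    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

# With the Alamouti weight matrices of the previous sketch every pair is HR
# orthogonal, so each real symbol sits in its own group:
# decoding_groups(A)  ->  [[0], [1], [2], [3]]
```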
an stbc is said to be block orthogonal if the @xmath99 matrix of the code has the following structure : @xmath100,\ ] ] where each @xmath101 is a block diagonal , upper triangular matrix with @xmath102 blocks @xmath103 , each of size @xmath104 and @xmath105 are non - zero matrices . the low decoding complexity codes described in section [ sec2 ] utilize the zero entries in the upper triangular matrix @xmath99 , in the breadth first or depth first search decoders such as the sphere decoder or the qrdm decoder to achieve decoding complexity reduction . the fast sphere decoding complexity @xcite of an stbc is governed by the zeros in the upper left block of the @xmath99 matrix and does not exploit the zeros in the lower right blocks . the zeros in the lower right block can be used to reduce the average decoding complexity of the code where the average decoding complexity refers to the average number of floating operations performed by the decoder . the zeros in the lower right block are also utilized in some non ml decoders such as the qrdm decoder @xcite or the modified sphere decoder @xcite to reduce the decoding complexity of the code . the structure of block orthogonal matrix was defined in . in general , the size of block diagonal matrices , @xmath106 s , and the upper triangular blocks in these matrices can be arbitrary . similar to @xcite , we consider only the case that @xmath106s have the same size , @xmath107 , and the upper triangular blocks in @xmath106s each have the same size @xmath108 . hence , a block orthogonal code can be represented by the parameters @xmath109 : * @xmath110 : the number of matrices @xmath106 in @xmath99 ; * @xmath102 : the number of blocks in the block diagonal matrix @xmath106 - denoted by @xmath111 , @xmath112 ; * @xmath113 : the number of diagonal entries in the matrices @xmath111 . a set of sufficient conditions for an stbc to be a bostbc with the parameters @xmath114 are described below : first a condition for the stbc to be block orthogonal with parameters @xmath115 is given . the case for @xmath116 will be given subsequently . [ bostc_lemma1 ] @xcite consider an stbc of size @xmath117 with weight matrices @xmath118 , @xmath119 . let @xmath120 , ~~ \mathcal{b}_{i } = \left[\begin{array}{cc } \textbf{b}_{i}^{r } & -\textbf{b}_{i}^{i}\\ \textbf{b}_{i}^{i } & \textbf{b}_{i}^{r}\\ \end{array}\right]\ ] ] and @xmath121 _ { 2 t \times 2n_{t}}$ ] , @xmath122 _ { 2 t \times 2n_{t}}$ ] , @xmath123 , @xmath124 and @xmath125 . this stbc has block orthogonal structure @xmath115 if the following conditions are satisfied : * @xmath126 is of dimension @xmath127 . * @xmath128 and @xmath129 for @xmath123 . * @xmath130 and @xmath131 for @xmath132 and @xmath72 . * @xmath133 for @xmath132 and @xmath72 where @xmath134 and each element ( tuple ) of @xmath135 includes four uniquely permuted scalars drawn from @xmath136 . the set of conditions for an stbc to have a block orthogonal structure with parameters @xmath137 is now given . [ bostc_lemma2 ] @xcite let the @xmath99 matrix of an stbc with weight matrices @xmath138 , @xmath139 be @xmath140,\ ] ] where @xmath141 is a @xmath89 block - orthogonal matrix with parameters @xmath142 , @xmath143 is an @xmath144 matrix and @xmath145 is a @xmath107 upper triangular matrix . the stbc will be a block orthogonal stbc with parameters @xmath146 if the following conditions are satisfied : * the matrices @xmath139 are hurwitz - radon orthogonal . * the matrix @xmath143 is para - unitary , i.e. , @xmath147 . 
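The definition above fixes the zero pattern of R through the parameters (Gamma, k, gamma). A small numerical checker for that pattern is sketched below; it is a hypothetical helper that tests only the required zeros of a computed R, not the full sufficient conditions of the lemmas (in particular it does not verify that the off-diagonal coupling blocks are non-zero or para-unitary).

```python
import numpy as np

def is_block_orthogonal(R, Gamma, k, gamma, tol=1e-10):
    """Check the (Gamma, k, gamma) zero pattern of an upper-triangular R:
    each of the Gamma diagonal blocks (size k*gamma) must itself be block
    diagonal with k upper-triangular sub-blocks of size gamma."""
    n = Gamma * k * gamma
    if R.shape != (n, n):
        return False
    for g in range(Gamma):
        D = R[g * k * gamma:(g + 1) * k * gamma,
              g * k * gamma:(g + 1) * k * gamma]
        for a in range(k):
            for b in range(k):
                if a == b:
                    continue
                blk = D[a * gamma:(a + 1) * gamma, b * gamma:(b + 1) * gamma]
                if np.max(np.abs(blk)) > tol:
                    return False      # off-diagonal sub-block must vanish
    return True

# The diagonal R of the Alamouti sketch is (trivially) block orthogonal with
# Gamma = 1, k = 4, gamma = 1; for a (2, 4, 1) code such as the silver / BHV
# codes one would call is_block_orthogonal(R, 2, 4, 1).
```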
the authors in @xcite only discuss the conditions for the block orthogonal codes with parameters @xmath146 . these conditions can be easily derived for bostbcs with parameters @xmath109 as well . we first derive the conditions for @xmath148 . [ bostc_lemma3 ] consider an stbc of size @xmath149 with weight matrices @xmath150 , @xmath151 . let the @xmath59 matrix for this stbc be of the form @xmath140,\ ] ] where @xmath141 and @xmath152 are @xmath153 upper triangular matrices , @xmath143 is an @xmath153 matrix . the stbc will have a block orthogonal structure with parameters @xmath154 if the following conditions are satisfied : * the matrices @xmath155 are @xmath102-group decodable with @xmath113 variables in each group , i.e. , @xmath155 can be partitioned into @xmath102 sets @xmath156 , each of cardinality @xmath113 such that @xmath157 for all @xmath158 , @xmath159 , @xmath160 . * the matrices @xmath161 are @xmath102-group decodable with @xmath113 variables in each group , i.e. , @xmath161 can be partitioned into @xmath102 sets @xmath156 , each of cardinality @xmath113 such that @xmath162 for all @xmath163 , @xmath164 , @xmath160 . * the set of matrices @xmath165 are such that the @xmath59 matrix obtained has full rank . * the matrix @xmath166 is a block diagonal matrix with @xmath102 blocks of size @xmath108 . proof is given in appendix [ proof_bostc_lemma3 ] . [ bostc_lemma4 ] let the @xmath99 matrix of an stbc with weight matrices @xmath138 , @xmath161 be @xmath140,\ ] ] where @xmath141 is a @xmath89 block - orthogonal matrix with parameters @xmath167 , @xmath143 is an @xmath168 matrix and @xmath145 is a @xmath153 upper triangular matrix . the stbc will be a block orthogonal stbc with parameters @xmath169 if the following conditions are satisfied : * the matrices @xmath161 are @xmath102-group decodable with @xmath113 variables in each group , i.e. , @xmath161 can be partitioned into @xmath102 sets @xmath156 , each of cardinality @xmath113 such that @xmath162 for all @xmath163 , @xmath164 , @xmath160 . * the set of matrices @xmath170 are such that the @xmath59 matrix obtained has full rank . * the matrix @xmath166 is a block diagonal matrix with @xmath102 blocks of size @xmath108 . proof is given in appendix [ proof_bostc_lemma4 ] . we now show that the block orthogonality property depends on the ordering of the weight matrices or equivalently the ordering of the variables . if we do not choose the right ordering , we will be unable to get the desired structure . [ golden_code ] let us consider the golden code @xcite given by : @xmath171,\ ] ] where @xmath172 , @xmath173 , @xmath174 , @xmath175 and @xmath176 for @xmath177 . if we order the variables ( and hence the weight matrices ) as @xmath178 $ ] , then the @xmath59 matrix for sd has the following structure @xmath179,\ ] ] where @xmath180 denotes non zero entries . this ordering of variables has presented a @xmath181 block orthogonal structure to the @xmath99 matrix . now , if we change the ordering to @xmath182 $ ] , then the @xmath59 matrix for sd has the following structure @xmath183,\ ] ] where @xmath180 denotes non zero entries . this ordering of variables has presented a @xmath184 block orthogonal structure to the @xmath99 matrix . we can also have an ordering which can leave the @xmath99 matrix bereft of any block orthogonal structure such as @xmath185 $ ] . 
the structure of the @xmath99 matrix in this case will be @xmath186,\ ] ] also note that we have many entries @xmath187 even when the @xmath188-th and the @xmath189-th weight matrices are hr orthogonal such as for cases @xmath190 and @xmath191 etc . code constructions for block orthogonal stbcs with various parameters were presented in @xcite . it was shown via simulations that these constructions were indeed block orthogonal with the aforementioned parameters . we provide analytical proofs for the block orthogonal structure of some of these constructions which include also other well known codes such as the bhv code @xcite , the silver code @xcite and the srinath - rajan code @xcite . we first study some basics of cuwds and ciods . @xcite linear stbcs can be broadly classified as unitary weight designs ( uwds ) and non unitary weight designs ( nuwds ) . a uwd is one for which all the weight matrices are unitary and nuwds are defined as those which are not uwds . clifford unitary weight designs ( cuwds ) are a proper subclass of uwds whose weight matrices satisfy certain sufficient conditions for @xmath66-group ml decodability . to state those sufficient conditions , let us list down the weight matrices of a cuwd in the form of an array as shown in table [ cuwd_table ] . .structure of cuwds [ cols="^,^,^,^ " , ] [ cuwd_table ] all the weight matrices in one column belong to one group . the weight matrices of cuwds satisfy the following sufficient conditions for @xmath66-group ml decodability . * @xmath192 . * all the matrices in the first row except @xmath193 should square to @xmath194 and should pair - wise anti - commute among themselves . * the unitary matrix in the @xmath188-th row and the @xmath189-th column is equal to @xmath195 . the cuwd matrix representation for these matrices for a system with @xmath196 transmit antennas are given below @xcite . let @xmath197 , ~~ \sigma_{2 } = \left[\begin{array}{cc } 0 & j\\ j & 0\\ \end{array}\right ] , ~~ \sigma_{3 } = \left[\begin{array}{cc } 1 & 0\\ 0 & -1\\ \end{array}\right].\ ] ] the representations of the clifford generators are given by : @xmath198 @xmath199 @xmath200 @xmath201 where @xmath202 . the weight matrices of the cuwd for a rate-1 , four group decodable stbc can be derived as follows . let @xmath203 for @xmath204 . let @xmath205 . the weight matrices are now given by @xmath206 for @xmath207 , @xmath208 and where @xmath209 is the binary representation of @xmath210 . coordinate interleaved orthogonal designs ( ciods ) were introduced in @xcite . [ ciod_def ] a ciod for a system with @xmath196 transmit antennas in variables @xmath48 , @xmath211 , @xmath212 even , is a @xmath213 matrix @xmath214 , such that @xmath215,\ ] ] where @xmath216 and @xmath217 are complex orthogonal designs of size @xmath218 and @xmath219 . we now show that stbcs obtained as a sum of rate-1 , four group decodable cuwds exhibit the block orthogonal structure with parameters @xmath220 . [ bostc_cuwd_const ] _ construction i : _ let @xmath221 be a rate-1 , four group decodable stbc obtained from cuwd @xcite with weight matrices @xmath222 . let @xmath223 be an @xmath224 matrix such that the set of weight matrices @xmath225 yield a full rank @xmath59 matrix . then the stbc given by @xmath226 will exhibit a block orthogonal structure with parameters @xmath227 . proof is given in appendix [ app_r_mat_struct_const_1 ] . 
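The CUWD construction above hinges on a family of unitary matrices that square to a fixed matrix (minus the identity, in the usual CUWD construction) and pairwise anticommute. The sketch below verifies these two properties numerically for the 2 x 2 generators j*sigma_k, built here from the conventional Pauli matrices, and for one standard Kronecker-product extension to four antennas; the particular tensor-product family is an illustrative choice and not necessarily the exact indexing hidden in the paper's placeholder formulas.

```python
import numpy as np
from itertools import combinations

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)   # conventional Pauli sigma_2
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def check_clifford_set(mats, tol=1e-12):
    """Verify the two requirements quoted above: every matrix squares to -I
    and all pairs anticommute."""
    ok_square = all(np.allclose(M @ M, -np.eye(len(M)), atol=tol) for M in mats)
    ok_anti = all(np.allclose(P @ Q + Q @ P, 0, atol=tol)
                  for P, Q in combinations(mats, 2))
    return ok_square and ok_anti

# Two transmit antennas: the generators j * sigma_k.
gen2 = [1j * s1, 1j * s2, 1j * s3]
# Four transmit antennas: one standard anticommuting family built from
# Kronecker products (illustrative choice).
gen4 = [1j * np.kron(s1, I2), 1j * np.kron(s2, I2),
        1j * np.kron(s3, s1), 1j * np.kron(s3, s2), 1j * np.kron(s3, s3)]

print(check_clifford_set(gen2), check_clifford_set(gen4))   # True True
```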
[ bostc_cuwd_ex ] let us consider the bhv code given by : @xmath228 where @xmath229 and @xmath229 take the alamouti structure , and @xmath230,$ ] @xmath231 $ ] and @xmath232^{t } = \textbf{u}\left [ s_{3 } , s_{4}\right]^{t},$ ] where @xmath233 is a unitary matrix chosen to maximize the minimum determinant . in this case , as per the above construction , @xmath234 . hence , the bhv code is a bostbc with parameters @xmath235 . in this section , we show the block orthogonality property of two constructions from either cyclic division algebras or crossed product algebras over the field @xmath236 . [ bostc_const_2 ] _ construction ii : _ let @xmath23 be an stbc with weight matrices @xmath237 and @xmath238 for the variables @xmath239 $ ] and @xmath240 $ ] respectively such that @xmath241 for @xmath242 . let the weight matrices be chosen such that the @xmath59 matrix has full rank . then the code @xmath23 exhibits the block orthogonal property with parameters @xmath243 if we take the ordering of weight matrices as @xmath244 . proof is given in appendix [ app_r_mat_struct_const_2 ] . [ bostc_cda_ex ] consider any stbc obtained from the cyclic division algebra ( cda ) @xcite over the base field @xmath236 . the structure of such an stbc will be @xmath245,\ ] ] where @xmath246 . the weight matrices of this stbc satisfy the properties of the construction above . hence , this is a bostbc with parameters @xmath247 . the next construction is a special case of the previous construction . [ bostc_const_3 ] _ construction iii : _ let @xmath229 be a two group decodable stbc with weight matrices @xmath237 and @xmath238 for the variables @xmath248 $ ] and @xmath249 $ ] respectively such that @xmath241 for @xmath250 . let @xmath223 be a matrix such that the set of weight matrices @xmath251 yield a full rank @xmath59 matrix . then the stbc given by @xmath252 will exhibit a block orthogonal structure with parameters @xmath253 . proof is given in appendix [ app_r_mat_struct_const_3 ] . [ bostc_cda_2g_ex ] consider the golden code as given in example [ golden_code ] . if we consider , @xmath254,\ ] ] and @xmath223 as @xmath255,\ ] ] we can see that the golden code is a bostbc with parameters @xmath256 . in this section we show that the bostbcs that can be obtained from ciods @xcite . [ bostc_ciod_const ] _ construction iv : _ let @xmath257 be a rate-1 ciod with weight matrices @xmath258 . let @xmath223 be a matrix such that the set of weight matrices @xmath259 yield a full rank @xmath59 matrix . then the stbc given by @xmath260 will exhibit a block orthogonal structure with parameters @xmath261 . proof is given in appendix [ app_r_mat_struct_const_4 ] . [ bostc_ciod_ex ] consider the @xmath262 code constructed by srinath et al . in @xcite given by @xmath263,\ ] ] if we consider , @xmath264,\ ] ] and @xmath223 as @xmath265,\ ] ] we see that the code is a bostbc with parameters @xmath256 . in this section we describe how we can achieve decoding complexity reduction for bostbcs . also we show how the block orthogonal structure helps in the reduction of the euclidean metric ( em ) calculations and the sorting operations for a sphere decoder using a depth first search algorithm . we also briefly present the implications of the block orthogonal structure for qrdm decoders as discussed in @xcite . the sphere decoder under consideration in this section will be the depth first search algorithm based decoder with schnorr - euchner enumeration and pruning as discussed in @xcite . 
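For concreteness, a compact version of the depth-first sphere decoder with Schnorr-Euchner enumeration and pruning referred to above is sketched below. It is a textbook implementation over a real PAM alphabet, without the memoization discussed next, and is not the authors' code.

```python
import numpy as np

def sphere_decode(R, z, alphabet):
    """Depth-first sphere decoder with Schnorr-Euchner enumeration.
    Minimises ||z - R x||^2 over x in alphabet^n, with R upper triangular
    and z = Q^T y from the QR step shown earlier."""
    n = R.shape[0]
    alphabet = np.asarray(alphabet, dtype=float)
    best_x, best_cost = None, np.inf
    x = np.zeros(n)

    def search(level, partial_cost):
        nonlocal best_x, best_cost
        if level < 0:                              # full candidate reached
            best_x, best_cost = x.copy(), partial_cost
            return
        # interference-cancelled target for this level
        u = z[level] - R[level, level + 1:] @ x[level + 1:]
        centre = u / R[level, level]
        # Schnorr-Euchner: visit the symbols closest to the centre first
        for s in alphabet[np.argsort(np.abs(alphabet - centre))]:
            inc = (u - R[level, level] * s) ** 2
            if partial_cost + inc >= best_cost:
                break                              # remaining symbols only worse
            x[level] = s
            search(level - 1, partial_cost + inc)

    search(n - 1, 0.0)
    return best_x, best_cost

# e.g. with the QR factors of the earlier Alamouti sketch:
# sphere_decode(R, Q.T @ y, [-3, -1, 1, 3])
```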
we first consider the case of @xmath266 block orthogonal code . consider a bostbc with parameters @xmath267 . the structure of the @xmath99 matrix for this code is as mentioned in with two blocks @xmath268 and @xmath269 . this code is fast sphere decodable , i.e. , for a given set of values of variables in sub - blocks @xmath270 , @xmath271 , we can decode the variables in @xmath272 and @xmath273 , @xmath274 , independently . the ml decoding complexity of this code will be @xmath275 . due to the structure of the block orthogonal code , we can see that the variables in the blocks @xmath270 and @xmath276 , @xmath274 , are also independent in the sense that the em calculations and the schnorr - euchner enumeration based sorting operations for the variables in @xmath270 are independent of the values taken by the variables in @xmath276 . we illustrate this point with an example . [ bostbc_em_ind_ex ] consider a hypothetical bostbc having the parameters @xmath277 with variables @xmath278 . the @xmath59 matrix for this bostbc will be of the form @xmath279\ ] ] the first two levels of the search tree for the sphere decoder are shown in in figure [ fig : bostbc_em_ind ] with the variables assumed to be taking values from a 2-pam constellation - a. as it can be seen from the figure , irrespective of the value taken by @xmath280 , the edge weights ( euclidean metrics ) for the variable @xmath281 remain the same . from example [ bostbc_em_ind_ex ] we can see that instead of calculating the em repeatedly , we can store these values in a look up table when they are calculated for the first time and retrieve them whenever needed . this technique of avoiding repeated calculations by storing the previously calculated values is known as _ memoization _ @xcite . this approach reduces the number of floating point operations ( flops ) significantly . consider a bostbc with parameters @xmath282 . the structure of the @xmath99 matrix for this code is as mentioned in . consider the block @xmath283 , @xmath284 of the @xmath59 matrix . for a given set of values for the variables in the blocks @xmath285 , @xmath286 , we can see that the variables in the blocks @xmath287 and @xmath288 , @xmath274 , are independent as seen in the case of @xmath148 . hence , we can use memoization here as well in order to reduce the number of em calculations and sorting operations . we calculate the maximum possible reduction in the number of em values calculated and the memory requirements for the look up tables in this section . first we consider the case of @xmath148 . considering a @xmath289 bostbc , we first calculate the memory requirements for storing the em values . let each of the variables of the stbc take values from a constellation of size @xmath290 . the number of em values that need to be stored for a single sub - block @xmath270 , @xmath291 , is @xmath292 these values will need to be stored for @xmath293 such sub - blocks . the total memory requirement for the block @xmath269 is , @xmath294 we now find the maximum number of reductions possible for the em calculations for this bostbc . this will occur when all the nodes are visited in the depth first search . for the block @xmath269 , the number of em calculations for a code without the block orthogonal structure would be @xmath295 for a bostbc , if we use the look up table , we would be performing the em calculations only once per each of the sub - block . 
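A minimal version of the look-up table just described: because the last k*gamma rows of a block orthogonal R only couple symbols within the same sub-block, the per-sub-block metrics can be computed once and reused at every revisit of those tree levels. The sketch below caches only the complete sub-block metrics (the counting in the text also includes the intermediate per-level metrics), so it is a simplified variant with hypothetical helper names.

```python
import numpy as np
from itertools import product

def subblock_metric_tables(R, z, alphabet, k, gamma):
    """Pre-compute, once, the Euclidean metrics of the last (block-diagonal)
    k*gamma levels of R.  Since these metrics do not depend on the symbols of
    the other sub-blocks, a depth-first search can read them from the tables
    instead of recomputing them at every visit (the memoization above).
    Returns one dict per sub-block, counted from the bottom of R:
    candidate tuple -> accumulated metric."""
    n = R.shape[0]
    tables = []
    for b in range(k):
        hi = n - b * gamma                 # rows [lo, hi) form sub-block b
        lo = hi - gamma
        Rb, zb = R[lo:hi, lo:hi], z[lo:hi]
        table = {}
        for cand in product(alphabet, repeat=gamma):
            v = np.asarray(cand, dtype=float)
            table[cand] = float(np.sum((zb - Rb @ v) ** 2))
        tables.append(table)
    return tables
```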
for @xmath102 sub - blocks , the number of em calculations will be @xmath296 we therefore perform only a small percentage of em calculations if the code exhibits a block orthogonal structure . we call the ratio of the the number of em calculated for a bostbc to the number of em calculated if the stbc did not possess a block orthogonal structure as euclidean metric reduction ratio ( emrr ) given by @xmath297 which is a decreasing function of @xmath102 , @xmath290 and @xmath298 . considering a @xmath300 bostbc , we first calculate the memory requirements for storing the em values . the memory requirement per sub - block @xmath287 , @xmath291 , of any block @xmath301 , @xmath302 , under consideration is the same as that of the case of the sub - block @xmath270 in the @xmath148 case . this is so because , for a given set of values for the variables in the blocks @xmath285 , @xmath303 , the memory requirement for the sub - block @xmath287 can be calculated in the similar way as it was calculated for @xmath270 for the @xmath148 case . hence , the memory requirements for a block @xmath301 for a given set of values for the variables in the blocks @xmath285 is the same as that of @xmath269 in the @xmath148 case . @xmath304 we can reuse the same memory for another set of given values of the variables of @xmath285 , as the previous em values will not be retrieved again as the depth first search algorithm does not revisit any of the previously visited nodes ( i.e. , any previously given set of values for the variables in the tree ) . hence , we can write , @xmath305 for @xmath306 . since there are @xmath307 such blocks , the total memory requirement for storing the em values will be @xmath308 we now find the maximum number of reductions possible for the em calculations for this bostbc . this will occur when all the nodes are visited in the depth first search . for blocks other than @xmath268 , the number of em calculations for a code without the block orthogonal structure would be @xmath309 for a bostbc , if we consider the block @xmath301 and for a given set of values for the variables in @xmath285 , @xmath310 , if we use the look up table , we would be performing the em calculations only once per each of the sub - block . for @xmath102 sub - blocks , the number of em calculations will be @xmath311 these calculations need to be repeated for all the @xmath312 values of the variables in @xmath285 . @xmath313 the em calculations for all the blocks is given by @xmath314 the emrr in this case will be @xmath315 we can see that the ratio of the reduction of operations is independent of @xmath316 and dependent only on @xmath102 and @xmath298 . in this section we review the simplified qrdm decoding method which exploits the block orthogonal structure of a code as presented in @xcite . the traditional qrdm decoder is a breadth first search decoder in which @xmath317 surviving paths with the smallest euclidean metrics are picked at each stage and the rest of the paths are discarded . if @xmath318 for a block orthogonal code with parameters @xmath282 , then the qrdm decoder gives ml performance . the simplified qrdm decoder utilizes the block orthogonal structure of the code to find virtual paths between nodes , which reduces the number of surviving paths to effectively @xmath319 , to reduce the number of euclidean metric calculations . for details of how this is achieved , refer to @xcite . 
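The QRDM decoder reviewed above is a breadth-first K-best (M-algorithm) search. A generic sketch is given below; it omits the virtual-path refinement of the cited work, which is what actually exploits the block orthogonal structure.

```python
import numpy as np

def qrdm_decode(R, z, alphabet, M_c):
    """Generic breadth-first K-best (QRDM-style) detector: at each level keep
    the M_c partial candidates with the smallest accumulated metrics.  With a
    sufficiently large M_c (so that nothing is ever discarded) the search is
    exhaustive and therefore ML."""
    n = R.shape[0]
    survivors = [(0.0, [])]                # (metric, symbols chosen bottom-up)
    for level in range(n - 1, -1, -1):
        expanded = []
        for cost, tail in survivors:
            # tail holds the symbols of levels n-1, n-2, ..., level+1
            xt = np.array(tail[::-1], dtype=float)
            u = z[level] - (R[level, level + 1:] @ xt if tail else 0.0)
            for s in alphabet:
                expanded.append((cost + (u - R[level, level] * s) ** 2,
                                 tail + [s]))
        expanded.sort(key=lambda t: t[0])
        survivors = expanded[:M_c]
    best_cost, best_tail = survivors[0]
    return np.array(best_tail[::-1], dtype=float), best_cost
```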
the maximum reduction in decoding complexity bound for a qrdm decoder is given by @xmath320 . in all the simulation scenarios in this section , we consider quasi - static rayleigh flat fading channels and the channel state information ( csi ) is known at the receiver perfectly . any stbc which does not have a block orthogonal property is assumed to be a fast decodable stbc which is conditionally @xmath102 group decodable with @xmath298 symbols per group , but not possessing the block diagonal structure for the blocks @xmath321 . we first plot the emrr for bostbcs with different parameters against the snr . figures [ fig : stbc_241_em ] and [ fig : stbc_222_em ] show the plot of @xmath322 vs snr for a @xmath235 bostbc ( examples - silver code , bhv code ) with the symbols being drawn from 4-qam , 16-qam and 64-qam . we can clearly see the reduction in the emrr with the increasing size of the signal constellation , as explained in section [ cplxity_redn_mem_req ] . it can also be seen that a larger value of @xmath102 gives a lower emrr if we keep the product @xmath323 constant . figure [ fig : stbc_242_em ] shows the plot of @xmath322 vs snr for a @xmath324 bostbc ( examples - @xmath325 code from pavan et al @xcite ) with the symbols being drawn from 4-qam and 16-qam . notice that the @xmath324 bostbc offers a lower emrr as compared to the @xmath326 bostbc due to the higher value of @xmath298 , as explained in section [ cplxity_redn_mem_req ] . we now compare the total number of flops performed by the sphere decoder for a bostbc against that of an stbc without a block orthogonal structure for various snrs . figures [ fig : stbc_241_flops ] , [ fig : stbc_222_flops ] and [ fig : stbc_242_flops ] show the plot of the number of flops vs snr for a @xmath235 bostbc , a @xmath327 bostbc and a @xmath324 bostbc respectively , with the symbols being drawn from 4-qam , 16-qam and 64-qam for the first two figures and from 4-qam and 16-qam for the last one . we can see that the bostbcs offer around 30% reduction in the number of flops for the @xmath235 and @xmath324 bostbcs and around 15% for the @xmath327 bostbc at low snrs . ( figure captions : plots for bostbcs with parameters @xmath326 , @xmath328 and @xmath329 . ) the primary difference between the depth first and the breadth first ( qrdm ) approach is the variation of the emrr with respect to snr . as seen in the figures from section [ ml_simulations ] , the effect of the block orthogonal property reduces as the snr increases in the depth first sphere decoder . this is owing to the schnorr - euchner enumeration and pruning of branches . as the snr increases , the decoder needs to visit fewer nodes in order to find the ml solution and hence the emrr also tends to 1 . however , in the case of a breadth first search algorithm , all the nodes need to be visited in order to arrive at a solution . hence the emrr is independent of the snr in the breadth first search case . to reduce the number of nodes visited , only @xmath317 paths are selected in the qrdm algorithm to reduce complexity . the value of @xmath330 chosen needs to be varied with the snr in order to get near ml performance . in this paper we have studied the block orthogonal property of stbcs . we have shown that this property depends upon the ordering of weight matrices .
we have also provided proofs of various existing codes exhibiting the block orthogonal property . a method of exploiting the block orthogonal structure of the stbcs to reduce the sphere decoding complexity was also given with bounds on the maximum possible reduction . 1 b. hassibi and b. hochwald , high - rate codes that are linear in space and time , ieee trans . inf . theory , vol . 48 , no . 7 , pp . 1804 - 1824 , july 2002 . e. viterbo and j. boutros , a universal lattice code decoder for fading channels , ieee trans . theory , vol . 5 , pp . 1639 - 1642 , july 1999 . t. p. ren , y. l. guan , c. yuen and e. y. zhang , block - orthogonal space time code structure and its impact on qrdm decoding complexity reduction , ieee journal of selected topics in signal processing , vol . 5 , issue 8 , pp . 1438 - 1450 , nov . v. tarokh , h. jafarkhani and a. r. calderbank , space - time block codes from orthogonal designs , ieee trans . theory , vol . 5 , pp . 1456 - 1467 , july 1999 . liang , orthogonal designs with maximal rates , ieee trans . theory , vol.49 , no . 2468 - 2503 , oct . o. tirkkonen and a. hottinen , square - matrix embeddable space - time block codes for complex signal constellations , ieee trans . theory , vol . 2 , pp . 384 - 395 , feb . d. n. dao , c. yuen , c. tellambura , y. l. guan and t. t. tjhung , four - group decodable space - time block codes , ieee trans . signal processing , vol . 424 - 430 , jan . 2008 . s. karmakar and b. s. rajan , multigroup decodable stbcs from clifford algebras , ieee trans . theory , vol . 223 - 231 , jan . 2009 . s. karmakar and b. s. rajan , high - rate , multi - symbol - decodable stbcs from clifford algebras , ieee transactions on inf . theory , vol . . 2682 - 2695 , jun . e. biglieri , y. hong and e. viterbo , on fast - decodable space - time block codes , ieee trans . theory , vol . 2 , pp . 524 - 530 , feb . r. vehkalahti , c. hollanti and f. oggier , fast - decodable asymmetric space - time codes from division algebras , available online at arxiv , arxiv:1010.5644v1 [ cs.it ] . t. p. ren , y. l. guan , c. yuen and r. j. shen , fast - group - decodable space - time block code , proceedings ieee information theory workshop , ( itw 2010 ) , cairo , egypt , jan . 6 - 8 , 2010 , available online at http://www1.i2r.a-star.edu.sg/cyuen/publications.html . m. o. sinnokrot and j. barry , fast maximum - likelihood decoding of the golden code , ieee transactions on wireless commun . , vol . 9 , no . 26 - 31 , jan . 2010 . j. c. belfiore , g. rekaya and e. viterbo , the golden code : a 2x2 full - rate space time code with non - vanishing determinants , ieee trans . theory , vol . 1432 - 1436 , apr . 2005 . s. kahraman and m. e. celebi , dimensionality reduction for the golden code with worst - case complexity of @xmath331 , available online at http://istanbultek.academia.edu/sinankahraman . g. s. rajan and b. s. rajan , multi - group ml decodable collocated and distributed space time block codes , ieee trans . theory , vol . 56 , no . 7 , pp . 3221 - 3247 , july 2010 . z. ali khan md . , and b. s. rajan , single symbol maximum likelihood decodable linear stbcs , ieee trans . theory , vol . 5 , pp . 2062 - 2091 , may 2006 . b. a. sethuraman , b. s. rajan and v. shashidhar , full - diversity , high - rate space - time block codes from division algebras , ieee trans . theory , vol . 2596 - 2616 , oct 2003 . v. shashidhar , b. s. rajan and b. a. 
sethuraman , information - lossless space - time block codes from crossed - product algebras , ieee trans . theory , vol . 9 , pp . 39133935 , sep 2006 . o. damen , a. chkeif , and j.c . belfiore , lattice code decoder for space - time codes , ieee communication letters , vol . 161 - 163 , may 2000 . g. r. jithamithra and b. s. rajan , minimizing the complexity of fast sphere decoding of stbcs , available online at arxiv , arxiv:1004.2844v2 [ cs.it ] , 22 may 2011 . c. hollanti , j. lahtonen , k. ranto , r. vehkalahti and e. viterbo , on the algebraic structure of the silver code : a 2 2 perfect space - time code with non - vanishing determinant , in proc . of ieee inf . theory workshop , porto , portugal , may 2008 . k. p. srinath and b. s. rajan , low ml - decoding complexity , large coding gain , full - rate , full - diversity stbcs for 2x2 and 4x2 mimo systems , ieee journal of selected topics in signal processing : special issue on managing complexity in multiuser mimo systems , vol . 916 - 927 , dec . 2009 . t. h. cormen , c. e. leiserson , r. l. rivest , c. stein , introduction to algorithms , third edition , mit press , sep 2009 . following the system model in section [ sec2 ] , we have the equivalent channel matrix @xmath332 as @xmath333 ~=~ \left [ \textbf{h}_{1 } ~ ... ~ \textbf{h}_{l } ~ \textbf{h}_{l+1 } ~ ... ~ \textbf{h}_{2l}\right ] $ ] . we know from theorem 2 of @xcite that , if any two weight matrices @xmath334 and @xmath335 are hurwitz - radon orthogonal , then the @xmath188-th and the @xmath189-th columns of the @xmath336 matrix are orthogonal . due to the conditions on the weight matrices , we have that @xmath337 and @xmath338 are block diagonal with @xmath102 blocks , each of size @xmath104 . under @xmath339 decomposition , @xmath340 with @xmath341 $ ] with @xmath342 and @xmath343 $ ] as mentioned . it can be seen from lemma 2 of @xcite that the matrix @xmath268 is block diagonal with @xmath102 blocks , each of size @xmath104 . we can now write , @xmath344 @xmath345 simplifying , @xmath346 now , if @xmath347 is block diagonal with @xmath102 blocks of size @xmath104 each @xmath348 is block diagonal with @xmath102 blocks of size @xmath104 each . since @xmath269 is upper triangular and full rank , this means that @xmath269 is block diagonal with @xmath102 blocks of size @xmath104 each . following the system model in section [ sec2 ] , we have the equivalent channel matrix @xmath349 as @xmath333 ~=~ \left [ \textbf{h}_{1 } ~ ... ~ \textbf{h}_{l } ~ \textbf{h}_{l+1 } ~ ... ~ \textbf{h}_{l+l}\right ] $ ] . we know from theorem 2 of @xcite that , if any two weight matrices @xmath350 and @xmath351 are hurwitz - radon orthogonal , then the @xmath188-th and the @xmath189-th columns of the @xmath336 matrix are orthogonal . due to the conditions on the weight matrices , we have that @xmath338 is block diagonal with @xmath102 blocks , each of size @xmath104 . under @xmath339 decomposition , @xmath340 with @xmath341 $ ] with @xmath352 and @xmath353 and @xmath343 $ ] as mentioned . we can now write , @xmath344 @xmath345 simplifying , @xmath346 now , if @xmath347 is block diagonal with @xmath102 blocks of size @xmath104 each @xmath348 is block diagonal with @xmath102 blocks of size @xmath104 each . since @xmath269 is upper triangular and full rank , this means that @xmath269 is block diagonal with @xmath102 blocks of size @xmath104 each . 
according to construction i , the structure of the stbc is @xmath354 where @xmath229 is a rate-1 four group decodable stbcs obtained from cuwds as described in section [ cuwds ] . let the @xmath99 matrix for this code have the following structure : @xmath140,\ ] ] where @xmath268 , @xmath355 and @xmath269 are @xmath356 matrices . from @xcite , it can be easily seen that @xmath357 has a block diagonal structure with four blocks , and each block of the size @xmath358 . @xmath359,\ ] ] where @xmath360 , @xmath361 is a @xmath358 given by . @xmath362,\ ] ] ' '' '' [ r1_struct_prop_const_1 ] the non - zero blocks of the matrix @xmath268 are equal i.e. , @xmath363 , for @xmath364 . it is sufficient for us to prove that @xmath365 and @xmath366 for @xmath364 , @xmath367 and @xmath368 . the proof is by induction . we first consider the case of @xmath369 . we also recall @xcite that @xmath370 now , for we have , @xmath371 since @xmath372 and @xmath373 for @xmath374 . for we have , @xmath375 since @xmath376 . now we prove equations and for arbitrary @xmath189 . we prove this by induction . let the equations hold true for all @xmath377 . we now have for equation , @xmath378 which follows from the induction hypothesis and the fact that @xmath379 for @xmath380 . for equation , @xmath381 \\ % \end{equation * } % \begin{equation * } & = \frac{1}{2\parallel \textbf{r}_{j } \parallel } \left [ tr\left ( \check{\textbf{h } } \check{\textbf{a}}_{j } \check { \textbf{a}}_{k}^{t } \check { \textbf{h}}^{t}\right ) - \sum_{l=1}^{j-1 } \left\langle \textbf{q}_{l } , \textbf{h}_{j}\right\rangle \left\langle \textbf{q}_{l } , \textbf{h}_{k}\right\rangle\right ] \\ % \end{equation * } % \begin{equation * } & = \frac{tr\left ( \check{\textbf{h } } \check{\textbf{a}}_{j } \check{\textbf{a}}_{4\left ( i-1\right ) \lambda + 1 } \check { \textbf{a}}_{4\left ( i-1\right ) \lambda + 1}^{t } \check { \textbf{a}}_{k}^{t } \check { \textbf{h}}^{t}\right)}{2\parallel \textbf{r}_{4\left ( i-1\right ) \lambda + j } \parallel } \\ & \quad - \frac{1}{2\parallel \textbf{r}_{4\left ( i-1\right ) \lambda + j } \parallel}\sum_{l=1}^{j-1 } \left\langle \textbf{q}_{4\left ( i-1\right ) \lambda + l } , \textbf{h}_{4\left ( i-1\right ) \lambda + j}\right\rangle.\\ & \qquad \left\langle \textbf{q}_{4\left ( i-1\right ) \lambda + l } , \textbf{h}_{4\left ( i-1\right ) \lambda + k}\right\rangle % \end{equation * } % \begin{equation*}\end{aligned}\ ] ] @xmath382 the matrix @xmath355 is key for the block orthogonality property of the stbc in question . it is required to be para - unitary for achieving this property . the structure of the matrix @xmath355 for construction i is described in the following proposition . [ e_struct_prop_const_1 ] the matrix @xmath355 is of the form @xmath383,\ ] ] where @xmath384 , @xmath177 are @xmath358 matrices and @xmath385 is a @xmath358 permutation matrix given by @xmath386.\ ] ] let us represent the matrix @xmath355 using @xmath358 blocks as : @xmath387,\ ] ] we first prove that @xmath388 for @xmath364 . the proof is by induction on the rows of the matrix @xmath389 . the first row entries of the matrix @xmath389 are given by @xmath390 and for the matrix @xmath391 are given by @xmath392 due to the construction of the stbc , we have @xmath393 , for @xmath394 . using this , we get @xmath395 now , let us assume that row @xmath396 of @xmath391 is equal to the row @xmath396 of @xmath389 for all @xmath397 . 
the @xmath189-th row of @xmath389 is given by @xmath398 and the @xmath189-th row of @xmath391 is given by @xmath399 @xmath400 we now prove that @xmath401 . the proofs for the matrices @xmath402 and @xmath403 are very similar . first step is to prove that @xmath404 . the proof is by induction on the rows of the matrix @xmath405 . the first row entries of the matrix @xmath405 are given by @xmath406 and for the matrix @xmath407 are given by @xmath408 due to the construction of the stbc , we have @xmath393 , for @xmath394 . using this , we get @xmath409 now , let us assume that row @xmath396 of @xmath405 is equal to the row @xmath396 of @xmath407 for all @xmath397 . the @xmath189-th row of @xmath405 is given by @xmath410 and the @xmath189-th row of @xmath407 is given by @xmath411 @xmath412 we now prove that @xmath413 . the proof is by induction on the rows of the matrix @xmath407 . the first row entries of the matrix @xmath407 are given by @xmath408 due to the construction of the stbc , we have @xmath393 , for @xmath394 . using this , we get @xmath414 we need to show that this is equal to @xmath415 . @xmath416 substituting the values of the weight matrices from for @xmath417 , @xmath418 and @xmath419 , and simplifying , we see that it is sufficient to show that @xmath420 or equivalently , @xmath421 since @xmath422 and @xmath102 are one s complement of each other in the binary representation , we have , @xmath423 therefore we have , @xmath424 the equality for @xmath402 and @xmath403 can be shown similarly . [ r2_struct_prop_const_1 ] the matrix @xmath269 is block diagonal with @xmath425 blocks , each of size @xmath358 . for the matrix @xmath269 to be block diagonal with @xmath425 blocks , each of size @xmath358 , we need to satisfy the following conditions * the matrices @xmath426 form a four group decodable stbc with @xmath427 variables per group * the matrix @xmath355 is such that @xmath347 is block diagonal with @xmath425 blocks , each of size @xmath358 . since the matrices @xmath428 form a four group decodable stbc with @xmath427 variables per group , it is easily seen that the matrices @xmath426 also form a four group decodable stbc with @xmath427 variables per group as @xmath429 \textbf{m}^{h } = \textbf{0}$ ] for @xmath188 and @xmath189 in different groups . we now introduce some notation before we address the structure of the matrix @xmath430 . let @xmath396 be an integer such that @xmath431 . we denote by @xmath432 , the binary representation of @xmath433 using @xmath434 bits . let @xmath435 denote the bitwise xor operation between any two binary numbers . now , we turn to the structure of the matrix @xmath355 . from proposition [ e_struct_prop_const_1 ] , we know the structure of the matrix @xmath355 . computing @xmath347 , we see that for it to be block diagonal with @xmath425 blocks , each of size @xmath358 , it is sufficient to show that the matrices @xmath436 are symmetric with identical entries on the diagonal for @xmath437 . the entries of @xmath436 are given by @xmath438 expanding and simplifying , we get @xmath439 where @xmath440 @xmath441 and @xmath442 is given by @xmath443 for @xmath444 and @xmath445 . we now see that for every @xmath396 , there exists a unique @xmath446 such that @xmath447 as @xmath448 \check { \textbf{a}}_{\lambda + l}^{t}\check { \textbf{m}}^{t } \check { \textbf{h}}^{t}\right)\\ % \end{equation * } % \begin{equation * } & = \left\langle \textbf{h}_{m^ { ' } } , \textbf{h}_{4 \lambda + l } \right\rangle,\end{aligned}\ ] ] where @xmath449 . 
similarly , for every @xmath450 , there exists a unique @xmath451 such that @xmath452 where @xmath453 . we can now write , @xmath454 if @xmath455 . let @xmath456 . @xmath457 is given by , @xmath458 . therefore , we can see that @xmath459 is symmetric . using the above arguments , it is also easly seen that the diagonal elements of the matrix @xmath459 are identical . hence , we have shown that the matrix @xmath269 is block diagonal with @xmath425 blocks , each of size @xmath358 . the stbc @xmath461 can be written as @xmath462 where @xmath463 . tweaking the system model in section [ sec2 ] , we can get a generator matrix for this stbc as @xmath464.\ ] ] hence , can be written as @xmath465 where @xmath466 is given by @xmath467 and @xmath43,$ ] with each @xmath48 drawn from a 2-dimensional constellation . it can be easily seen that @xmath468 . let the qr decomposition of the complex matrix @xmath469 yield matrices @xmath470 and @xmath471 . using the relation : if @xmath472 , then @xmath473 , we can see that @xmath474 . the qr decomposition of a complex matrix yields a unitary @xmath58 matrix and an upper triangular matrix @xmath59 with real diagonal entries . hence , the diagonal entries of the matrix @xmath471 are real . since @xmath474 , we ll have @xmath475 for @xmath476 . hence , the stbc @xmath461 exhibits a block orthogonal property with parameters @xmath477 . let the @xmath99 matrix for this code have the following structure : @xmath140,\ ] ] where @xmath268 , @xmath355 and @xmath269 are @xmath478 matrices . from @xcite , it can be easily seen that @xmath268 has a block diagonal structure with two blocks , and each block of the size @xmath479 . @xmath480,\ ] ] where @xmath481 and @xmath482 are @xmath479 upper triangular matrices . [ r1_struct_prop_const_3 ] the non - zero blocks of the matrix @xmath268 are equal i.e. , @xmath483 . proof is similar to the proof of proposition [ r1_struct_prop_const_1 ] . the structure of the matrix @xmath355 is described in the following proposition . [ e_struct_prop_const_3 ] the matrix @xmath355 is of the form @xmath484,\ ] ] where @xmath384 , @xmath177 are @xmath479 matrices . proof is similar to the proof of proposition [ e_struct_prop_const_1 ] . [ r2_struct_prop_const_3 ] the matrix @xmath269 is block diagonal with @xmath485 blocks , each of size @xmath479 . proof is similar to the proof of proposition [ r2_struct_prop_const_1 ] . as only rate-1 ciods are considered in this construction , this can only be done for either @xmath486 ciods or @xmath487 ciods . the structure of the @xmath59 matrix obtained from the @xmath262 ciod is the same as the structure of @xmath59 matrix obtained from the construction iii . the proof of the structure is also the same as given in appendix [ app_r_mat_struct_const_4 ] . we now consider the structure of the @xmath59 matrix obtained from using a @xmath487 ciod . let the @xmath99 matrix for this code have the following structure : @xmath140,\ ] ] where @xmath268 , @xmath355 and @xmath269 are @xmath488 matrices . from @xcite , it can be easily seen that @xmath268 has a block diagonal structure with @xmath425 blocks , and each block of the size @xmath486 . @xmath359,\ ] ] where @xmath360 are @xmath486 upper triangular matrices for @xmath489 .
construction of high rate space time block codes ( stbcs ) with low decoding complexity has been studied widely using techniques such as sphere decoding and non maximum - likelihood ( ml ) decoders such as the qr decomposition decoder with m paths ( qrdm decoder ) . recently ren et al . , presented a new class of stbcs known as the block orthogonal stbcs ( bostbcs ) , which could be exploited by the qrdm decoders to achieve significant decoding complexity reduction without performance loss . the block orthogonal property of the codes constructed was however only shown via simulations . in this paper , we give analytical proofs for the block orthogonal structure of various existing codes in literature including the codes constructed in the paper by ren et al . we show that codes formed as the sum of clifford unitary weight designs ( cuwds ) or coordinate interleaved orthogonal designs ( ciods ) exhibit block orthogonal structure . we also provide new construction of block orthogonal codes from cyclic division algebras ( cdas ) and crossed - product algebras ( cpas ) . in addition , we show how the block orthogonal property of the stbcs can be exploited to reduce the decoding complexity of a sphere decoder using a depth first search approach . simulation results of the decoding complexity show a 30% reduction in the number of floating point operations ( flops ) of bostbcs as compared to stbcs without the block orthogonal structure .
studies have demonstrated that although the prevalence of chronic diseases in the elderly population of china is high , resulting in a heavy disease burden , the awareness , treatment and prevention of chronic diseases are generally poor in china . although knowledge concerning the prevention of hypertension has been most extensively popularized , the awareness of this information in economically poor regions and rural areas is < 10% . the control rate of hypertension in rural areas of china is only 2.3% , which is lower than that in rural india ( 8.6% ) . however , the burden of chronic diseases will continue to increase due to the aging of the chinese population . in addition , the high disabling potential of chronic neurological diseases such as dementia will also cause an enormous social and economic burden because of the huge population base of elderly individuals in china . therefore , the prevention and control of chronic diseases in china are very important . in elderly veteran communities , tasks related to the prevention and control of chronic diseases have been performed for more than 30 years , and a complete prevention and control system has been established . thus , is the chronic disease awareness status among elderly veterans higher than that of the elderly in the general population ? can the years of experience in the prevention and control of chronic diseases in elderly veteran communities provide valuable references ? currently , most chinese veterans have entered the advanced aging stage . the huge burden caused by dementia and other chronic disabling neurological diseases ( cdnd ) has gradually attracted our attention , and information and education about disease prevention have been introduced . however , does the current awareness of strategies for the prevention and control of these diseases correspond to their high degree of disablement ? to address this issue , a survey of the popularization and awareness status of chronic disease prevention knowledge was conducted in elderly veteran communities in beijing to summarize the experiences and shortcomings of these individuals regarding their knowledge about the prevention and control of chronic diseases . the survey results are expected to provide a valuable reference for the future prevention and control of chronic diseases among the elderly in the general population . a cross - sectional cluster sampling method was applied to survey veterans living in elderly veteran communities in beijing . the study protocol was reviewed and approved by the ethics committee of chinese pla general hospital , and the investigation was initiated after obtaining informed consent from the subjects or their guardians . veterans were included if they met the following criteria : ( 1 ) at least 60 years old ; ( 2 ) continuous residence in a veteran community in beijing for at least 1 month ; and ( 3 ) a history of working in the military system before retirement . most chinese veterans live in fixed veteran communities composed of stable elderly populations after retirement . the fixed healthcare management system , including community outpatient clinics , provides support and professional healthcare services to these veterans and stores long - term medical records . all of the above characteristics of veterans reduce the rate of missing data and facilitate epidemiological research . the working staff in the veteran communities and the spouses of the retired veterans were not included in this study .
using a unified questionnaire , via face - to - face interviews , the baseline characteristics of the veterans were collected , and the popularization and awareness status of chronic disease prevention knowledge was investigated by qualified geriatric neurology graduate students after standardized training . the survey covered five cdnd , including dementia , alzheimer 's disease ( ad ) , parkinson 's disease ( pd ) , sleep disorder and cerebrovascular disease ( cvd ) , and three common chronic diseases ( ccd ) , including hypertension , diabetes , and coronary heart disease ( chd ) ; cvd was considered as a common chronic neurological disease . the survey investigated the current status of the veterans ' awareness of the disease name , their knowledge about the prevention and treatment of these chronic diseases and the approaches used by the veterans to access this information , including media ( books , newspapers , magazines , radio , and television ) , word of mouth ( verbal communication among the elderly ) , and health care professionals ( hospital seminars and medical staff in the elderly veteran communities ) . epidata 3.1 software ( the epidata association , odense , denmark ) developed by lauritsen jm and bruus m. was used to establish a database , and spss 19.0 ( spss inc . , chicago , il , usa ) was used for the statistical analysis of the data . the differences in the total awareness rates of various chronic diseases among veterans were compared with that of hypertension using the mcnemar paired chi - square test . alternatively , the awareness rates of veterans with or without different chronic diseases were compared to that of hypertension using the chi - square test . the demographic differences in the awareness and the veterans ' approaches to access knowledge about a chronic disease were compared using the chi - square test , the fisher exact test , and the chi - square trend test . differences between groups were considered statistically significant when p < 0.05 . due to the varying number of subjects who responded regarding their awareness of different chronic diseases , the sample sizes of each surveyed item were recorded ; only complete data were analyzed , and any subjects for whom data were missing were omitted .
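the statistical tests named above are standard ; the following minimal sketch ( hypothetical counts , not the study 's data ) shows how each of them is typically run in python with scipy and statsmodels .

import numpy as np
from scipy.stats import chi2_contingency, fisher_exact
from statsmodels.stats.contingency_tables import mcnemar

# 2 x 2 table of hypothetical counts: rows = aware / unaware of a disease,
# columns = two independent groups (e.g. with vs. without the disease).
group_table = np.array([[320, 180],
                        [260, 240]])
chi2, p, dof, expected = chi2_contingency(group_table)
print(f"chi-square test: chi2 = {chi2:.2f}, p = {p:.4f}")

# Fisher's exact test is the usual fallback when expected cell counts are small.
odds_ratio, p_exact = fisher_exact(group_table)
print(f"fisher exact test: OR = {odds_ratio:.2f}, p = {p_exact:.4f}")

# McNemar (paired) test: the same respondents cross-classified by awareness of
# hypertension (rows) and awareness of another chronic disease (columns).
paired_table = np.array([[2900, 450],
                         [  60,  63]])
print(mcnemar(paired_table, exact=False, correction=True))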
a total of 3473 veterans aged ≥60 years living in 44 elderly veteran communities in beijing completed this survey in 2008 ; their average age was 77.97 ± 5.10 years . among them , 30.8% were in the oldest old category ( aged ≥80 years ) , 94.4% were males , and their average education duration was 9.93 ± 4.18 years . the prevalence of hypertension and chd among veterans was higher than 60.0% , which was the highest among ccd ; the prevalence of pd , ad and dementia were low among cdnd , whereas the prevalence of cvd and sleep disorders were relatively high [ table 1 ] . the awareness statuses of the disease name and of strategies for the prevention of cdnd in all surveyed veterans , those with cdnd , and those without cdnd were significantly worse than those for ccd such as hypertension . the awareness rates for hypertension , chd , and diabetes were approximately 100% ; hypertension displayed the highest awareness rate . the awareness rate for cvd was the highest among cdnd and was close to that for hypertension and the other ccd . the awareness rate for ad was the lowest at < 10% , followed by those for sleep disorders , pd , and dementia .
the differences in the awareness rates between the veterans with cdnd and those with hypertension were statistically significant [ table 1 and figure 1 ] . [ table 1 and figure 1 : prevalence rates of chronic diseases and awareness rates of prevention knowledge about chronic diseases among veterans , % ( n / n ) ; p values reflect differences in awareness rates relative to hypertension . ad : alzheimer 's disease ; pd : parkinson 's disease ; cvd : cerebrovascular disease ; chd : coronary heart disease . ] the awareness statuses of the disease name and prevention and treatment knowledge about chronic diseases varied with age , years of education , and gender . except for hypertension and chd , the awareness rates of the name of chronic diseases in the oldest - old group ( aged ≥80 years ) were significantly lower than those in the younger elderly group ( aged 60 - 79 years ) . the awareness rates of the prevention and treatment knowledge about dementia , cvd , and hypertension among the oldest - old group were remarkably lower than those among the younger elderly group . except for hypertension , with increasing years of education , the awareness rate of the name of chronic diseases exhibited an increasing trend . except for ad and pd , the awareness status of the prevention and treatment knowledge about chronic diseases increased significantly with increasing years of education based on the chi - square trend test . except for dementia , the awareness status of the name of chronic diseases among females was significantly higher than that among males . additionally , the awareness statuses of the prevention and treatment knowledge about ad among females were significantly higher than those among males based on the chi - square test [ tables 2 and 3 ] . [ table 2 : awareness rates of chronic disease names among veterans by age , education level and gender . table 3 : awareness rates of chronic disease prevention knowledge among veterans by age , education level and gender . for both tables , the chi - square trend test was used to explore educational differences , and the numbers in parentheses are the numbers of veterans who knew the disease name ( table 2 ) or had prevention knowledge ( table 3 ) divided by the total numbers of veterans . ]
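the chi - square trend test used above for the education gradient ( cochran - armitage ) has no one - line scipy call , so a small self - contained sketch is given below ; the counts are hypothetical and the scores are simply 0 , 1 , 2 for the ordered education strata .

import numpy as np
from scipy.stats import norm

def chi_square_trend(successes, totals, scores=None):
    # Cochran-Armitage test for a linear trend in proportions across ordered
    # groups (e.g. education levels). Returns (chi-square statistic, two-sided p).
    r = np.asarray(successes, dtype=float)   # e.g. veterans aware of a disease
    n = np.asarray(totals, dtype=float)      # group sizes
    t = np.arange(len(n), dtype=float) if scores is None else np.asarray(scores, float)
    N, R = n.sum(), r.sum()
    p_bar = R / N
    T = np.sum(t * (r - n * p_bar))
    var_T = p_bar * (1 - p_bar) * (np.sum(n * t**2) - np.sum(n * t)**2 / N)
    z = T / np.sqrt(var_T)
    return z**2, 2 * norm.sf(abs(z))

# hypothetical awareness counts across three education strata (low / mid / high)
chi2, p = chi_square_trend(successes=[40, 90, 160], totals=[200, 250, 300])
print(f"trend chi2 = {chi2:.2f}, p = {p:.4f}")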
regarding the approaches used to access knowledge about ccd such as hypertension , diabetes , chd and cvd , media was the most frequently selected mode of communication , displaying the highest rate of nearly 80% . the rate of the use of health care professionals was 46.1 - 58.6% , which was similar to or higher than that of word of mouth [ 48.6 - 51.9% ; table 4 ] . compared with ccd such as hypertension , among the approaches used to access knowledge about cdnd , the rates of the use of health care professionals were significantly reduced to 10.6 - 28.2% . excluding ad , the rates of the use of word of mouth for cdnd were 56.5 - 76.5% , which were higher than those for ccd . additionally , the rates of the use of media for cdnd were 46.2 - 56.2% , which were lower than those for ccd [ tables 4 , 5 , and figure 2 ] . [ table 4 : rates of the use of different approaches to access prevention knowledge about chronic disabling neurological diseases among veterans , by age , education level and gender . table 5 : rates of the use of different approaches to access prevention knowledge about common chronic diseases among veterans , by age , education level and gender ; for both tables , the chi - square trend test was used to explore educational differences , and the numbers in parentheses are the numbers of veterans who used the indicated approach divided by the total numbers of veterans . figure 2 : rates of the use of different approaches to access prevention knowledge about chronic diseases among veterans . ad : alzheimer 's disease ; pd : parkinson 's disease ; cvd : cerebrovascular disease ; chd : coronary heart disease . ] the approaches used to access knowledge about most chronic diseases were not significantly affected by age , years of education or gender [ tables 4 and 5 ] . except for ad , pd , and sleep disorders , the proportions of the oldest - old group that obtained knowledge about chronic diseases using media were lower than those of the younger elderly group ; the proportion of the oldest - old group that obtained prevention and treatment knowledge about dementia through word of mouth was significantly higher than that of the younger elderly group . the approaches used to access prevention and treatment knowledge about dementia , pd , cvd , diabetes , chd , and hypertension were affected by the number of years of education . the proportions of females who accessed prevention and treatment information about dementia , ad , pd , and sleep disorders using health care professionals were significantly higher than those of males [ tables 4 and 5 ] .
this study demonstrated that the chronic disease awareness status among elderly veterans with or without chronic diseases was relatively good , but the awareness status of strategies for the prevention of cdnd was far lower than that of ccd . the health care professionals played a limited role in the popularization of knowledge about cdnd . the propagation of knowledge about cdnd greatly relied on nonmedical professional media and peer education by word of mouth ; thus , the accuracy and guiding significance of the information obtained are not assured . compared with ccd , the awareness status of cdnd , particularly dementia , among elderly veterans was significantly lower . although the prevalence of cdnd was lower than that of ccd , cdnd are the primary chronic disabling diseases in the elderly population , resulting in the heaviest disease burden , including huge costs . however , regardless of the income level of a country ( high or middle income ) , public awareness of dementia is generally poor , and this limited understanding delays the early diagnosis and treatment of patients . although the prevention and control of chronic diseases in elderly veteran communities are implemented earlier than in the general population , the poor awareness status of cdnd has maintained a huge gap with respect to its heavy social and economic burden . therefore , extensive popularization should be implemented to raise awareness of these diseases , thus contributing to the efficient treatment of cdnd and reducing the burden of these diseases . regarding the approaches used to access information about the prevention of cdnd , the rates of the use of health care professionals were only 10.6 - 28.2% , whereas those for ccd were approximately 50% . compared with ccd , the role of health care professionals in the dissemination of information about cdnd was significantly lacking , and this weakness should be addressed . according to a report from the world health organization ( who ) , regardless of the economic development level of a country , the awareness of dementia is generally lacking not only among the public but also among health care and social service providers . some physicians even look down on dementia patients and their families , increasing the patients ' sense of shame and hindering the efficient treatment of patients . therefore , the who advocates that countries worldwide should learn from the experience of national dementia programs such as those of japan and the uk to improve the awareness status of dementia among health care workers and the public .
the health care professionals in geriatric neurology should provide professional training to medical personnel working in veteran communities and the general population to improve their capability of preventing and controlling cdnd . although the chinese government has strengthened chronic disease prevention programs , the awareness rates of ccd were generally low . the awareness rate of hypertension was 70 - 80% in beijing and shanghai , whereas that in low economic status regions and rural areas was lower than 10% , and the awareness rates for chd and cvd among those of low economic status were < 1% . the awareness status for cdnd was worse than that for hypertension regardless of the economic development level . the awareness rates of the early symptoms and nursing knowledge of dementia were only approximately 15% and 5% , respectively . the awareness rates of prevention knowledge about ccd , dementia and cdnd among elderly veterans were significantly higher than those among the general elderly population . notably , the veterans without chronic diseases also exhibited a higher awareness status of chronic diseases , which facilitates the efficient diagnosis and effective treatment of chronic diseases . since veteran communities were established more than 30 years ago , the prevention and control of ccd and cdnd have been the focus of health care for veterans of advanced age . this survey also documented the excellent accomplishments regarding the prevention and control of chronic diseases among veterans . the proportions of elderly veterans who accessed knowledge about ccd or cdnd from health care professionals were approximately 50% and 10% , respectively . this finding indicated that medical personnel played an effective role in the popularization of chronic disease prevention knowledge . although the proportion of the oldest - old group and the prevalence of chronic diseases among veterans were significantly higher than those among the elderly in the general population , the overall health and functional status of veterans were significantly superior to those of the elderly in the general population . these results reflected the higher awareness status of chronic diseases among veterans and the active provision of education about chronic diseases by medical personnel . therefore , referring to the experience in veteran communities , education about chronic diseases in the general population by medical personnel should be strengthened to improve the currently poor awareness status of chronic disease prevention and control in china . similar to the findings in the general population , the most common approach used by elderly veterans to access prevention knowledge of chronic diseases was nonmedical professional media . the overall level of education among veterans was significantly higher than that among the elderly in the general population , and more than 90% of veterans regularly read books and newspapers , watch tv , and listen to the radio . accordingly , media plays an important role in the control of chronic diseases among veterans . however , information propagated through the media should be released after revision by healthcare professionals to avoid the dissemination of inaccurate information . concerning the popularization of the prevention knowledge about cdnd , the role of media was not as significant as that about ccd , whereas word of mouth was nearly the leading approach for the dissemination of information about cdnd .
these results suggested that media did not sufficiently address cdnd and that the popularization of this knowledge through media should be strengthened . although the word of mouth may not provide accurate information , numerous leisure activities at veteran communities enable frequent communication between veterans , such that prevention knowledge about chronic diseases can rapidly spread through peer education by word of mouth . because the knowledge about cdnd is highly technical , media and word of mouth should be guided to rapidly spread truthful prevention information to improve the currently poor awareness status of cdnd and their prevention . this study only investigated the awareness status of the prevention knowledge about chronic diseases , and the status of the correct understanding and the control of these diseases among veterans should be further investigated .
background : the awareness , treatment and prevention of chronic diseases are generally poor among the elderly population of china , whereas the prevention and control of chronic diseases in elderly veteran communities have been ongoing for more than 30 years . therefore , investigating the awareness status of chronic disabling neurological diseases ( cdnd ) and common chronic diseases ( ccd ) among elderly veterans may provide references for related programs among the elderly in the general population . methods : a cross - sectional survey was conducted among veterans ≥60 years old in veteran communities in beijing . the awareness of preventive strategies against dementia , alzheimer 's disease ( ad ) , parkinson 's disease ( pd ) , sleep disorders , cerebrovascular disease ( cvd ) and ccd such as hypertension , and the approaches used to access this information , including media , word of mouth ( verbal communication among the elderly ) and health care professionals , were investigated via face - to - face interviews . results : the awareness rates for ccd and cvd were approximately 100% , but that for ad was the lowest at < 10% . the awareness rates for sleep disorders , pd and dementia were 51.0 - 89.4% . media was the most commonly selected mode of communication by which veterans acquired knowledge about ccd and cvd . media was used by approximately 80% of veterans . both health care professionals and word of mouth were used by approximately 50% of veterans . with respect to the source of information about cdnd excluding ad , the rates of the use of health care professionals , word of mouth and media were 10.6 - 28.2% , 56.5 - 76.5% , and approximately 50% , respectively . conclusions : the awareness of cdnd among elderly veterans was significantly lower than that of ccd . more information about cdnd should be disseminated by health care professionals . appropriate guidance will promote the rapid and extensive dissemination of information about the prevention of cdnd by media and word - of - mouth peer education .
The suspect in a mass shooting at a Colorado movie theater dropped out of medical school last month. Spokeswoman Jacque Montgomery says 24-year-old James Holmes was a student at the University of Colorado School of Medicine in Denver until last month. She did not know when he started school or why he withdrew. Holmes is accused of killing a dozen people when he fired into a crowded movie theater in the Denver suburb of Aurora. He was wearing a gas mask and set off an unknown gas in the theater. Holmes is in police custody, and the FBI says there is no indication the attack is tied to any terrorist groups. ||||| AURORA —The 24-year-old accused of shooting some 70 people early Friday morning was a former honor student and recent graduate school dropout who apparently booby trapped his apartment and left the stereo blaring non-stop techno music before he headed to the local movie theater where police say he killed 12 people. James Eagan Holmes of 1690 Paris Street surrendered to police in the parking lot outside the theater "without any significant incident," Aurora police Chief Dan Oates said. Oates said Holmes made a statement to officers about possible explosives in his home. That prompted police to evacuate five buildings nearby and begin searching his third-floor apartment using a police robot and camera attached to a long pole. James Holmes (University of Colorado Denver Medical Campus) Inside, officers found trip wires attached to 1-liter plastic bottles that contain an unknown substance. Police Chief Dan Oates said the explosive devices were "pretty sophisticated." "We could be here for days," he said at midday. Holmes grew up in San Diego and graduated from Westview High School there in 2006. In 2010, he earned a degree in neuroscience from the University of California Riverside, a spokeswoman for the university said. Chancellor Timothy White said Holmes distinguished himself academically, graduating with highest honors, but that he did not walk at his commencement ceremony. "Academically, he was the top of the top," White said. Advertisement The Mai family has lived next door to the Holmes family for abut 15 years on a middle-class street in suburban San Diego. Christine Mai, 17, said she never saw James Holmes act violent or inappropriately. She never knew him or his family to have weapons or any conflicts. He grew up with a younger sister who plays guitar and attends San Diego State University. Christine Mai said Holmes' father went to Colorado to be with his son and his mother was holed up inside her home and didn't want to have any visitors. Dozens of reporters were camped outside the house. Police investigate the Aurora theater shooting suspect's apartment near the intersection of 17th Street and Paris Street on Friday, July 20, 2012. (Stephen Mitchell, The Denver Post) The Holmes had Christmas parties in their front yard and often exchanged gifts with the Mai family, she said. Last year, they shared hot apple cider in the front yard with other neighbors. "He seemed like a nice guy," she said. "His mother used to tell us he was a good son." Holmes left home to attend UCR, but returned home after graduation and had a hard time finding work. He took a part-time job at a nearby McDonald's to pay for school, she said. "He didn't have a job,"she said. "I felt bad for him because he studied so hard. My brother said he looked kind of down, he seemed depressed." 
Christine Mai and her father, Tom said they never saw Holmes socializing with friends, partying at his house or with any girlfriends. Contact The Post If you have information or tips related to this story, please call us at 303-893-TIPS or email us at tips@denverpost.com. "James was nice and quiet," Tom Mai said. "He was studious, he cut the grass, and cleaned the car. He was very bright." Julie Adams said her son played soccer with Holmes at Westview High. Holmes played his freshman and sophomore year, she said. While most of the other kids — her son Taylor included — played league soccer and continues the sport throughout high school, Holmes wasn't as involved, she said. "I could tell you a lot about every single kid on that team except for him," Adams said. "He was more aloof." She was shocked to discover this morning that the helicopters were circling her San Diego neighborhood because Holmes' alleged rampage. "Taylor remembers playing soccer with him. He said he was quiet, reserved and a respectful kid," Adams said. According to her son's yearbook, Holmes also ran cross country as a freshman but did not continue the sport. Holmes enrolled in the graduate program in neurosciences at the University of Colorado Anschutz Medical Campus in Aurora in June 2011 but was in the process of withdrawing, university spokeswoman Jacque Montgomery said Friday. In an e-mail message to members of the campus community, Doug Abraham, Chief of Police for the university, said Holmes left the school in June and his access to campus buildings was terminated while his withdrawal was being processed. He said officials do not believe Holmes had been on campus since then, but authorities evacuated non-essential personnel from the research buildings as a precautionary measure while they wait for bomb-sniffing dogs to do a search of the buildings "to add another level of assurance." In an apartment rental application he submitted for a different apartment early last year, Holmes described himself as a "quiet and easy-going" student. Other tenants in his building — which is reserved for students, faculty and staff of the medical campus — described him as a recluse. A pharmacy student who also lives in the building told The Post he called 911 around 12:30 a.m. because there was a song blaring from the stereo inside apartment 10, where Holmes lived. The student, who wanted to be identified only as Ben, said he couldn't make out the song but that it seemed to be playing on repeat. Kaitlyn Fonzi, a 20-year-old biology student at University of Colorado Denver, lives in an apartment below Holmes. Around midnight, Fonzi said she heard techno music blasting from Holmes apartment. She went upstairs and knocked on the door. When no one answered, she put her hand on the door knob and realized the door was unlocked. Fonzi decided not to go inside the apartment. The music turned off at almost exactly 1 a.m., Fonzi said. Police received several reports of the shooting at the Century 16 Movie Theaters at the Aurora Town Center around 12:39 a.m. Witnesses told police that a man entered the dark, packed theater and opened fire after throwing two smoke canisters. Oates said he was dressed in black and wearing a ballistic helmet and vest, ballistic leggings, throat and groin protector and gas mask and black tactical gloves. He was armed with three weapons. NBC News said law enforcement officials told them the weapons were bought from local stores of two national chains — Gander Mountain Guns and Bass Pro Shop — beginning in May. 
His neighbors in Aurora said he kept to himself and wouldn't acknowledge people when they passed in the hall and said hello. "No one knew him. No one," one man said. Fonzi said Holmes seemed normal and studious. The maintenance person at Holmes' last apartment in Riverside remembered Holmes much the same way. Jose Torres,45, said he didn't remember Holmes having a roommate and said he wasn't social. Torres gasped when he realized Holmes was the accused shooter in Colorado. "He did not talk too much," Torres said after looking at a photograph of Holmes. "He don't say hi. He was just quiet with no problems." When told of the booby traps authorities found in Holmes' apartment in Aurora, Torres appeared shocked. "He didn't destroy this apartment when he left," Torres said. "When he left it was in good condition." Authorities began searching Holmes' Aurora apartment building around 2 a.m. A resident of the building who didn't want to give his name said he answered his door to see police with rifles. An officer asked if he had seen a white guy with crazy hair, possibly dyed unnatural colors, the man said. It was unclear if the officer was referring to Holmes. Wes Bradshaw and his mother Lavonne watched the search of the third-floor apartment from their apartment, they said. The two watched a police robot enter the building right before they heard a small explosion. The two, along with the rest of the building's tenants, were ordered to evacuate soon after, they said. Residents of the area were huddled on street corners waiting for news. Police at the scene told them it could be hours before they are allowed to return to their homes. About 6:30 a.m., three police officers on a fire truck bucket were looking through the window of the third-floor apartment and taking pictures. Using a long pole, responders broke into the window from the basket atop a ladder truck. Aurora Deputy Fire Chief Chris Henderson said authorities could see several "string-like contraptions" inside. "We're not sure exactly where they connect to," Henderson said. Jim Yacone, special agent in charge of the Denver FBI, said they were working on "how to disarm the flammable or explosive material." Neighbors in a fourth-floor apartment one building had a bird's eye view of the suspect's apartment. They said the curtains are usually closed and they never see any movement inside the apartment. Even at night, there are no lights or anything in that apartment, said Yesenia Lujan, 24, who has lived in her apartment for seven months. Using a camera with a zoom lens, Lujan's roommate said he could see into the suspect's kitchen, where a poster of Will Farrell in the movie "Anchorman" was hanging on the wall. Holmes is scheduled to appear in court in Arapahoe County on Monday morning. Staff writers Jordan Steffen, Kieran Nicholson and Monte Whaley and freelance writer Felisa Cardona contributed.
– The 24-year-old accused of a movie theater shooting rampage was attending medical school until just last month, says a rep for the University of Colorado. The spokeswoman didn't know why James Holmes dropped out of the university, the AP reports. He lived in an apartment building exclusive to med school affiliates, according to a fellow tenant—and police say Holmes' apartment is filled with complex booby traps, including explosives. "We could be here for days" disarming the "sophisticated" traps, police chief Dan Oates tells the Denver Post. Police, firefighters, and FBI officials have been at the apartment all day, with some taking photographs through the window via a fire engine's ladder. Neighbors say Holmes was reclusive: "No one knew him. No one," says another student who lives in the building. Holmes ignored it when others greeted him, the tenant says. The tenant called 911 early this morning after hearing a song playing loudly and repeatedly at 12:30am in Holmes' apartment, and the building was evacuated after 2am as police searched it. Public records show that Holmes is from San Diego, where his parents still live, and may have once attended the University of California Riverside. He moved into the Aurora apartment in May 2011, and enrolled at the University of Colorado as a grad student in the neurosciences program the following month.
Facebook Inc. (FB) co-founder Eduardo Saverin will save at least $67 million in federal income taxes by dropping U.S. citizenship, according to a Bloomberg analysis of the company’s stock price. Those savings will keep growing if Facebook’s shares increase. Saverin renounced his citizenship around September and he lives in Singapore, according to his spokesman, Tom Goodman. Saverin, 30, was part of a small group of Harvard University students who started the social networking site. He owns about 4 percent of the company, according to whoownsfacebook.com. The would-be savings underscore why more people are giving up U.S. citizenship before potential increases in taxes for the highest earners. The value of Saverin’s stake has swelled along with increases in Facebook’s share price before its planned initial public offering. The company plans to sell shares for as high as $38 apiece this week, compared with $32.10 in private auctions on SharesPost Inc. on Sept. 26. Saverin’s stake may be worth as much as $2.89 billion, based on the company’s 1.898 billion total shares outstanding. His stake was worth about $2.44 billion in September. Capital Gains Bloomberg calculated the $67 million figure by applying the 15 percent U.S. capital gains rate to the approximate $448 million spread between the two values. Bloomberg’s methodology was reviewed by Robert Willens, an independent tax adviser based in New York. “The calculations and assumptions are not only erroneous, they also further perpetuate the false impression that tax was the reason behind Eduardo’s decision,” Goodman said, declining to cite specific errors. “His motive had nothing to do with tax and everything to do with his desire to live and work in Singapore.” Whoownsfacebook.com is published by Massinvestor Inc. and draws its information from Facebook’s filings with the U.S. Securities and Exchange Commission, press releases, news reports and other publicly available sources. Saverin’s capital gains tax liability comes “at a time when the rate is probably the lowest it ever will be, and it’s a substantial discount to the value of what his position in Facebook will likely be two weeks from now,” said Edward Kleinbard, a tax law professor at the University of Southern California in Los Angeles. Any profit from future appreciation of Saverin’s Facebook stock will be earned free of capital gains tax in the U.S. and Singapore, which doesn’t impose the tax. “That’s got to be by far the biggest benefit, assuming Facebook’s stock appreciates at even a fraction of the level people expect,” Willens said. Exit Tax Americans who give up their citizenship owe what is effectively an exit tax on the estimated capital gains from their stock holdings at the time of the renunciation. Saverin’s bill would be about $365 million, though even that can be deferred indefinitely until he actually sells the shares. In the mean time, Saverin could choose to only pay interest to the U.S. government during the deferral period -- now at an annual rate of 3.28 percent. His savings may be even greater because Saverin’s tax advisers could argue that the value of his stake in September was less than the $2.44 billion used in Bloomberg’s calculation because selling such a large amount of stock at the then-market price wasn’t possible. By locking in his liability last year, Saverin may enjoy one more benefit: The capital gains tax rate is set to increase to 20 percent or even higher. 
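As a back-of-the-envelope check on the arithmetic described above (rounded inputs, purely illustrative, not Bloomberg's actual model), the spread and the implied tax savings can be reproduced in a few lines:

# Rough reproduction of the calculation described in the article.
shares_outstanding = 1.898e9          # total Facebook shares
stake_fraction     = 0.04             # Saverin's reported ~4% stake
ipo_price          = 38.00            # top of the planned IPO range, $/share
sept_price         = 32.10            # SharesPost auction price, Sept. 26
cap_gains_rate     = 0.15             # long-term capital gains rate

stake_shares = shares_outstanding * stake_fraction
spread       = stake_shares * (ipo_price - sept_price)   # roughly $448 million
tax_saved    = spread * cap_gains_rate                    # roughly $67 million
print(f"spread = ${spread/1e6:.0f}M, tax avoided = ${tax_saved/1e6:.0f}M")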
||||| When Eduardo Saverin was 13, his family discovered that his name had turned up on a list of victims to be kidnapped by Brazilian gangs. Saverin's father was a wealthy businessman in São Paulo, and it was inevitable that he'd attract this kind of unwanted attention. Now the family had to make a permanent decision. They hastily arranged a move out of the country. And of all the places in the world they could move to, the Saverin family saw only one option. They took their talents to Miami. Would it be too much to say that America saved Eduardo Saverin? Probably. Maybe that's just too overwrought. The Saverins were just another in a long line of immigrants who'd come to America for the opportunity it affords—the opportunity, among other things, to not have to worry that your child will be kidnapped just because you've become wealthy. Just because his parents moved here doesn't mean Eduardo Saverin owes America anything, right? Yet if you study the trajectory of Saverin's life—the path that took him from being an immigrant kid to a Harvard student to an instant billionaire to the subject of an Oscar-winning motion picture—it emerges as a uniquely American story. At just about every step between his landing in Miami and his becoming a co-founder of Facebook, you find American institutions and inventions playing a significant part in his success. Would Eduardo Saverin have been successful anywhere else? Maybe, but not as quickly, and not as spectacularly. It was only thanks to America—thanks to the American government's direct and indirect investments in science and technology; thanks to the U.S. justice system; the relatively safe and fair investment climate made possible by that justice system; the education system that educated all of Facebook's workers, and on and on—it was only thanks to all of this that you know anything at all about Eduardo Saverin today. Now comes news that Saverin has decided to renounce his U.S. citizenship, most likely to avoid a large long-term tax bill on his winnings in the Facebook IPO. Saverin owns about 4 percent of Facebook stock. By renouncing his citizenship last fall, well in advance of the IPO, Saverin will pay an "exit tax" on his assets as they were valued then. But he'll pay no tax on income derived from stock sales in the future—that's because he now lives in Singapore, which has no capital gains tax. It's unclear how much this move will save him, since it depends on how Facebook's stock performs. But let's say the value of his stock doubles over the long run, from an estimated $3.8 billion now to around $8 billion. If that happens, he won't pay any tax on the $4 billion increase in value—which, at a 15 percent capital gains rate, will save him $600 million in taxes. Is this fair? No. It's worse than that, though. It's ungrateful and it's indecent. Saverin's decision to decamp the U.S. suggests he's got no idea how much America has helped him out. So, to enlighten him, let's list all the ways Eduardo Saverin has benefitted from America. First and most obviously, he lived a life of relative safety in Miami, something that wasn't guaranteed for him in Brazil. Second, also obvious: If Saverin hadn't come to America, he wouldn't have met Mark Zuckerberg, and—not to put too fine a point on it—if Saverin hadn't met Zuckerberg, Saverin wouldn't be Saverin. Third: Harvard.
Zuckerberg and his cofounders met in the dorms, and while Harvard is a nominally private institution, it enjoys significant funding and protections from the government. In 2011, Harvard received $686 million, about 18 percent of its operating revenue, from federal grants; that’s almost as much as it received from student tuition. Would Facebook have been founded without Harvard? Perhaps—maybe Facebook would have come about wherever Zuck went to school. Still, there were social networks at lots of other schools. There was clearly something about Harvard’s student body that was receptive to Facebook. More generally, elite, government-sponsored American universities like Harvard have been instrumental in the founding of many tech giants. Microsoft’s founders met at Harvard. Yahoo and Google’s founders met at Stanford. But even if you believe that these universities shouldn’t claim credit for the companies they brought about, it’s still hard to argue that Facebook would be where it is today without the American taxpayers’ large investment in public education. Facebook depends on really smart people to make its products. You don’t get smart people without tax dollars. Fourth: The American government’s creation of the Internet. The strangest thing about Silicon Valley’s libertarian politics is how few people here recognize how the Internet came about. ARPANET, the earliest large-scale computer network that morphed into the Internet, was funded by the U.S. Defense Department, as was the research into fundamental technologies like packet switching and TCP/IP. Delve deeper into the network and you get to the microprocessors that run the world’s computers—another technology that wouldn’t have come about by loads of federal research grants. Even the Web itself can trace its founding to government grants. Tim Berners-Lee worked at CERN, the research group funded by Europeans governments, when he worked on the HTTP protocol. Marc Andreessen worked at National Center for Supercomputing Applications—which is funded by in a partnership between the federal government and the state of Illinois—when he created the Mosaic Web browser. Then you’ve got GPS, a technology that makes much of the mobile revolution possible, and one that is wholly created and operated by the U.S. government. Fifth: The judicial system. If it weren’t for the U.S. courts and laws, Saverin might have been permanently shut out of Facebook. But in 2009, he settled a lawsuit with Facebook that gave him credit as a co-founder and his current stake in the firm. In other words, it’s only because Saverin could sue Facebook and depend on a relatively fair judicial system that he’s got the billions on which he’s now skirting taxes. Fair courts aren’t to be taken for granted, by the way. There are many places in the world where, if you are wronged by a billionaire, you wouldn’t be able to do anything about it. One of those places is Brazil; according to Transparency International, the courts in Saverin’s birth country are beset by corruption. Now, none of this is to discount Saverin’s own contributions to Facebook’s success. Though he was only there at the beginning—and although he had some pretty terrible ideas for Facebook, including his plan to show interstitial ads when you went to add a friend—let’s assume that he did in fact add $4 billion of value to the world. The question is, what’s fair for him to keep? As an immigrant myself, I’ve got no patience for the argument that he should keep all of it. 
Pretty much everything in my life that I enjoy wouldn't have happened without my being in the United States. My education, my job, my wife and family, the fact that I'm not persecuted for my race or religion (I was born in South Africa), the fact that I can sometimes forget to lock my doors at night and not end up killed by marauding bands—I hate paying taxes as much as the next guy, but when I think about all the ways that the United States has been integral to everything in my life, taxes seem like a tiny price. Now, remember that the tax rate on long-term capital gains is only 15 percent. In other words, Saverin gets to keep 85 percent of everything he's making from Facebook's IPO. Given how much of his wealth depends on the government, that's more than fair. ||||| ACCORDING to the internet's hilarious headline writers, Eduardo Saverin, a Facebook co-founder, dis-"likes" America's tax rules and has "un-friended" the land of the free in order to dodge a potentially monumental tax bill after Facebook goes public. Mr Saverin is Brazilian by birth, but has been an American citizen since 1998. Last fall, he filed the papers to renounce his American citizenship. Considering how well Mr Saverin has done here, is this jake? Farhad Manjoo thinks that not only is Mr Saverin's extreme self-deportation unfair, "It's ungrateful and it's indecent. Saverin's decision to decamp the U.S. suggests he's got no idea how much America has helped him out." Ilyse Hogue of The Nation is incensed: In making this decision, the Brazilian native did more than expose his blind disregard for all that his adopted country has done for him. He has made himself the poster child for the callous class of 1 percenters who are all too happy to use national resources to enrich themselves, and then skate, or cry foul, when asked to pay their fair share. The story evokes the image of the marauding aliens from the movie Independence Day, who come to Earth to take what they can get before moving on to another planet. Wait a second! Did Eduardo Saverin plunder us? Are we now a desolate husk of a country, sucked dry by Eduardo Saverin's rapine? Well, no. Facebook created wealth. Mr Saverin is leaving having deployed his capital in a manner that made America better off than it was when he arrived. But will he escape without rendering unto Caesar what is Caesar's? Well, no. Both Mr Manjoo and Ms Hogue mumble in passing under their breath while coughing that Mr Saverin will have to pony up an "exit tax". So what's this woefully insufficient tribute come to, such that Mr Saverin may be so bitterly denounced for exploitation and despoilment? According to Danielle Kucera, Sanat Vallikappen and Christine Harper of Bloomberg: Saverin won't escape all U.S. taxes. Americans who give up their citizenship owe what is effectively an exit tax on the capital gains from their stock holdings, even if they don't sell the shares, said Reuven S. Avi-Yonah, director of the international tax program at the University of Michigan's law school. For tax purposes, the IRS treats the stock as if it has been sold. Got that? Mr Saverin's on the hook for the amount his capital-gains tax would have come to had he sold all his American stock holdings. Tim Worstall sketches it out on his napkin: [T]he net effect of his citizenship renunciation on his immediate tax bill is to increase it, hugely.
For it will, at minimum, start with the idea that he's just made a $3.5 billion or so profit (adjusted downwards for the difference between the private market value of Facebook last fall and the IPO price) on his Facebook stock which he got originally for minimal amounts of money. At the standard 15% long term capital gains rate that's near $500 million right there. Half a billion dollars! That is not scot-free. Did the marauding aliens in "Independence Day" leave behind a half billion American dollars after having successfully invested in Earth? They did not! One wonders how many pounds of flesh Mr Manjoo and Ms Hogue think Mr Saverin owes for the privilege of having Uncle Sam's hooks out once and for all. Mr Saverin is actually taking a bit of a gamble. This is a bet that his post-IPO shares will be worth more than his pre-IPO shares. There's a good chance that he's getting a discount relative to the prospective, immediately post-IPO valuation of his Facebook shares, due to the potential difficulty of offloading privately-held stock. But stocks go down as well as up. Should the value of his Facebook stock decline below the amount at which it has been valued for exit-tax purposes, Mr Saverin may end up having donated handsomely to the Treasury. Pace Mr Manjoo, Jim Rogers, a well-known investing guru, thinks it's America's terms of exit that are unfair: The press seemed to say [Mr Saverin] did it to avoid [taxes], he has to pay taxes. He had to pay huge taxes, hundreds of millions of dollars to give up his citizenship and if it [Facebook] hadn't gone public or if something had gone wrong with the IPO, he would have been in a real bind. When you give up your American citizenship, it's not fair as far as I'm concerned, but the rules are that you have to pay everything, you have to pay taxes on everything you own and then you can leave. I mean no other country in the world does that, we've got our own Berlin Wall, it's very expensive to leave, to give up your citizenship. Thanks for everything, Eduardo. Enjoy Singapore. (Photo credit: Corbis)
– Eduardo Saverin's decision to abandon the United States like an old MySpace account will save the Facebook co-founder at least $67 million in federal income tax, according to a Bloomberg analysis—and those savings will only grow in the not-unlikely event that Facebook stock soars. That's because Saverin will only pay an "exit tax" on his stake's September valuation (about $2.44 billion), even though it's grown since. But a Saverin spokesman says Bloomberg's figures are "erroneous" and that Saverin's departure "had nothing to do with tax." Most people, of course, assume it did have something to do with tax, and many are furious. "It's ungrateful and it's indecent," fellow immigrant Farhad Manjoo fumed in Pando Daily, saying Saverin owed America "nearly everything"—let's not forget, US courts saved his Facebook stake, and its government created the Internet itself. But the Economist disagrees. "Facebook created wealth," it argues. "Mr. Saverin is leaving having deployed his capital in a manner that made America better off than it was when he arrived."
The New Jersey man who just won a $338 million Powerball jackpot is now due in court. Pedro Quezada is scheduled to appear Monday afternoon in state Superior Court in Paterson. The 1:30 p.m. hearing stems from a child support warrant issued for the 44-year-old Passaic resident, who authorities say owes about $29,000 in back support. Authorities announced Saturday that the warrant had been stayed until the scheduled court appearance. Quezada claimed a lump-sum payment last week worth $221 million, or about $152 million after taxes. Sheriff's department spokesman Bill Maer (mayr) says the state Lottery Division generally satisfies such judgments before winnings are released. The unpaid child support payments go back to 2009. It's not known which of Quezada's five children are covered under the payments.
– Pedro Quezada won a $338 million Powerball jackpot—and he's using it to pay his New Jersey neighbors' rent, a friend tells the New York Daily News. "He's such a good guy," the buddy says. "He said he's going to pay the rent for everybody here on this block for at least a month or two." Said another neighbor: "God bless him, and thank you." And while a friend says Quezada has now paid the $29,000 in child support he owed, the AP reports the 44-year-old is due to appear today in state Superior Court in Paterson for an afternoon hearing stemming from a child support warrant issued against him.
the evolution and eventual dispersal of discs around young stars is an important area of study , as it provides constraints on theories of both star and planet formation . it is now well established that the majority of young stars at ages of @xmath0yr have circumstellar discs which are optically thick at optical and infrared wavelengths @xcite . these discs are relatively massive , with masses of the order of a few percent of a solar mass @xcite . however by an age of @xmath1yr most stars are no longer seen to have such massive discs , although low - mass `` debris discs '' may remain ( eg . @xcite ) . the mechanism by which these discs are dispersed remains an important unsolved question . one fact which is clear , however , is that a large fraction of the mass from the disc will eventually be accreted on to the central ( proto)star . the inner edge of the accretion disc is typically truncated by the magnetosphere at a radius of around 5@xmath2 @xcite and so material falling from the disc on to the central star can attain an extremely high velocity , resulting in a so - called `` accretion shock '' when this material impacts upon the stellar surface . existing models of the accretion shock @xcite have paid a great deal of attention to the emission in the ultraviolet ( 1000 - 3000 ) and visible ( 3500 - 7000 ) wavebands , comparing theoretical predictions to observed spectra . extremely good models have been constructed , and these emission spectra are now well understood . however , very little attention has been paid to the emission shortward of the lyman break ( @xmath3 912 ) , primarily because absorption by interstellar hi makes it impossible to observe young stars in this wavelength regime . recent theoretical studies @xcite have suggested that photoionisation by the central object may play an important role in disc dispersal , and so the emission shortward of the lyman break has become important . currently the origin of photoionising radiation from young stars such as t tauri stars is unclear , and even the magnitude of such emission is poorly constrained @xcite . to date , models of disc photoionisation have used either a constant ionising flux ( eg . @xcite ) , assumed to be chromospheric in origin , or modelled the accretion - driven flux simply as a constant temperature hotspot on the stellar surface , emitting as a blackbody @xcite . the latter produces an ionising flux which is proportional to the mass accretion rate , thus decreasing dramatically with time as the accretion rate falls , and so naturally the models of @xcite and @xcite have produced markedly different results . these models @xcite also find that the mass - loss rate scales approximately with the square root of the ionising flux , and so very high ionising fluxes are required to influence disc evolution significantly : an ionising flux of @xmath4photons s@xmath5 will drive mass - loss from the disc at around the @xmath6@xmath7 level . in this paper we study the issue of the ionising flux generated by an accretion shock . the simplified model of the accretion shock adopted by @xcite neglects two key points , which we address in turn . firstly , it seems likely that the lyman continuum emission from such a hotspot would resemble a stellar atmosphere rather than a blackbody ; whilst the two are almost identical at longer wavelengths , photo - absorption by hi provides a strong suppression of the flux shortward of 912 . 
secondly , the photons emitted by the accretion shock must pass _ through _ the column of material accreting on to it in order to interact with material in the disc , and again we would expect photo - absorption by hi in the column to supress the lyman continuum significantly . in order to address these issues we have modelled this process in some detail . in section [ sec : atmos ] we investigate the effect of replacing the blackbody hotspot with a more realistic stellar atmosphere . in section [ sec : columns ] we investigate the effect of passing these photons , from both the blackbody and the stellar atmosphere , through a column of accreting material . in section [ sec : dis ] we discuss our results and the limitations of our analysis , and in section [ sec : summary ] we summarise our conclusions . the photoionisation models of @xcite model the ionising photons as follows . they assume that the flux from the accretion shock can be modelled as a constant temperature hotspot , and adopt a blackbody spectral energy distribution at a temperature of @xmath8k . they assume that half the accretion luminosity is radiated by this hotspot , and so for a star of mass @xmath9 and radius @xmath2 the accretion shock luminosity @xmath10 is given by : @xmath11 where @xmath12 is the rate of mass accretion from the disc , @xmath13 is the area of the hotspot , and @xmath14 is the stefan - boltzmann constant . as the temperature is constant the rate of ionising photons @xmath15 is simply proportional to the accretion rate @xmath12 . in addition , @xcite add a further contribution to the ionising flux from the stellar photosphere , neglecting the poorly - constrained chromospheric contribution , at a constant rate of @xmath16photons s@xmath5 , with the total ionising photon rate given by @xmath17 . our assertion is that , given the high density of atomic hydrogen , it is extremely unlikely that such a hotspot would radiate as a blackbody . a spectrum akin to a stellar atmosphere , showing a significant `` lyman edge '' , seems much more likely . as a result , we first consider the strength of this effect . our first models simply involve substituting model stellar atmospheres in place of the blackbody emission in equation [ eq : mjh ] . we have adopted the same stellar parameters as @xcite ( @xmath18 , @xmath19 ) , and similarly adopted a constant hotspot temperature of 15,000k . the luminosity @xmath10 scales with @xmath12 in the same manner as in equation [ eq : mjh ] . we have utilised the kurucz model atmospheres @xcite ( which have been incorporated into the cloudy code ) for this temperature and surface gravity . the model atmospheres do not deviate significantly from the blackbody at longer wavelengths , but are some 3 orders of magnitude less luminous than the corresponding blackbody at wavelengths shortward of the lyman limit at 912 , due to absorption by hi . [ fig : spectra ] compares the blackbody and kurucz spectra . ( the apparent lack of emission lines in fig [ fig : spectra ] is an artefact of the relatively large bin - width used within the code . the use of such large wavelength bins results in line fluxes which are negligible in comparison to the continuum . ) fig.[fig : atmos ] plots the ionising fluxes as a function of accretion rate , and we see that the stellar atmosphere hotspot produces ionising photons at a rate that is a factor of 1100 less than that obtained from the blackbody model . another consideration is that of the hotspot area . 
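Before turning to the hotspot area, the constant-temperature prescription described above (half the accretion luminosity radiated by a 15,000 K hotspot, so the ionising photon rate scales linearly with the accretion rate) can be illustrated numerically. The sketch below is not the authors' code: the stellar mass and radius are illustrative placeholders, since the exact values adopted in the text sit behind the @xmath markers, and only the blackbody case is shown.

import numpy as np

# Physical constants (cgs)
h, c, k_B = 6.626e-27, 2.998e10, 1.381e-16
sigma_SB, G = 5.670e-5, 6.674e-8
M_sun, R_sun = 1.989e33, 6.957e10

def ionising_rate_blackbody(mdot_msun_yr, T_hot=1.5e4,
                            M_star=1.0 * M_sun, R_star=2.5 * R_sun):
    """Photon rate shortward of 912 A from a constant-temperature blackbody
    hotspot radiating half the accretion luminosity.  M_star and R_star are
    illustrative assumptions, not the paper's adopted values."""
    mdot = mdot_msun_yr * M_sun / 3.156e7            # g/s
    L_hot = 0.5 * G * M_star * mdot / R_star         # half the accretion luminosity [erg/s]
    A_hot = L_hot / (sigma_SB * T_hot**4)            # hotspot area implied by the fixed T
    nu_lym = c / 912e-8                              # Lyman-limit frequency
    nu = np.linspace(nu_lym, 10.0 * nu_lym, 100000)
    B_nu = (2 * h * nu**3 / c**2) / np.expm1(h * nu / (k_B * T_hot))
    phi_per_area = np.trapz(np.pi * B_nu / (h * nu), nu)   # photons s^-1 cm^-2
    return A_hot * phi_per_area                      # photons/s, linear in mdot

for mdot in (1e-9, 1e-8, 1e-7):
    print(mdot, "%.3e" % ionising_rate_blackbody(mdot))

Because the hotspot area is set by the fixed temperature, the photon rate inherits the linear dependence on the accretion rate; swapping the Planck function for a model-atmosphere flux is what suppresses the output by roughly three orders of magnitude.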
the @xcite blackbody formulation described in equation [ eq : mjh ] uses a hotspot temperature which remains constant for different mass accretion rates , implying a hotspot size which decreases as the accretion rate drops . the accreting material is thought to be channelled on to the magnetic poles as it falls on to the stellar surface , and unless the topology of the magnetic field varies systematically wth the accretion rate , the hotspot area should remain approximately constant with time . if the hotspot area @xmath13 is constant then we would expect , from equation [ eq : mjh ] , the hotspot temperature to vary as @xmath20 the _ total _ luminosity of a model atmosphere is very similar to that of a blackbody , and so we adopt this relationship for the stellar atmospheres also . we re - evaluated the model atmospheres described in section [ sec : const_t ] , keeping the scaling luminosity proportional to @xmath12 , but now with the temperature given by : @xmath21 the result of this is shown in fig.[fig : atmos ] ; the drop - off in the ionising photon rate is much more precipitous than in the constant temperature case , with only very high mass accretion rates , greater than @xmath22@xmath23yr@xmath5 , producing ionising photons at greater than the photospheric rate . in fact , the relationship between the hotspot area and the mass accretion rate is not well understood , and @xcite even found observational evidence for a hotspot area which increases with @xmath12 ( which would imply an even steeper decline in the ionising flux ) . however , we go on to show that the presence of an accretion column above the hotspot is by far the dominant factor in controlling the ionising photon rate , and so the exact details of the hotspot area are not of great significance . the other issue which affects the emitted ionising flux is the assumed presence of a column of accreting material directly above the hotspot . this material will absorb lyman continuum photons through photoionisation of hi , and so a large attenuation of the ionising flux is expected . adopting the photoionisation cross - section from @xcite of @xmath24@xmath25@xmath26@xmath27 , indicates that any column density greater than 5@xmath25@xmath28@xmath29 will result in an attenuation of the incident flux by a factor of @xmath30 , enough to reduce _ any _ incident ionising photon rate to below photospheric levels . the density of the infalling material is of order 5@xmath25@xmath31@xmath32 @xcite and so this results in an attenuation length of order 10@xmath33 cm ( 10@xmath34@xmath35 ) . as a result , the only lyman continuum photons which can be emitted , at any significant rate , by an accretion column must be due to radiative recombination of hydrogen in the column , the so - called diffuse continuum . in order to investigate this effect further we have constructed models of the accretion column using the cloudy photoionisation code @xcite . these models consist of a uniform central source , radiating either as a blackbody or a stellar atmosphere , and an accretion column . the central source emits in the radial direction only , and the accretion column covers a constant solid angle . thus , by a simple linear subtraction of the emission not incident on the column we can treat the column as if it were illuminated solely by a hotspot at its base with a flux in the radial direction . in reality such a hotspot would produce some lateral component of flux near to its edges : this is discussed in section [ sec : dis ] . 
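The attenuation argument above can be put in back-of-the-envelope form. The sketch below assumes the textbook hydrogen photoionisation cross-section at the Lyman limit (about 6.3e-18 cm^2) and an illustrative infall number density; the specific column densities and attenuation factors quoted in the text are hidden behind placeholders, so these numbers are only meant to show how quickly the optical depth becomes prohibitive.

import numpy as np

sigma_HI = 6.3e-18   # H photoionisation cross-section at 912 A [cm^2] (textbook value)
n_H = 5e11           # illustrative infall number density [cm^-3]

def transmission(path_length_cm):
    """exp(-tau) for Lyman-limit photons crossing a uniform slab of neutral H."""
    tau = sigma_HI * n_H * path_length_cm
    return np.exp(-tau)

for L in (1e5, 1e6, 1e7):  # path lengths in cm
    N_H = n_H * L
    print(f"L = {L:.0e} cm, N_H = {N_H:.1e} cm^-2, transmission = {transmission(L):.2e}")

# With these assumed values the attenuation length 1/(n_H * sigma_HI) is only a few
# times 1e5 cm, and a column density of a few 1e18 cm^-2 already gives tau of order
# 10-20, i.e. the incident Lyman continuum is effectively extinguished.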
the accretion column has a solar chemical composition ( although as the dominant effect is photoionisation of hi there is almost no dependence on chemical composition ) and covers a small fraction @xmath36 of the stellar surface . following @xcite we adopt a `` free - fall '' scaling density of the form : @xmath37 the radial behaviour of the density is obtained by assuming that the material falls along magnetic field lines , in a manner consistent with standard magnetospheric accretion models ( eg . therefore at a given radius the product of the field strength and the column cross - sectional area is a constant . assuming a dipole magnetic field we have @xmath38 , and therefore the cross - sectional area is proportional to @xmath39 . for a column with cross - sectional area @xmath40 and mass density @xmath41 , mass conservation requires that : @xmath42 the number density @xmath43 , and the free - fall velocity @xmath44 , and so we have a density scaling law of : @xmath45 with the condition in equation [ eq : hden ] used to fix the scaling constant . cloudy allows us to evaluate the continuum and line emission from the top of the accretion column , which is a combination of the continuum incident on the bottom of the column , attenuated by the column , and the diffuse emission from the heated column . it does not allow direct evaluation of the emission from the `` sides '' of the accretion column , which may be significant and is discussed in section [ sec : dis ] . again , we have adopted @xmath18 and @xmath19 , and have constructed models of these accretion columns for a broad range of accretion rates , hotspot areas and column heights . initially both the blackbody and stellar atmosphere hotspot formulations were used , with the `` constant temperature '' formalism used as it provides the greatest ionising flux to the column and can be treated as a limiting case . however , as discussed above , the incident lyman continua from both are extinguished over a very short length scale , and so the only lyman continuum emission which emerges from the columns is the so - called `` diffuse '' emission due to the radiative recombination of atomic hydrogen . as seen in fig.[fig : spectra ] , the lyman continua emitted by the columns are identical in both cases , and depend only the nature of the column rather than the spectrum of the illuminating hotspot , as both hotspot spectra have the same bolometric luminosity . as a result , only the more realistic stellar atmosphere models were used for the remainder of the cases . the results of the simulations described above are presented in figs.[fig : colspec ] & [ fig : phi ] . fig.[fig : colspec ] shows the incident and transmitted spectra for a typical case at a variety of different column heights , and fig.[fig : phi ] shows the dependence of the emergent ionising flux on column height . cloudy is only valid for temperatures greater than @xmath463000k - for temperatures lower than this the thermal solutions are no longer unique - and for large column heights or cases with little heating ( ie . low accretion rates ) the temperature dropped below this critical value . the models were not pursued beyond this point , with 3000k used as the temperature limit of the calculations . as seen in fig.[fig : phi ] , the emergent ionising flux , as expected , decreases with both decreasing accretion rate and with increasing column height . 
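As an aside on the density law quoted earlier in this section: the @xmath placeholders hide its explicit form, but it can be reconstructed from the stated ingredients (mass conservation along a dipolar flux tube with material in free fall). The reconstruction below is our reading of that argument, not a verbatim copy of the paper's equations.

\[
\rho(r)\,v_{\rm ff}(r)\,A(r) = \mathrm{const}, \qquad
v_{\rm ff}(r) \propto r^{-1/2}, \qquad
B(r) \propto r^{-3} \;\Rightarrow\; A(r) \propto B^{-1} \propto r^{3},
\]
\[
\Rightarrow\quad n(r) \;\propto\; \frac{1}{v_{\rm ff}(r)\,A(r)} \;\propto\; r^{1/2}\,r^{-3} \;=\; r^{-5/2},
\qquad n(r) = n_{0}\left(\frac{r}{R_{*}}\right)^{-5/2},
\]

with the normalisation $n_{0}$ fixed by the free-fall scaling density at the stellar surface.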
smaller hotspots result in higher density of material in the accretion column for a given accretion rate , and so tend to produce slightly larger ionising fluxes . however these variations are small relative to those due to variations in column height or accretion rate . the most important point , however , is that the emergent ionising photon rates for _ all _ of the columns are less than the photospheric value of 10@xmath47photons s@xmath5 . further , the photospheric value is itself some 10 orders of magnitude lower than the rate required to influence disc evolution significantly . this means that the accretion columns we have modelled can not emit ionising photons at a rate that will be significant in disc photoionisation models , for any choice of parameters . there are obvious caveats to these models . the first , and most significant , is that the cloudy code is only completely reliable at densities less than @xmath48@xmath32 ; it is prone to numerical problems at higher densities , and for cases of high @xmath12 and small @xmath36 the density in our models can exceed this value . however , the main uncertainties at these densities are regarding the treatment of heavy elements . the dominant effects in the regime in which we are interested are photoionisation and recombination of atomic hydrogen , and these processes _ are _ treated reliably by the code . however in order to compensate for this numerical problem we were forced to limit the density artificially to have a maximum value of @xmath49@xmath32 . this affected the 3 models with the highest densities ( @xmath50@xmath7 , @xmath51 and @xmath52@xmath7 , @xmath53 ) , and the reduction in density results in these models under - estimating the ionising flux somewhat . this does introduce some uncertainty into our models , but we consider the effect to be small relative to the gross effects which dominate the calculations . a further caveat regards the issue of non - radial emission , both from the hotspot and from the `` sides '' of the column , as mentioned in section [ sec : columns ] . the ionising photon rates from our models are those emitted from the top of the accretion columns only , and neglect emission from the sides of the columns . further , the incident flux from the hotspot is assumed to be purely radial . the lyman continuum emission in which we are interested arises from the radiative recombination of hydrogen atoms , an intrinsically isotropic emission process , and so emission from the sides of the column could be significant . however , as the recombination process is isotropic it is reasonable to assume that the lyman continuum emitted from the sides of a column will be comparable to that emitted from the top of a column that is truncated at a height equal to its diameter . fig.[fig : phi ] shows that the diffuse lyman continuum from the top of the column decreases dramatically as the column height increases ( and the density decreases ) . consequently the lyman continuum emission from the sides of the column will only be significant over a distance comparable to the hotspot diameter : the sides of the column only produce a significant lyman continuum near to the stellar surface . as noted in section [ sec : columns ] , lyman continuum photons incident on the columns will be attenuated by a factor of @xmath48 over a distance of approximately 10@xmath34@xmath35 . 
as a result of this , the majority of the non - radial lyman continuum photons emitted by an isotropically emitting hotspot will be absorbed by the column . the only such photons not absorbed will be those emitted within a fraction of an attenuation length of the edge of the hotspot , in a direction away from the centre of the hotspot . a hotspot covering 1% of the stellar surface has a radius of 0.2@xmath35 , so less than @xmath54 of the hotspot photons are emitted within 10@xmath34@xmath35 of the hotspot edge . given this fact , and also the behaviour of the `` raw '' hotspot emission described in fig.[fig : atmos ] , we neglect this effect . the net result of these two simplifications is that in reality the emission from the bottom of any accretion column will dominate the lyman continuum emitted by the column , and accretion columns of any height will emit ionising photons at a rate comparable to that provided by the lower part of the column ( the left - hand end of the curves in fig.[fig : phi ] ) . this will increase the largest calculated ionising photon rates by a small geometric factor but still can not increase their flux to significantly greater than 10@xmath47photons s@xmath5 ; the photospheric emission will still dominate the overall lyman continuum . more importantly , much higher ionising fluxes ( of the order of 10@xmath55photons s@xmath5 ) are required to have a significant effect on disc evolution @xcite , and the uncertainties caused by the approximations we have made are negligible compared to this 10 orders - of - magnitude difference . further simplifications used in our models regard the geometry of the accretion column . our models use a column with a constant covering factor - essentially a truncated radial cone - and so the area of the outer surface at a given radius @xmath56 is proportional to @xmath57 . however in reality the column is channelled by the magnetosphere and , as explained in section [ sec : columns ] , has an area proportional to @xmath39 ; our model accretion columns are somewhat less flared than we would expect to see in reality . however the difference between the two is only significant at large radii ; small column heights , which provide the highest ionising fluxes , will not show significant deviation between the two cases . again , the net result of this is that we probably under - estimate the ionising flux by a small factor , but not by enough to alter the results significantly . as our accretion columns are radial cones they do not bend to follow the magnetosphere , as expected in more realistic magnetospheric accretion models ( eg . @xcite ) . in such a model the column would bend over to meet the accretion disc , with a curvature dependent on both the latitude of the hotspot and the strength of the magnetosphere . however , as discussed above , the emission from the bottom part of the accretion column dominates over that from the upper parts ( those affected by this curvature ) , so this simplification will not affect our results significantly . there is also the issue of an infall velocity , which our models do not address . in reality the accretion column will be falling towards the stellar surface at close to the free - fall velocity , which can be several hundred kms@xmath5 , and so this could modify the absorbing effect of the cloud . 
however the infall velocity is much less than the speed of light , and there are no strong emission lines near to the lyman break , and so we consider the impact of this effect on the emitted lyman continuum to be negligible . it should be noted that our model makes no predictions as to the behaviour of the photons emitted at wavelengths longward of the lyman break . as shown in fig.[fig : spectra ] , the emission from the top of the column longward of the lyman break is essentially identical to that from the hotspot at the base of the column . we approximate the accretion shock crudely , and so are not able to fit our models to observed spectra in a manner similar to @xcite or @xcite . however the lyman continuum emitted by our columns is insensitive to variations in the hotspot spectrum , due to the high optical depth of the columns to lyman continuum photons . consequently we find that any reasonable accretion - shock model will produce a similar lyman continuum , and that this lyman continuum is independent of the emission at longer wavelengths . we have adopted stellar parameters of @xmath18 and @xmath19 to provide direct comparisons with the models of @xcite and @xcite . however in the case of t tauri stars a radius of @xmath58 and a mass of @xmath59 would be more realistic @xcite . the result of this will be that our models over - estimate the ionising flux somewhat , due to both reduction in the energy released by accretion on to the stellar surface and also due to a reduction in the star s surface gravity . once again , however , it is unlikely that these factors are significant in comparison to the gross effects we have already considered . similarly , the use of the `` constant temperature '' hotspot to heat the column probably over - estimates both the ionising flux and heating provided by the hotspot , and thus over - estimates the diffuse lyman continuum . in effect we have constructed a `` best - case '' model , designed to produce the maximum ionising flux , and still found the ionising flux to be less than that emitted by the stellar photosphere . it seems extremely unlikely that any conceivable accretion column could produce ionising photons at a rate significantly greater than this . in an attempt to provide some constraints on the nature and magnitude of the ionising continuum emitted by t tauri stars , we have constructed models which treat the accretion shock as a hotspot on the stellar surface beneath a column of accreting material . we have modelled these columns for a variety of different accretion rates , hotspot sizes and column heights , and have found that : * a hotspot radiating like a stellar atmosphere radiates ionising photons at a rate some 3 orders of magnitude less than the corresponding blackbody . * a constant area hotspot radiating like a stellar atmosphere can only emit ionising photons at greater than photospheric rates for mass accretion rates greater than @xmath22@xmath23yr@xmath5 . such accretion rates are near the upper limit of the rates derived from observations @xcite . * photoionisation of neutral hydrogen in the accretion column attenuates the lyman continuum from any hotspot to zero over a very short length scale . the ionising photons which do emerge are due to radiative recombination of hydrogen atoms in the column , and the rate of ionising photon emission is less than the photospheric level for all of the accretion columns we have modelled . 
in short , we find that accretion shocks and columns are extremely unlikely to produce lyman continuum photons at a rate significantly greater than that expected from the stellar photosphere . the photospheric level itself is some 10 orders of magnitude below the rates required for photoionisation to affect disc evolution significantly , and so it seems that the lyman continuum emitted by an accretion shock will not be large enough to be significant in disc photoionisation models . these models have provided an attractive explanation of some observed disc properties ( eg . @xcite ) but we have shown , as suggested by @xcite , that they must be powered by something other than the accretion - shock emission . in order to produce ionising photons at a rate large enough to enable photoionisation models to provide a realistic means of disc dispersal , the central objects must sustain a source of ionising photons that is not driven by accretion from the disc . we thank bob carswell for useful discussions and advice on the use of the cloudy code . rda acknowledges the support of a pparc phd studentship . cjc gratefully acknowledges support from the leverhulme trust in the form of a philip leverhulme prize . we thank an anonymous referee for helpful comments which improved the clarity of the paper . 99 armitage p.j . , clarke c.j . , palla f. , 2003 , mnras , 342 , 1139 beckwith s.v.w . , sargent a.i . , chini r.s . , gsten r. , 1990 , aj , 99 , 924 calvet n. , gullbring e. , 1998 , apj , 509 , 802 clarke c.j . , gendrin a. , sotomayor m. , 2001 , mnras , 328 , 485 cox a.n . ( editor ) , 2000 , _ allen s astrophysical quantities _ , aip press eisner j.a . , carpenter j.m . , 2003 , apj in press ( astro - ph/0308279 ) ferland g.j . , _ a brief introduction to cloudy _ , university of kentucky department of physics and astronomy internal report gahm g.f . , fredga k. , liseau r. , dravins d. , 1979 , a&a , 73 , l4 ghosh p. , lamb f.k . , 1978 , apj , 223 , l83 gullbring e. , hartmann l. , briceo c. , calvet n. , 1998 , apj , 492 , 323 gullbring e. , calvet n. , muzerolle j. , hartmann l. , 2000 , apj , 544 , 927 haisch k.e . , lada e.a . , lada c.j . , 2001 , apj , 553 , l153 hartmann l. , calvet n. , gullbring e. , dalessio p. , 1998 , apj , 495 , 385 hollenbach d. , johnstone d. , lizano s. , shu f. , 1994 , apj , 428 , 654 hollenbach , d.j . , yorke , h.w . & johnstone , d. , 2000 , in _ protostars & planets iv _ , eds . v. mannings , a.p . boss , s.s . russell , university of arizona press , p401 imhoff c.l . , appenzeller i. , 1987 , in astrophysics and space science library , vol . 129 , _ exploring the universe with the iue satellite _ , ed . y. kondo ( dordrecht : reidel ) , 295 johns - krull c.m . , valenti j.a . , linsky j.l . , 2000 , apj , 539 , 815 kenyon s.j . , hartmann l. , 1995 , apjs , 101 , 117 kurucz r.l . , 1992 , in iau symp . 149 , stellar population of galaxies , ed . b. barbuy & a. renzini ( dordrecht : kluwer ) , 225 lamzin s.a . , 1998 , astronomy reports , 42 , 322 mannings v. , sargent a.i . , 1997 , apj , 490 , 792 matsuyama i. , johnstone d. , hartmann l. , 2003 , apj , 582 , 893 meyer m.r . , calvet n. , hillenbrand l.a . , 1997 , aj , 114 , 288 richling s. , yorke , h.w . , 1997 , a&a , 327 , 317 strom k.m . , strom s.e . , edwards s. , cabrit s. , skrutskie m.f . , 1989 , aj , 97 , 1451 wyatt m.c . , dent w.r.f . , greaves j.s . , 2003 , mnras , 342 , 876
we address the issue of the production of lyman continuum photons by t tauri stars , in an attempt to provide constraints on theoretical models of disc photoionisation . by treating the accretion shock as a hotspot on the stellar surface we show that lyman continuum photons are produced at a rate approximately three orders of magnitude lower than that produced by a corresponding black body , and that a strong lyman continuum is only emitted for high mass accretion rates . when our models are extended to include a column of material accreting on to the hotspot we find that the accretion column is extremely optically thick to lyman continuum photons . further , we find that radiative recombination of hydrogen atoms within the column is not an efficient means of producing photons with energies greater than 13.6ev , and find that an accretion column of any conceivable height suppresses the emission of lyman continuum photons to a level below or comparable to that expected from the stellar photosphere . the photospheric lyman continuum is itself much too weak to affect disc evolution significantly , and we find that the lyman continuum emitted by an accretion shock is similarly unable to influence disc evolution significantly . this result has important consequences for models which use photoionisation as a mechanism to drive the dispersal of circumstellar discs , essentially proving that an additional source of lyman continuum photons must exist if disc photoionisation is to be significant . [ firstpage ] accretion , accretion discs - circumstellar matter - planetary systems : protoplanetary discs - stars : pre - main - sequence
the quantitative understanding of structure , function , dynamics and transport of biomolecules is a fundamental theme in contemporary life sciences . geometric analysis and associated biophysical modeling have been the main workhorse in revealing the structure - function relationship of biomolecules and contribute enormously to the present understanding of biomolecular systems . however , biology encompasses over more than twenty orders of magnitude in time scales from electron transfer and ionization on the scale of femtoseconds to organism life spanning over tens of years , and over fifteen orders of magnitude in spatial scales from electrons and nuclei to organisms . the intriguing complexity and extraordinarily large number of degrees of freedom of biological systems give rise to formidable challenges to their quantitative description and theoretical prediction . most biological processes , such as signal transduction , gene regulation , dna specification , transcription and post transcriptional modification , are essentially intractable for atomistic geometric analysis and biophysical simulations , let alone _ ab - initio _ quantum mechanical descriptions . therefore , the complexity of biology and the need for its understanding offer an extraordinary opportunity for innovative theories , methodologies , algorithms and tools . the study of subcellular structures , organelles and large multiprotein complexes has become one of the major trends in structural biology . currently , one of the most powerful tools for the aforementioned systems is cryo - electron microscopy ( cryo - em ) , although other techniques , such as macromolecular x - ray crystallography , nuclear magnetic resonance ( nmr ) , electron paramagnetic resonance ( epr ) , multiangle light scattering , confocal laser - scanning microscopy , small angle scattering , ultra fast laser spectroscopy , etc . , are useful for structure determination in general @xcite . in cryo - em experiments , samples are bombarded by electron beams at cryogenic temperatures to improve the signal to noise ratio ( snr ) . the working principle is based on the projection ( thin film ) specimen scans collected from many different directions around one or two axes , and the radon transform for the creation of three - dimensional ( 3d ) images . one of major advantages of cryo - em is that it allows the imaging of specimens in their native environment . another major advantage is its capability of providing 3d mapping of entire cellular proteomes together with their detailed interactions at nanometer or subnanometer resolution @xcite . the resolution of cryo - em maps has been improved dramatically in the past two decades , thanks to the technical advances in experimental hardware , noise reduction and image segmentation techniques . by further taking the advantage of symmetric averaging , many cryo - em based virus structures have already achieved a resolution that can be interpreted in terms of atomic models . there have been a variety of elegant methods @xcite and software packages in cryo - em structural determination @xcite . most biological specimens are extremely radiation sensitive and can only sustain a limited electron dose of illumination . as a result , cryo - em images are inevitably of low snr and limited resolution @xcite . in fact , the snrs of cryo - tomograms for subcellular structures , organelles and large multi - protein complexes are typically in the neighborhood of 0.01 @xcite . 
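The projection-and-reconstruction principle mentioned above (tilt-series projections inverted via the Radon transform) can be illustrated in two dimensions with standard tools. The sketch below uses scikit-image's radon/iradon routines on a synthetic phantom and is only a toy illustration of filtered back-projection under assumed noise and tilt-range settings, not a model of any particular cryo-EM pipeline.

import numpy as np
from skimage.transform import radon, iradon

# Synthetic 2-D "specimen": a single Gaussian blob on a flat background.
size = 128
y, x = np.mgrid[:size, :size]
phantom = np.exp(-((x - 48.0)**2 + (y - 70.0)**2) / 150.0)

# Simulate a limited tilt series (projections over a restricted angular range).
theta = np.linspace(-60.0, 60.0, 61)   # degrees
sinogram = radon(phantom, theta=theta)

# Heavy Gaussian noise to mimic low-dose imaging (illustrative level only).
noisy_sinogram = sinogram + 0.5 * sinogram.std() * np.random.randn(*sinogram.shape)

# Filtered back-projection reconstruction from the noisy projections.
reconstruction = iradon(noisy_sinogram, theta=theta)
print(reconstruction.shape)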
to make the situation worse , the image contrast , which depends on the difference between electron scattering cross sections of cellular components , is also very low in most biological systems . consequently , cryo - em maps often do not contain adequate information to offer unambiguous atomic - scale structural reconstruction of biological specimens . additional information obtained from other techniques , such as x - ray crystallography , nmr and computer simulation , is indispensable to achieve subnanometer resolutions . however , for cryo - em data that do not have much additional information obtained from other techniques , the determination of what proteins are involved can be a challenge , not to mention subnanometer structural resolution . to improve the snr and image contrast of cryo - em data , a wide variety of denoising algorithms has been employed @xcite . standard techniques , such as bilateral filter @xcite and iterative median filtering @xcite have been utilized for noise reduction . additionally , wavelets and related techniques have also been developed for cryo - em noise removing @xcite . moreover , anisotropic diffusion @xcite or beltrami flow @xcite approach has been proposed for cryo - em signal recovering . however , cryo - em data denoising is far from adequate and remains a challenge due to the extremely low snrs and other technical complications @xcite . for example , one of difficulties is how to distinguish signal from noise in cryo - em data . as a result , one does not know when to stop or how to apply a threshold in an iterative noise removing process . there is a pressing need for innovative mathematical approaches to further tackle this problem . recently , persistent homology has been advocated as a new approach for dealing with big data sets @xcite . in general , persistent homology characterizes the geometric features with persistent topological invariants by defining a scale parameter relevant to topological events . the essential difference between the persistent homology and traditional topological approaches is that traditional topological approaches describe the topology of a given object in truly metric free or coordinate free representations , while persistent homology analyzes the persistence of the topological features of a given object via a filtration process , which creates a family of similar copies of the object at different spatial resolutions . technically , a series of nested simplicial complexes is constructed from a filtration process , which captures topological structures continuously over a range of spatial scales . the involved topological features are measured by their persistent intervals . persistent homology is able to embed geometric information to topological invariants so that birth " and death " of isolated components , circles , rings , loops , pockets , voids or cavities at all geometric scales can be monitored by topological measurements . the basic concept of persistent homology was introduced by frosini and landi @xcite and by robins @xcite in 1999 independently . edelsbrunner et al . @xcite introduced the first efficient computational algorithm , and zomorodian and carlsson @xcite generalized the concept . a variety of elegant computational algorithms has been proposed to track topological variations during the filtration process @xcite . 
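As a concrete illustration of the filtration idea, the sketch below builds a cubical complex from a small synthetic 3-D density map and extracts its persistence pairs. It assumes the GUDHI library and negates the map so that GUDHI's sublevel-set filtration corresponds to the high-to-low density sweep used for cryo-EM maps in this paper; it is a minimal toy example, not the authors' implementation.

import numpy as np
import gudhi

# Small synthetic density map: two Gaussian blobs plus weak background noise.
grid = np.linspace(-1.0, 1.0, 32)
X, Y, Z = np.meshgrid(grid, grid, grid, indexing="ij")
density = (np.exp(-((X - 0.4)**2 + Y**2 + Z**2) / 0.02)
           + np.exp(-((X + 0.4)**2 + Y**2 + Z**2) / 0.02)
           + 0.01 * np.random.randn(*X.shape))

# GUDHI filters by increasing cell value, so negate to sweep from high
# density to low density (superlevel sets of the original map).
cc = gudhi.CubicalComplex(dimensions=list(density.shape),
                          top_dimensional_cells=(-density).flatten())
diagram = cc.persistence()   # list of (dimension, (birth, death)) pairs

# Betti-0 bars: connected components appearing as the threshold is lowered.
b0 = cc.persistence_intervals_in_dimension(0)
print("number of beta_0 bars:", len(b0))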
often , the persistent diagram can be visualized through barcodes @xcite , in which various horizontal line segments or bars are the homology generators lasted over filtration scales . it has been applied to a variety of domains , including image analysis @xcite , image retrieval @xcite , chaotic dynamics verification @xcite , sensor network @xcite , complex network @xcite , data analysis @xcite , computer vision @xcite , shape recognition @xcite and computational biology @xcite . the concept of persistent homology has also been used for noise reduction . it is generally believed that short lifetime events ( or bars ) are of less importance and thus regarded as `` noise '' while long lifetime ones are considered as `` topological signals '' @xcite , although this idea was challenged in a recent work @xcite . in topological data analysis , pre - processing algorithms are needed to efficiently remove these noise . depending on the scale of a feature , a simple approach is to pick up a portion of landmark points as a representative of topological data @xcite . the points can be chosen randomly , spatially evenly , or from extreme values . more generally , certain functions can be defined as a guidance for node selection to attenuate the noise effect , which is known as thresholding . clustering algorithms with special kernel functions can also be employed to recover topological signal @xcite . all of these methods can be viewed as a process of data sampling without losing the significant topological features . they rely heavily on the previous knowledge of the geometric or statistic information . in contrast , topological simplification @xcite , which is to remove the simplices and/or the topological attributes that do not survive certain threshold , focuses directly on the persistence of topological invariant . in contrast , gaussian noise is known to generate a band of bars distributes over a wide range in the barcode representation @xcite . thank to the pairing algorithm , persistence of a homology group is measured through an interval represented by a simplex pair . if the associated topological invariant is regarded less important , simplices related to this simplex pair are reordered . this approach , combined with morse theory , proves to be a useful tool for denoising @xcite , as it can alters the data locally in the region defined as noise . additionally , statistical analysis has been carried out to provide confidence sets for persistence diagram . however , persistent homology has not been utilized for cryo - em data noise reduction , to our knowledge . a large amount of experimental data for macroproteins and protein - protein complexes has been obtained from cryo - em . to analyze these structural data , it is a routine procedure to fit them with the available high - resolution crystal structures of their component proteins . this approach has been shown to be efficient for analyzing many structures and has been integrated into many useful software packages such as chimera @xcite . however , this docking process is limited by data quality . for some low resolution data , which usually also suffer from low snrs , there is enormous ambiguity in structure fitting or optimization , i.e. , a mathematically ill - posed inverse problem . sometimes , high correlation coefficients can be attained simultaneously in many alternative structures , while none of them proves to be biologically meaningful . 
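The cross-correlation score that drives such fitting is simple to state. A minimal version (Pearson-style correlation between an experimental map and a simulated map sampled on the same grid, with hypothetical array names) is sketched below; real packages such as Chimera add masking, resolution filtering and rigid-body search on top of this, which the sketch deliberately omits.

import numpy as np

def correlation_coefficient(exp_map: np.ndarray, model_map: np.ndarray) -> float:
    """Normalised cross-correlation between two density maps on the same grid.
    A value near 1 means the bulk density distributions agree; as noted above,
    several different models can score almost identically."""
    a = exp_map - exp_map.mean()
    b = model_map - model_map.mean()
    return float(np.sum(a * b) / np.sqrt(np.sum(a * a) * np.sum(b * b)))

# Hypothetical usage with two maps of identical shape:
# cc = correlation_coefficient(emd_map, simulated_map)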
basically , the fitting or optimization emphasizes more on capturing `` bulk '' regions , which is reasonable as greater similarities in density distributions imply higher possibility . however , little attention is paid to certain small `` linkage '' regions , which play important roles in biological system especially in macroproteins and protein - protein complexes . different linkage parts generate different connectivity , and thus directly influence biomolecular flexibility , rigidity , and even its functions . since persistent homology is equally sensitive to both bulk regions and small linkage regions , it is able to make a critical judgment on the model selection in structure determination , however , nothing has been reported on persistent homology based solution to ill - posed inverse problems , to our knowledge . although persistent homology has been applied to a variety of fields , the successful use of persistent homology is mostly limited to characterization , identification and analysis ( cia ) . indeed , persistent homology has seldom employed for quantitative prediction . recently , we have introduced molecular topological fingerprints ( mtfs ) based on persistent homology analysis of molecular topological invariants of biomolecules @xcite . we have utilized mtfs to reveal the topology - function relationship of macromolecules . it was found that protein flexibility and folding stability are strongly correlated to protein topological connectivity , characterized by the persistence of topological invariants ( i.e. , accumulated bar lengths ) @xcite . most recently , we have employed persistent homology to analyze the structure and function of nano material , such as nanotubes and fullerenes . the mtfs are utilized to quantitatively predict total curvature energies of fullerene isomers @xcite . the overall objective of this work is to explore the utility of persistent homology for cryo - em analysis . first , we propose a topology based algorithm for cryo - em noise reduction and clean - up . we study the topological fingerprint or topological signature of noise and its relation to the topological fingerprint of cryo - em structures . we note that the histograms of topological invariants of the gaussian random noise have gaussian distributions in the filtration parameter space . contrary to the common belief that short barcode bars correspond to noise , it is found that there is an inverse relation between the snr and the band widths of topological invariants , i.e. , the lower snr , the larger barcode band width is . therefore , at a low snr , noise can produce long persisting topological invariants or bars in the barcode presentation . moreover , for cryo - em data of low snrs , intrinsic topological features of the biomolecular structure are hidden in the persistent barcodes of noise and indistinguishable from noise contributions . to recover the topological features of biomolecular structures , geometric flow equations are employed in the present work . it is interesting to note that topological features of biomolecular structures persist , while the topological fingerprint of noise moves to the right during the geometric flow iterations . as such , `` signal '' and noise separate from each other during the geometric flow based denoising process and make it possible to prescribe a precise noise threshold for the noise removal after certain iterations . 
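The geometric-flow denoising referred to above can take several forms, and the precise flow equations are not spelled out in this passage. The sketch below therefore uses a plain Perona-Malik-style anisotropic diffusion step on a 3-D map as a stand-in, with the number of iterations acting as the knob that is monitored topologically; it is an assumption-laden illustration rather than the authors' equation.

import numpy as np

def anisotropic_diffusion_step(u, dt=0.1, kappa=0.5):
    """One explicit Perona-Malik-style step on a 3-D map (stand-in for the
    geometric flow used for denoising; kappa controls edge preservation)."""
    gx, gy, gz = np.gradient(u)
    gmag2 = gx**2 + gy**2 + gz**2
    g = 1.0 / (1.0 + gmag2 / kappa**2)     # diffusivity: small across sharp edges
    div = (np.gradient(g * gx, axis=0)
           + np.gradient(g * gy, axis=1)
           + np.gradient(g * gz, axis=2))  # divergence of g * grad(u)
    return u + dt * div

# Hypothetical usage: iterate a handful of times and recompute the barcode
# after each pass to watch the noise band drift away from the structural bars.
# for _ in range(10):
#     noisy_map = anisotropic_diffusion_step(noisy_map)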
we demonstrate the efficiency of our persistent homology controlled noise removal algorithm for both synthetic data and cryo - em density maps . additionally , we introduce persistent homology as a new strategy for resolving the ill - posed inverse problem in cryo - em structure determination . although the structure determination of microtubule data emd 1129 is used as an example , similar problems are widespread in other intermediate resolution and low resolution cryo - em data . as emd 1129 is contaminated by noise , a preprocess of denoising is carried out by using our persistent homology controlled geometric flow algorithm . a helix backbone is obtained for the microtubule intermediate structure . based on the assumption that the voxels with high electron density values are the centers of tubulin proteins , we construct three different microtubule models , namely a monomer model , a two - monomer model , and a dimer model . we have found that all three models give rise to essentially the same high correlation coefficients , i.e. , 0.9601 , 0.9607 and 0.9604 , with the cryo - em data . this ambiguity in structure fitting is very common with intermediate and low resolution data . fortunately , after our topology based noise removal , the topology fingerprint of microtubule data is very unique , which is true for all cryo - em data or data generated by using other molecular imaging modalities . it is interesting to note that although three models offer the same correlation coefficients with the cryo - em data , their topological fingerprints are dramatically different . it is found that the topological fingerprint of the microtubule intermediate structure ( emd 1129 ) can be captured only when two conditions are simultaneously satisfied : first , there must exist two different types of monomers , and additionally , two type of monomers from dimers . therefore , based on topological fingerprint analysis , we can determine that only the third model is a correct model for microtubule data emd 1129 . the rest of this paper is organized as follows . the essential methods and algorithms for geometric and topological modelings of biomolecular data are presented in section [ methods ] . approaches for geometric modeling , which are necessary for topological analysis , are briefly discussed . methods for persistent homology analysis are described in detail . we illustrate the use of topological methods with both synthetic volumetric data and cryo - em density maps . their persistence of topological invariants is represented by barcodes . the geometric interpretation of the topological features is given . section [ sec : ph_noise ] is devoted to the persistent homology based noise removal . the characterization of gaussian noise is carried out over a variety of snrs to understand noise topological signature . based on this understanding , we design a persistent homology monitored and controlled algorithm for noise removal , which is implemented via the geometric flow . persistent homology guided denoising is applied to the analysis of a supramolecular filamentous complex . in section [ sec : ph_microtubule ] , we demonstrate topology - aided structure determination of microtubule cryo - em data . several aspects are considered including helix backbone evaluation , coarse - grained modeling and topology - aided structure design and evaluation . we show that topology is able to resolve ill - posed inverse problem . this paper ends with a conclusion . 
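One way to make the fingerprint comparison described above quantitative, rather than purely visual as in the barcode plots, is to measure a distance between persistence diagrams. The sketch below uses GUDHI's bottleneck distance with hypothetical diagram variables for the experimental map and the candidate models; the paper itself compares the barcodes by inspection, so this is an optional add-on rather than its method.

import gudhi

def rank_models_by_topology(exp_diagram, model_diagrams):
    """Rank candidate models by bottleneck distance of their H1 persistence
    diagrams to the experimental one (smaller means topologically closer).
    Diagrams are lists of (birth, death) pairs for a fixed homology dimension."""
    scored = [(gudhi.bottleneck_distance(exp_diagram, d), name)
              for name, d in model_diagrams.items()]
    return sorted(scored)

# Hypothetical usage, with diagrams extracted as in the cubical-complex sketch:
# ranking = rank_models_by_topology(exp_h1, {"monomer": m1_h1,
#                                            "two-monomer": m2_h1,
#                                            "dimer": m3_h1})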
persistent homology has been utilized to analyze biomolecular data , which are collected by different experimental means , such as macromolecular x - ray crystallography , nmr , epr , etc . due to their different origins , these data may be available in different formats , which requires appropriate topological tools for their analysis . additionally , their quality , i.e. , resolution and snr varies for case to case , and thus , a preprocessing may be required . moreover , although biomolecular structures are not a prerequisite for persistent homology analysis , the understanding of biomolecular structure , function and dynamics is crucial for the interpretation of topological results . as a consequence , appropriate geometric modeling @xcite is carried out in a close association with topological analysis . furthermore , information from geometric and topological modelings is in turn , very valuable for data preprocessing and denoising . finally , topological information is shown to be crucial for geometric modeling , structural determination and ill - posed inverse problems . [ cols="^ " , ] in the @xmath0 panel of fig . [ fig:1129_barcodes_polish ] , bars can be roughly groups into three parts from the top to the bottom , i.e. , an irregular hair - like " part on the top , a narrow regular body " part in the middle and a large regular base " part in the bottom . topologically , these parts represent different components in the microtubule intermediate structure . the irregular hair - like " part corresponds to the partial monomer structures located on the top and the bottom boundaries of the structure . as can be seen in fig . [ fig : microtubule_fitting ] , each monomer has lost part of the structure at the boundary regions . the regular body " and base " parts are basically related to two types of monomers in the middle region where the structure is free of boundary effect . from the barcodes , it can be seen that body " part has a later birth " time and earlier death " time compared with the base " barcode part . this is due to the reason that this type of monomers has relative lower electron density . as the filtration is defined to go from highest electron density values to lowest ones , their corresponding barcodes appear later . their earlier death time , however , is due to the reason that they form dimers with the other type of monomers represented by the base " barcode part . it can be derived from these nonuniform behavior that monomers are not equally distributed along the helix backbone structure . instead , two adjacent different types of monomers form a dimer first and then all these dimers simultaneously connect with each other as the filtration goes on . moreover , from the analysis in the previous section , it is obvious to see that the body " and base " parts are topological representations of type ii " monomer and type i " monomer , respectively . for the @xmath1 panel of fig . [ fig:1129_barcodes_polish ] , there also exists a consistent pattern when the denoising process passing a certain stage . two distinctive types of barcodes can be identified in the fingerprint , i.e. , a shorter band of barcodes on the top and a longer band of bars on the bottom . topologically , these @xmath1 bars correspond to the rings formed between two adjacent helix circles of monomers or dimers . during the filtration , dimers are formed between type i " and type ii " monomers and soon after that , all dimers connect with each other and form the helix backbone . 
as the filtration goes on , type i " monomers from the upper helix circle first connect with type ii " monomers at the lower circle . geometrically , this means six monomers , three ( i - ii - i " ) from the upper layer and three ( ii - i - ii " ) from the lower layer , form a circle . as the filtration goes further , this circle evolves into two circles when two middle monomers on two layers also connect . however , these two circles do not disappear simultaneously . instead , one persists longer than the other . this entire process generates the unique topological fingerprint in @xmath1 barcodes . the topological fingerprint we extracted from the denoising process can be used to guide the construction and evaluation of our microtubule models . to this end , we analyze the topological features of three theoretical models . our persistent homology results for three models are demonstrated in figs . [ fig:1129_fit_barcodes ] * a * , * b * and * c * , respectively . it can be seen that all the three models are able to capture the irregular hair " region in their @xmath0 barcode chart . from the topological point of view , the first model is the poorest one . it fails to capture the regular fingerprint patterns in both @xmath0 and @xmath1 panels of the original cryo - em structure in fig . [ fig:1129_barcodes_polish ] * d*. with two different weight functions to represent two types of monomers , the second model delivers a relatively better topological result . it is able to preserve part of the difference between type i " and type ii " barcodes in the @xmath0 panel . in @xmath1 panel , some nonuniform barcodes emerges . the persistent homology results are further improved in the third model when the intra - dimer and inter - dimer interactions are considered . in our third model , fingerprint patterns of the cryo - em structure in both @xmath0 and @xmath1 panels of fig . [ fig:1129_barcodes_polish ] * d * are essentially recovered by those of fig . [ fig:1129_fit_barcodes ] * c*. even though their scales are different , their shapes are strikingly similar . the essential topological features that are associated with major topological transitions of the original cryo - em structure are illustrated in figs . [ fig:1129_homology ] * a@xmath2 * , * a@xmath3 * , * a@xmath4 * and * a@xmath5*. as shown in figs . [ fig:1129_homology ] * b@xmath2 * , * b@xmath3 * , * b@xmath4 * and * b@xmath5 * , these features have been well - preserved during the denoising process . our best predicted model is depicted in figs . [ fig:1129_homology ] * c@xmath2 * , * c@xmath3 * , * c@xmath4 * and * c@xmath5*. in these figure labels , subscripts @xmath6 and @xmath7 denote four topological transition stages in the filtration process , namely hetero - dimmer formation , large circles formation , evolution of one large circle into two circles , and finally death of one of two circles . by the comparison of denoising results ( figs . * b@xmath2 , b@xmath3 , b@xmath4 * and * a@xmath5 * ) with original structures ( figs . * a@xmath2 , a@xmath3 , a@xmath4 * and * a@xmath5 * ) , it is seen that , in the noise reduction process , although some local geometric and topological details are removed , fundamental topological characteristics are well preserved . as illustrated in fig . 
As illustrated in Fig. [fig:1129_barcodes_polish], using the persistent homology description, these fundamental topological characteristics are well preserved in the topological persistence patterns, which are further identified as fingerprints of the microtubule intermediate structure. We believe that topological fingerprints are crucial to the characterization, identification, and analysis of the biological structure. As demonstrated in Figs. [fig:1129_homology]c1, c2, c3 and c4, once our model successfully reproduces the topological fingerprints, the simulated structure is able to capture the essential topological characteristics of the original one. Moreover, through the analysis in Section [sec:mode_evaluation], it can be seen that two conditions are essential to reproduce the topological fingerprint of EMD-1129. The first is the creation of two types of monomers. The second is the differentiation of intra-dimer and inter-dimer interactions. Biologically, these requirements mean that: 1) there are two types of monomers, i.e., α-tubulin monomers and β-tubulin monomers; and 2) the intra-dimer and inter-dimer interactions of the hetero-dimers should behave differently.
It should also be noticed that a higher correlation coefficient may not guarantee the success of the model, especially when the original data are of low resolution and low SNR. As can be seen in Section [sec:three_models], our three theoretical models have very similar fitting coefficients; the second model even has a slightly higher cross-correlation coefficient. However, only the third model is able to reproduce the essential topological features of the original cryo-EM data. This happens because topological invariants, i.e., connected components, circles, loops, holes or voids, tend to be very sensitive to "tiny" linkage parts, which are almost negligible in the density fitting process compared to the major body part. We believe these linkage parts play important roles in biological systems, especially in macroproteins and protein-protein complexes. Different linkage parts generate different connectivity and thus can directly influence biomolecular flexibility, rigidity, and even function. By associating topological features with geometric measurements, our persistent homology analysis is able to distinguish these connectivity parts. Therefore, persistent homology is able to play a unique role in protein design, model evaluation and structure determination.
Cryo-electron microscopy (cryo-EM) is a major workhorse for the investigation of subcellular structures, organelles and large multiprotein complexes. However, cryo-EM techniques and algorithms are far from mature, due to limited sample quality and/or stability, low signal-to-noise ratio (SNR), low resolution, and the high complexity of the underlying molecular structures. Persistent homology is a new branch of topology that is known for its potential in the characterization, identification and analysis (CIA) of big data. In this work, persistent homology is, for the first time, employed for cryo-EM data CIA. Methods and algorithms for geometric and topological modeling are presented. Here, geometric modeling, such as the generation of density maps for proteins or other molecules, is employed to create known data sets for validating the topological modeling algorithms.
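Since the argument above is that near-identical cross-correlation coefficients can hide very different topologies, it may help to see what such a fitting score looks like. The sketch below computes a normalized cross-correlation between a target density map and a model map that differ only by a faint linkage of density; it is a generic illustration, under the assumption that the fitting score has this standard form, and is not the authors' optimization code.

```python
# Generic illustration: normalized cross-correlation between two density maps.
# Assumes the fitting score is the standard Pearson-style correlation; this is
# not the authors' optimization code.
import numpy as np


def cross_correlation(model, target):
    """Normalized cross-correlation of two equally shaped density arrays."""
    m = model - model.mean()
    t = target - target.mean()
    return float((m * t).sum() / np.sqrt((m * m).sum() * (t * t).sum()))


# Toy example: two maps that differ only by a thin "linkage" of weak density.
target = np.zeros((30, 30, 30))
target[5:12, 12:18, 12:18] = 1.0     # blob A
target[18:25, 12:18, 12:18] = 1.0    # blob B
target[12:18, 14:16, 14:16] = 0.1    # faint bridge connecting A and B

model_without_bridge = target.copy()
model_without_bridge[12:18, 14:16, 14:16] = 0.0

print(cross_correlation(model_without_bridge, target))  # close to 1.0

# Topologically, however, the target is one connected component at low density
# thresholds while the bridge-free model is two, so their beta_0 barcodes differ.
```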
We demonstrate that cryo-EM density maps and fullerene density data can be effectively analyzed by using persistent homology. Since topology is very sensitive to noise, an understanding of the topological signature of noise is a must in cryo-EM CIA. We first investigate the topological fingerprint of Gaussian noise. We reveal that for Gaussian noise, the topological invariants, i.e., the $\beta_0$, $\beta_1$ and $\beta_2$ numbers, all exhibit a Gaussian distribution in the filtration space, i.e., the space of volumetric density isovalues. At a low SNR, signal and noise are inseparable in the filtration space. However, after denoising with the geometric flow method, there is a clear separation between signal and noise for the various topological invariants. As such, a simple threshold can be prescribed to effectively remove noise. For the case of low SNR, the understanding of the noise characteristics in the filtration space enables us to use persistent homology as an efficient means to monitor and control the noise removal process. This new strategy for noise reduction is called topological denoising.
Persistent homology has been applied to the theoretical modeling of a microtubule structure (EMD-1129). The backbone of the microtubule has a helix structure. Based on the helix structure, we propose three theoretical models. The first model assumes that protein monomers form the helix structure. The second model adopts two types of protein monomers evenly distributed along the helix chain. The last model utilizes a series of protein dimers along the helix chain. These models are fitted to the experimental data by the least-squares optimization method. It is found that all three models give rise to similarly high correlation coefficients with the experimental data, which indicates that the structural optimization is ill-posed. However, the topological fingerprints of the three models are dramatically different. In the denoising process, the cryo-EM data of the microtubule structure demonstrate a consistent pattern, which can be recognized as the intrinsic topological fingerprint of the microtubule structure. By careful examination of the fingerprint, we reveal two essential topological characteristics which discriminate the protein dimers from the monomers. As such, we conclude that only the third model, i.e., the protein dimer model, is able to capture the intrinsic topological characteristics of the cryo-EM structure and must be the best model for the experimental data. It is believed that the present work offers a novel topology-based strategy for resolving ill-posed inverse problems.
This work was supported in part by NSF Grants DMS-1160352 and IIS-1302285, NIH Grant R01GM-090208 and the MSU Center for Mathematical Molecular Biosciences Initiative. The authors acknowledge the Mathematical Biosciences Institute for hosting valuable workshops.
S. Q. Zheng, B. Keszthelyi, E. Branlund, J. M. Lyle, M. B. Braunfeld, J. W. Sedat, and D. A. Agard. An integrated software suite for real-time electron microscopic tomographic data collection, alignment, and reconstruction. 157:138-147, 2007.
T. Hrabe, Y. Chen, S. Pfeffer, L. K. Cuellar, A. V. Mangold, and F. Forster. A Python-based toolbox for localization of macromolecules in cryo-electron tomograms and subtomogram analysis. 178:178-188, 2012.
R. S. Pantelic, C. Y. Rothnagel, R. Huang, D. Muller, D. Woolford, M. J. Landsberg, A. McDowall, B. Pailthorpe, P. R. Young, J. Banks, B. Hankamer, and G. Ericksson. The discriminative bilateral filter: an enhanced denoising filter for electron microscopy data. 155:395-408, 2006.
P. van der Heide, X. P. Xu, B. J. Marsh, D. Hanein, and N. Volkmann. Efficient automatic noise reduction of electron tomographic reconstructions based on iterative median filtering. 158:196-204, 2007.
V. de Silva and G. Carlsson. Topological estimation using witness complexes. In _Proceedings of the First Eurographics Conference on Point-Based Graphics_, pages 157-166. Eurographics Association, 2004.
R. J. Adler, O. Bobrowski, M. S. Borman, E. Subag, and S. Weinberger. Persistent homology for random fields and complexes. In _Borrowing Strength: Theory Powering Applications - A Festschrift for Lawrence D. Brown_, volume 6, pages 124-143. Institute of Mathematical Statistics, 2010.
M. Lysaker, A. Lundervold, and X. C. Tai. Noise removal using fourth-order partial differential equation with application to medical magnetic resonance images in space and time. 12(12):1579-1590, 2003.
Q. Qiao, C. H. Yang, C. Zheng, L. Fontán, L. David, X. Yu, C. Bracken, M. Rosen, A. Melnick, E. H. Egelman, and H. Wu. Structural architecture of the CARMA1/BCL10/MALT1 signalosome: nucleation-induced filamentous assembly. 51(6):766-779, 2013.
In this work, we introduce persistent homology for the analysis of cryo-electron microscopy (cryo-EM) density maps. We identify the topological fingerprint or topological signature of noise, which is widespread in cryo-EM data. For low signal-to-noise ratio (SNR) volumetric data, intrinsic topological features of biomolecular structures are indistinguishable from noise. To remove noise, we employ geometric flows, which are found to preserve the intrinsic topological fingerprints of cryo-EM structures and diminish the topological signature of noise. In particular, persistent homology enables us to visualize the gradual separation of the topological fingerprints of cryo-EM structures from those of noise during the denoising process, which gives rise to a practical procedure for prescribing a noise threshold to extract cryo-EM structure information from noise-contaminated data after a certain number of iterations of the geometric flow equation. To further demonstrate the utility of persistent homology for cryo-EM data analysis, we consider a microtubule intermediate structure (EMD-1129). Three helix models, an α-tubulin monomer model, an α- and β-tubulin model, and an α- and β-tubulin dimer model, are constructed to fit the cryo-EM data. The least-squares fitting leads to similarly high correlation coefficients, which indicates that structure determination via optimization is an ill-posed inverse problem. However, these models have dramatically different topological fingerprints. In particular, linkages or connectivities that discriminate one model from another play little role in traditional density fitting or optimization, but are very sensitive and crucial to topological fingerprints. The intrinsic topological features of the microtubule data are identified after topological denoising. By comparing the topological fingerprints of the original data with those of the three models, we found that the third model is topologically favored. The present work offers new, persistent homology based strategies for topological denoising and for resolving ill-posed inverse problems. Key words: cryo-EM, topological signature, geometric flow, topological denoising, topology-aided structure determination.
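To make the noise-signature idea above concrete, the following sketch counts connected components ($\beta_0$) of a pure Gaussian-noise volume as a function of the density isovalue. It uses scipy's connected-component labeling as a stand-in for a full persistent homology code and random data of an arbitrary size; it is not the authors' implementation.

```python
# Illustrative sketch: beta_0 of a Gaussian-noise volume across density isovalues.
# Connected-component labeling is used as a stand-in for a persistence code;
# not the authors' implementation.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(42)
noise = rng.normal(loc=0.0, scale=1.0, size=(48, 48, 48))

# Sweep isovalues from high density to low, mirroring the filtration direction.
isovalues = np.linspace(noise.max(), noise.min(), 60)
beta0_profile = []
for c in isovalues:
    mask = noise >= c                      # superlevel set at isovalue c
    _, num_components = ndimage.label(mask)
    beta0_profile.append(num_components)

# For Gaussian noise the beta_0 profile is itself bell-shaped in the filtration
# space; after denoising, structure-related components persist at isovalues
# where this noise profile has already decayed, so a threshold can separate them.
peak = isovalues[int(np.argmax(beta0_profile))]
print(f"beta_0 peaks at isovalue {peak:.2f} with {max(beta0_profile)} components")
```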
KUALA LUMPUR (Reuters) - A piece of a wing that washed up on an Indian Ocean island beach last week was part of the wreckage of Malaysian Airlines flight MH370, Malaysia said on Thursday, confirming the discovery of the first trace of the plane since it vanished last year. “Today, 515 days since the plane disappeared, it is with a heavy heart that I must tell you that an international team of experts have conclusively confirmed that the aircraft debris found on Reunion Island is indeed from MH370,” Prime Minister Najib Razak said in an early morning televised address. “I would like to assure all those affected by this tragedy that the government of Malaysia is committed to do everything within our means to find out the truth of what happened,” Najib said. The announcement, by providing the first direct evidence that the plane crashed in the ocean, closes a chapter in one of the biggest mysteries in aviation history but still gives families of the 239 victims little clue as to why. “It’s not the end,” said Jacquita Gonzales, who lost her husband Patrick Gomes, a flight attendant on board the aircraft. “Although they found something, you know, it’s not the end. They still need to find the whole plane and our spouses as well. We still want them back,” she said. The airline described the discovery as “a major breakthrough for us in resolving the disappearance of MH370.” “We expect and hope that there would be more objects to be found which would be able to help resolve this mystery,” it said in a statement issued as soon as Najib had spoken. The fragment of wing known as a flaperon was flown to mainland France after being found last week covered in barnacles on a beach on France’s Indian Ocean island of Reunion. Despite the Malaysian confirmation it was part of MH370, prosecutors in France stopped short of declaring they were certain, saying only that there was a “very strong presumption” that it was the case. Paris Prosecutor Serge Mackowiak said this was based on technical data supplied by both the manufacturer and airline but gave no indication that experts had discovered a serial number or unique markings that would put the link beyond doubt. Boeing representatives confirmed that the flaperon came from a 777 jet “due to its technical characteristics, mentioning the color, the structure of the joints,” he said. Secondly, Malaysia Airlines was able to provide documentation of the actual aircraft used on flight MH370. [Photo: French gendarmes and police inspect a large piece of plane debris which was found on the beach in Saint-Andre, on the French Indian Ocean island of La Reunion, July 29, 2015. REUTERS/Zinfos974/Prisca Bigot] “On this basis, it was possible for a connection to be made between the object examined by the experts and the flaperon of the Boeing 777 of MH370,” Mackowiak told reporters in Paris. He said more analysis would be carried out on Thursday, but he could not say when further results would be announced. A fragment of luggage also found in Reunion would be examined by French police as soon as possible, he added. “YET TO BEGIN” Investigators looking at the wing flap are likely to start by putting thin slices of metal under a high-powered microscope, to see subtle clues in the metal’s crystal structure about how it deformed on impact, said Hans Weber, president of TECOP International, Inc., an aerospace technology consulting firm based in San Diego, California. 
Later, investigators would probably clean the piece and “do a full physical examination, using ultrasonic analysis before they open it up to see if there’s any internal damage,” Weber said. “That might take quite awhile. A month or months.” John Goglia, a former board member of the U.S. National Transportation Safety Board, told Reuters: “The real work is yet to begin”. “They will identify everything they can from the metal: damage, barnacles, witness marks on the metal. They’re going to look at the brackets (that held the flaperon in place) to see how they broke. From that they can tell the direction and attitude of the airplane when it hit. There’s a lot to be told from the metal.” However, experts said the cause of the disaster may remain beyond the reach of investigators until other debris or data and cockpit voice recorders are recovered. “A wing’s moving surfaces give you far fewer clues than bigger structures like the rudder, for example. As a single piece of evidence, it is likely to reveal quite little other than it comes from MH370,” said a former investigator who has participated in several international probes of crashes at sea. The examination of the part is being carried out under the direction of a judge at an aeronautical test facility run by the French military at Balma, a suburb of the southwestern city of Toulouse, and witnessed by Malaysian and other officials. Officials from the United States, Australia, China, Britain and Singapore as well as manufacturer Boeing were also on hand. Boeing said it was providing technical expertise. Flight MH370 disappeared on March 8 last year while en route from Kuala Lumpur to Beijing. It is believed to have crashed in the southern Indian Ocean, about 3,700 km (2,300 miles) from Reunion. The Boeing 777 was minutes into its scheduled flight when it disappeared from civil radar. Investigators believe that someone may have deliberately switched off the aircraft’s transponder, diverted it thousands of miles off course, and deliberately crashed into the ocean off Australia. A $90 million hunt along a rugged 60,000 sq km patch of sea floor 1,600 km (1,000 miles) west of the Australian city of Perth has yielded nothing. The search has been extended to another 60,000 sq km (23,000 sq miles) and Malaysian and Australian authorities say this will cover 95 percent of MH370’s flight path. ||||| The piece of a plane wing that washed up on an island in the Indian Ocean, he announced, was indeed part of missing Malaysia Airlines Flight 370. "It is my hope that this confirmation, however tragic and painful," Prime Minister Najib Razak said, "will at least bring certainty to the families and loved ones of the 239 people on board MH370." But a top French prosecutor was slightly less definite when he stepped up to a podium in his country an hour later. There are "very strong presumptions" that the part belongs to the missing Boeing 777, Paris Deputy Prosecutor Serge Mackowiak said, adding that more testing will be done to prove it conclusively. France, which already had opened a criminal investigation into the plane's disappearance, has been drawn deeper into the matter after the plane part's discovery last month on Reunion island, a remote part of its overseas territory. Investigators at a specialized laboratory in Toulouse are examining it. Even with tests ongoing, analysts said the Malaysian government's highly anticipated announcement marks a key step in the investigation into what happened to the plane. 
"I was left somewhat confused and, frankly, a little angry and dismayed," said K.S. Narendran, whose wife was one of the passengers. Authorities announced their conclusions, Narendran said, without detailing their findings. "I didn't hear facts. I didn't hear the basics. I heard nothing," he said, "and so it leaves me wondering whether there is a foregone conclusion and everyone is racing for the finish." Investigators analyze debris When the debris -- a part of a wing known as a flaperon -- washed up July 29 on Reunion island, its discovery was considered possibly the first physical evidence that might help shed light on a mystery that has vexed even the most seasoned aviation experts: How could a commercial airplane just vanish? [Photo gallery: MH370 debris discovered on Reunion Island. French police carry and inspect the debris on July 29; experts say the metallic object may be a piece of a moving wing surface, known as a flaperon, from a Boeing 777; another piece of debris resembles remnants of a suitcase.] On Wednesday, investigators met at a specialized laboratory near Toulouse to begin examining the part. Their work took hours, and Najib made the announcement very early Thursday, Kuala Lumpur time, 515 days since the flight bound for Beijing from the Malaysian capital disappeared with 239 people aboard. Shortly before he spoke, Malaysia Airlines sent a message to victims' families saying a "major announcement" that the flaperon was from the missing plane was imminent. "This has been confirmed jointly by the French Authorities, Bureau d'Enquetes et d'Analyses pour la Securites de I'Aviation Civile (BEA), the Malaysian Investigation Team, Technical Representative from PRC and Australian Transportation Safety Bureau (ATSB)," the airline's statement said. The French prosecutor, who's involved in a criminal probe that country launched because four French nationals were aboard the flight, said the flaperon matches a Boeing 777, and the characteristics of the part match the technical specifications provided by Malaysia Airlines for that part of the missing aircraft. But he phrased his assessment differently from the Malaysian Prime Minister, saying the analysis of the flaperon would continue in order to provide "complete and reliable information." It sounded "less certain," said Mary Schiavo, a CNN aviation analyst and former inspector general for the U.S. Department of Transportation. "Really, we didn't get too much more than what Boeing already told us from looking at the pictures," she said. "So I was actually a little disappointed, thinking what the families must think on hearing that." In a "tug of war among nations," she said, the passengers' families seem to be stuck in the middle. 
[Photo gallery: Remembering the passengers of MH370. Snapshots of those on board shared with CNN, including couples, young professionals, a flight attendant and a 76-year-old calligrapher.] Relatives of those on board have said real closure won't come until their family members' remains have been recovered and the truth about what happened to the plane is established. Progress on those fronts is unlikely to be made unless the Australian-led underwater hunt locates the aircraft's wreckage and flight recorders somewhere in the huge southern Indian Ocean search area, which covers an area bigger than the U.S. state of Pennsylvania. Global effort to solve mystery Malaysia, the country where MH370 began its journey and whose flag it was carrying, is in charge of the overall investigations. But there are a number of other countries involved. French authorities also opened their own criminal investigation last year into possible manslaughter and hijacking in the loss of MH370. 
Officials from Australia earlier said they thought it was likely that the Boeing wing component was from MH370 -- no other 777 aircraft was believed to have gone missing in the Indian Ocean. Australia is overseeing the underwater search for the wreckage because the plane is believed to have gone down far off its western coast, in a remote part of the southern Indian Ocean. China, which had the largest number of citizens on the plane, has been involved in decisions about the search for the plane. [Photo gallery: The search for MH370. 43 images spanning the two years since the flight vanished on March 8, 2014, from the initial air and sea search in the Gulf of Thailand and South China Sea to the underwater hunt in the southern Indian Ocean and the debris later found on Reunion Island, Mozambique and Pemba Island.] U.S. and British government agencies -- as well as experts from Boeing and the satellite company Inmarsat -- have contributed to the investigations. 
The Toulouse lab previously examined wreckage from Air France Flight 447, a passenger jet that went down in the Atlantic Ocean en route from Rio de Janeiro to Paris in 2009. The remnants of a suitcase found on Reunion the day after the flaperon was discovered have been sent to a lab outside Paris for analysis. Debris won't change underwater search The wing part found on Reunion won't prompt a rethink of the search area, Australian officials say. [Video: From disappearance to debris, CNN's coverage of MH370.] "Because of the turbulent nature of the ocean, and the uncertainties of the modeling, it is impossible to use the La Reunion finding to refine or shift the search area," the Australian Transport Safety Bureau said in a report published Tuesday, citing the country's national science agency. That the debris drifted thousands of miles to the west of the underwater search area is consistent with ocean drift models, Australian officials say. The ATSB report admitted, though, that an earlier prediction that some debris from MH370 could wash up in July 2014 on the shores of the Indonesian island of Sumatra, north of where the aircraft is calculated to have entered the ocean, was incorrect because of an error in the use of wind data. Searches are taking place on Reunion for more possible debris from MH370. But Australian Deputy Prime Minister Warren Truss has said he's doubtful many pieces are likely to turn up. "Reunion Island is a pretty small speck in a giant Indian Ocean," he told The Wall Street Journal. "Most pieces that were even floating by the time they got to this area would simply float past."
– Malaysia's prime minister confirmed today that the debris found on France's Reunion Island is in fact from Malaysia Airlines MH370, lost 17 months ago, reports CNN. "Today, 515 days since the plane disappeared, it is with a heavy heart that I must tell you that an international team of experts have conclusively confirmed that the aircraft debris found on Reunion Island is indeed from MH370," Najib Razak said in a televised statement, per Reuters. The airline announced separately that relatives of the 239 passengers and crew had been notified. The announcement came as experts assembled near Toulouse, France, to begin analysis on the Boeing 777 wing component, called a flaperon. It's the first confirmed piece of wreckage from the aircraft, and though it confirms the plane's fate, it sheds little light on what caused the crash. "We expect and hope that there would be more objects to be found which would be able to help resolve this mystery," says the airline.
SECTION 1. SHORT TITLE. This Act may be cited as the ``International Tax Competitiveness Act of 2011''. SEC. 2. TREATMENT OF FOREIGN CORPORATIONS MANAGED AND CONTROLLED IN THE UNITED STATES AS DOMESTIC CORPORATIONS. (a) In General.--Section 7701 of the Internal Revenue Code of 1986 (relating to definitions) is amended by redesignating subsection (p) as subsection (q) and by inserting after subsection (o) the following new subsection: ``(p) Certain Corporations Managed and Controlled in the United States Treated as Domestic for Income Tax.-- ``(1) In general.--Notwithstanding subsection (a)(4), in the case of a corporation described in paragraph (2) if-- ``(A) the corporation would not otherwise be treated as a domestic corporation for purposes of this title, but ``(B) the management and control of the corporation occurs, directly or indirectly, primarily within the United States, then, solely for purposes of chapter 1 (and any other provision of this title relating to chapter 1), the corporation shall be treated as a domestic corporation. ``(2) Corporation described.-- ``(A) In general.--A corporation is described in this paragraph if-- ``(i) the stock of such corporation is regularly traded on an established securities market, or ``(ii) the aggregate gross assets of such corporation (or any predecessor thereof), including assets under management for investors, whether held directly or indirectly, at any time during the taxable year or any preceding taxable year is $50,000,000 or more. ``(B) General exception.--A corporation shall not be treated as described in this paragraph if-- ``(i) such corporation was treated as a corporation described in this paragraph in a preceding taxable year, ``(ii) such corporation-- ``(I) is not regularly traded on an established securities market, and ``(II) has, and is reasonably expected to continue to have, aggregate gross assets (including assets under management for investors, whether held directly or indirectly) of less than $50,000,000, and ``(iii) the Secretary grants a waiver to such corporation under this subparagraph. ``(C) Exception from gross assets test.-- Subparagraph (A)(ii) shall not apply to a corporation which is a controlled foreign corporation (as defined in section 957) and which is a member of an affiliated group (as defined section 1504, but determined without regard to section 1504(b)(3)) the common parent of which-- ``(i) is a domestic corporation (determined without regard to this subsection), and ``(ii) has substantial assets (other than cash and cash equivalents and other than stock of foreign subsidiaries) held for use in the active conduct of a trade or business in the United States. ``(3) Management and control.-- ``(A) In general.--The Secretary shall prescribe regulations for purposes of determining cases in which the management and control of a corporation is to be treated as occurring primarily within the United States. 
``(B) Executive officers and senior management.-- Such regulations shall provide that-- ``(i) the management and control of a corporation shall be treated as occurring primarily within the United States if substantially all of the executive officers and senior management of the corporation who exercise day-to-day responsibility for making decisions involving strategic, financial, and operational policies of the corporation are located primarily within the United States, and ``(ii) individuals who are not executive officers and senior management of the corporation (including individuals who are officers or employees of other corporations in the same chain of corporations as the corporation) shall be treated as executive officers and senior management if such individuals exercise the day-to-day responsibilities of the corporation described in clause (i). ``(C) Corporations primarily holding investment assets.--Such regulations shall also provide that the management and control of a corporation shall be treated as occurring primarily within the United States if-- ``(i) the assets of such corporation (directly or indirectly) consist primarily of assets being managed on behalf of investors, and ``(ii) decisions about how to invest the assets are made in the United States.''. (b) Effective Date.--The amendments made by this section shall apply to taxable years beginning on or after the date which is 2 years after the date of the enactment of this Act. SEC. 3. CURRENT TAXATION OF ROYALTIES AND OTHER INCOME FROM INTANGIBLES RECEIVED FROM A CONTROLLED FOREIGN CORPORATION. (a) Repeal of Look-Thru Rule for Royalties Received From Controlled Foreign Corporations.--Paragraph (6) of section 954(c) of the Internal Revenue Code of 1986 is amended-- (1) by striking ``rents, and royalties'' in subparagraph (A) and inserting ``and rents'', and (2) by striking ``, rent, or royalty'' both places it appears in subparagraph (B) and inserting ``or rent''. (b) Entities Not Permitted To Be Disregarded in Determining Royalties.--Subsection (c) of section 954 of such Code is amended by adding at the end the following new paragraph: ``(7) All royalties taken into account.--For purposes of determining the foreign personal holding company income which consists of royalties, this subsection shall be applied without regard to any election to disregard any entity which would be taken into account for Federal income tax purposes but for such election.''. (c) Certain Other Income Derived From United States Intangibles Taken Into Account as Subpart F Income.--Subsection (d) of section 954 of such Code is amended by adding at the end the following new paragraph: ``(5) Special rule for certain products produced pursuant to intangibles made available by united states persons.--For purposes of this subsection, personal property shall be treated as having been purchased from a related person if any intangible property (within the meaning of section 936(h)(3)(B)) made available to a controlled foreign corporation, directly or indirectly, by a related person which is a United States person contributes, directly or indirectly, to the production of such personal property by the controlled foreign corporation. The preceding sentence shall not apply to any personal property produced directly by the controlled foreign corporation, without regard to any election to disregard any entity which would be taken into account for Federal income tax purposes but for such election.''. 
(d) Effective Date.--The amendments made by this section shall apply to taxable years of foreign corporations beginning after December 31, 2011, and to taxable years of United States shareholders within which or with which such tax years of such foreign corporations end. SEC. 4. TAXATION OF BOOT RECEIVED IN REORGANIZATIONS. (a) In General.--Paragraph (2) of section 356(a) of the Internal Revenue Code of 1986 is amended-- (1) by striking ``If an exchange'' and inserting ``Except as otherwise provided by the Secretary-- ``(A) In general.--If an exchange''; (2) by striking ``then there shall be'' and all that follows through ``February 28, 1913'' and inserting ``then the amount of other property or money shall be treated as a dividend to the extent of the earnings and profits of the corporation''; and (3) by adding at the end the following new subparagraph: ``(B) Certain reorganizations.--In the case of a reorganization described in section 368(a)(1)(D) with respect to which the requirements of subparagraphs (A) and (B) of section 354(b)(1) are met (or any other reorganization specified by the Secretary), in applying subparagraph (A)-- ``(i) the earnings and profits of each corporation which is a party to the reorganization shall be taken into account, and ``(ii) the amount which is a dividend (and source thereof) shall be determined under rules similar to the rules of paragraphs (2) and (5) of section 304(b).''. (b) Earnings and Profits.--Paragraph (7) of section 312(n) of such Code is amended by adding at the end the following: ``A similar rule shall apply to an exchange to which section 356(a)(1) applies.''. (c) Conforming Amendment.--Paragraph (1) of section 356(a) of such Code is amended by striking ``then the gain'' and inserting ``then (except as provided in paragraph (2)) the gain''. (d) Effective Date.--The amendments made by this section shall apply to exchanges after the date of the enactment of this Act.
International Tax Competitiveness Act of 2011 - Amends the Internal Revenue Code to: (1) treat foreign corporations that are managed, directly or indirectly, within the United States as domestic corporations for U.S. tax purposes; (2) make certain royalty income and income from intangibles received from a controlled foreign corporation subject to U.S. taxation; and (3) revise the tax treatment of property other than stock (i.e., boot) received in connection with a corporate reorganization to provide that such property shall be treated as a taxable dividend.
Ordered mesoporous materials are subject to considerable research effort aimed at using the materials for various applications such as catalysis, drug delivery, and adsorbents. One course of action is to tailor the materials according to specific needs by functionalization of (primarily) the huge internal surface area. Mesoporous silica materials, including the well-known MCM-41 and SBA-15 silicas, have in this respect been functionalized by several methods, including postsynthesis grafting of various molecules on the inner pore surface. For instance, poly-N-isopropylacrylamide (PNIPAAM) was grafted on the walls of the mesopores in order to have a means of active diffusion control. PNIPAAM exhibits a lower critical solution temperature (LCST) in water; below this temperature, which is usually around 36 °C, the polymer is in a good solvent and is highly water-soluble, whereas above this temperature the polymer solubility decreases, which leads to the conformational collapse of the polymer chains. In other words, at low temperature the polymer chains can be regarded as more hydrophilic and therefore adopt an elongated state (and can thus function as a diffusion hindrance), whereas above the LCST the chains appear to be more hydrophobic and collapse into a globular conformation (allowing free diffusion). Upon functionalization, the pore size, pore volume, and surface properties will be affected, and for proper characterization it is necessary to monitor both the porosity of the material and the dramatic changes in surface properties caused by the grafted material. In this work, we demonstrate that water sorption calorimetry is a suitable technique for fulfilling the characterization requirements regarding porosity as well as surface properties. Using this technique, valuable information is gained on SBA-15 functionalized with PNIPAAM and on the different materials obtained in the stepwise functionalization process.
The porous material SBA-15 is characterized by a 2D hexagonal structure (plane group p6mm) with order on the mesoscopic scale. The hexagonally ordered primary mesopores, with a very narrow pore size distribution, are highly connected through complementary pores, referred to as intrawall pores. These pores exhibit a broad pore size distribution from the micropore size (<2 nm) to the primary mesopore size (approximately 5 nm). This generates a highly complex 3D pore system, resulting in complex diffusion processes of, for example, molecules in the pores. Nitrogen sorption is a technique that is commonly used to characterize porous materials, including mesoporous silica. It is also one of the first choices for the characterization of porous composites such as mesoporous silica functionalized with organic entities, including polymers. This method provides information mainly on geometrical properties (e.g., surface area, pore volume, and pore size). However, the surface properties are taken into account only to a certain extent in the available theoretical models for the analysis. The BET C value, for instance, provides some information because it is to a certain extent influenced by the adsorbent-adsorbate interaction. However, it is not solely influenced by the surface properties but is also influenced by the microporosity, which is present in SBA-15. The performance of many applications (e.g., drug delivery) relies on the available functional groups present on the surface.
It is also strongly influenced by the interaction between the surface and molecules (e.g., drugs) that are taken up. It is thus important to monitor not only the porosity of the materials but also their surface properties. Particle diffusion in porous materials is controlled by a number of mechanisms, with surface diffusion being an important contribution to the overall process. Hence, it is important to gain information on the surface properties of the functionalized mesoporous materials in order to tailor the drug-loading mechanism according to the needs of specific applications. The traditional schematic presentations of surface-functionalized materials, with a rather homogeneous distribution throughout the whole material, have been found to be too simplistic. It is necessary to consider that, even after functionalization, the concentration of surface silanol groups can be rather high and hence can contribute strongly to the overall surface properties of the functionalized material. Water sorption calorimetry, a technique sensitive to both surface properties, such as wettability, and porosity (see below), can provide valuable information that has an impact on a number of parameters that are critical for potential applications of mesoporous silica materials. This technique has previously been implemented and used for the characterization of mesoporous silica materials such as MCM-41 and SBA-15 in order to determine water sorption isotherms, the enthalpy of hydration, and the pore size distribution. The method has proven to be valuable for the characterization of both the hydration (i.e., the wettability) and the porosity of these materials. With water sorption calorimetry, the presence (or absence) of intrawall pores in a mesoporous material, such as SBA-15, can be directly identified from the sorption isotherms. Nitrogen sorption, however, cannot directly identify the intrawall porosity from the sorption isotherms, but with advanced analysis methods, such as non-local density functional theory (NLDFT), this pore category can be identified.
In this study, we evaluate the (pore) surface properties and the porosity of pure SBA-15 and functionalized SBA-15 with water sorption calorimetry. In addition to the presence of different types of pores, the water wettability of the surface is detectable from the water sorption isotherms. We have systematically studied the materials obtained in the different steps leading to the final composite material of SBA-15 with grafted PNIPAAM. The properties of the final material are controlled by the first two steps (the location of the anchor determines where PNIPAAM is located because it functions as the starting point of polymerization), hence it is important to characterize them. First, the calcined SBA-15 material (Scheme 1a), denoted SBA-15-1, is treated with an aqueous solution of nitric acid, which is aimed at increasing the number of surface (primarily at the porous inner surface) silanol groups (Scheme 1b). Subsequently, 1-(trichlorosilyl)-2-(m-/p-(chloromethyl)phenyl)ethane is grafted to the surface (Scheme 1c), followed by the surface-initiated polymerization of N-isopropylacrylamide (Scheme 1d). The materials from each step in the modification process (i.e., SBA-15-1 to SBA-15-4) are evaluated by water sorption calorimetry. Complementary characterization is performed with small-angle X-ray diffraction (SAXD) and nitrogen sorption measurements.
In a previous study, the corresponding samples were investigated by TGA, nitrogen sorption, and argon sorption. TGA showed a steep weight loss at around 400 °C, which is characteristic of PNIPAAM.
The poly(ethylene oxide)-poly(propylene oxide)-based triblock copolymer (P104) was obtained as a gift from BASF and used as received. Tetramethyl orthosilicate (TMOS) was obtained from Sigma-Aldrich (>98%), and HCl (12 M) was obtained from Merck. N-isopropylacrylamide (>99%, Sigma-Aldrich) was recrystallized from n-hexane (97%, Sigma-Aldrich). 1-(Trichlorosilyl)-2-(m-/p-(chloromethyl)phenyl)ethane (Fluorochem Ltd.), toluene (anhydrous, Sigma-Aldrich), DMF (Sigma-Aldrich), CuCl (Sigma-Aldrich), 2,2′-bipyridine (Sigma-Aldrich), acetone, and methanol were used without further purification.
On the basis of the original SBA-15 protocol, the initial SBA-15 silica material was synthesized by applying a modified procedure as described elsewhere. Triblock copolymer Pluronic P104 (EO27PO61EO27, 2.4 g) was dissolved in 93.8 g of 1.6 M HCl. The temperature of the solution was regulated to 60 °C (using a water bath). The solution was stirred at 60 °C for 24 h. Subsequently, the flask was put in an oven for hydrothermal treatment at 80 °C for another 24 h. Afterwards, the product was filtered, dried, and calcined in air at 500 °C for 6 h. This material is denoted as SBA-15-1.
End-tethered poly-N-isopropylacrylamide (PNIPAAM) was grafted inside the mesoporous silica matrix by first covalently attaching a monomer to an anchor grafted onto the pore surface and then initiating polymerization using atom-transfer radical polymerization (ATRP). The number of possible grafting sites (i.e., silanol groups) for the anchor was increased by dispersing the original calcined SBA-15 (SBA-15-1) in an aqueous solution of HNO3. Thereafter, the anchor, 1-(trichlorosilyl)-2-(m-/p-(chloromethyl)phenyl)ethane, was grafted onto the surface of SBA-15-2. First, SBA-15-2 was impregnated with anhydrous toluene (1.0 g of silica in 100 mL of toluene) and flushed with argon for several minutes to obtain a dry, water-free sample. After the toluene treatment, the anchor was added in large excess (1 mL of anchor per 1.0 g of silica) at room temperature. After the 48 h of reaction time, the product was filtered and subsequently washed with toluene, methanol, and acetone in order to remove excess ungrafted anchor molecules from the pores. The filtered material was dried for 2 h at 110 °C in an argon stream. This material is referred to as SBA-15-3.
For the surface-initiated polymerization of N-isopropylacrylamide (NIPAAM), N,N-dimethylformamide (DMF, 150 mL) was added to SBA-15-3 in a round-bottomed flask that was then flushed with argon for several minutes. Thereafter, 2,2′-bipyridine (bipy, 2.5 g) and CuCl (0.535 g) were added, followed by N-isopropylacrylamide (15.0 g). After 30 min of flushing with an argon flow, the flask was sealed and stirred at 30 °C for another 72 h. The product was cooled to room temperature, filtered, and washed with DMF, methanol, and water. Scheme 1 presents the stepwise modification process used to incorporate (by ATRP) PNIPAAM into SBA-15.
the different materials ( sba-15 - 1 to sba-15 - 4 ) were characterized by powder diffraction measurements performed with a kratky camera equipped with a linear - position - sensitive detector containing 1024 channels of width 53.6 μm . cu kα radiation of wavelength 1.542 å was supplied by a seifert id 2200 w x - ray tube operated at 55 kv and 40 ma . the measured raw data were normalized to the first - order peak ( 10 ) in order to compare the relative peak intensities of the higher - order bragg peaks ( i.e. , ( 11 ) and ( 20 ) ) directly . the products obtained by the different steps in the modification procedure ( sba-15 - 1 to sba-15 - 4 ) were characterized by water sorption calorimetry performed at 25 c in a two - chamber sorption calorimetric cell inserted into a double - twin microcalorimeter . prior to the measurements , the samples were dried under high vacuum for two days and sample preparation ( filling of the calorimetric cell ) was performed in a glovebag in a dry nitrogen atmosphere . the mesoporous silica sample was inserted into the upper chamber , and milli - q water was injected into the lower chamber . during the sorption measurement , water evaporates from the lower chamber , diffuses through the tube connecting the two chambers , and is subsequently adsorbed by the sample in the upper chamber . the thermal power released in the two chambers is recorded simultaneously . the activity of water , aw ( the partial pressure of water in the sample divided by the partial pressure of water over pure liquid water ) , was calculated from the thermal power of vaporization of water in the lower chamber as described previously . the partial molar enthalpy of mixing of water was calculated using equation ( 1 ) : h_w^mix = h_w^vap - ( p^sorp / p^vap ) h_w^vap , where p^vap and p^sorp are the thermal powers registered in the vaporization and sorption chambers , respectively , and h_w^vap is the molar enthalpy of evaporation of pure water . the enthalpy of mixing considered here corresponds to the following reaction when liquid water is adsorbed by the silica material : h2o(l ) → h2o(ads ) . in addition to the partial molar enthalpy of mixing of water , the partial molar entropy of mixing of water was calculated from the enthalpy and the water activity data as follows ( eq 2 ) : s_w^mix = h_w^mix / t - r ln aw , where t is the absolute temperature and r is the gas constant . prior to the measurements , the unmodified sample ( sba-15 - 1 ) was outgassed at 200 c overnight , whereas the modified samples ( sba-15 - 2 , sba-15 - 3 , and sba-15 - 4 ) were outgassed at 80 c overnight . the pore size distribution was obtained by applying the barrett - joyner - halenda ( bjh ) method . for the determination of micropore volume in the presence of mesopores , the t - plot method using the de boer equation was applied . in the plot , the adsorbed amount of gas n is plotted against t , the statistical thickness of an adsorbed film . in the case of a nonmicroporous material , the t - plot function crosses the origin of the plot . for all four materials ( sba-15 - 1 to sba-15 - 4 ) , the expected and well - defined structure of sba-15 was observed ( figure 1 ) . the saxd patterns show three peaks , which can be indexed on the 2d hexagonal lattice ( plane group p6mm ) . the unit cell parameter ( table 1 ) is constant , 10.6 nm , for all samples . this demonstrates the advantage of using sba-15 as a mesoporous host as compared to mcm-41 materials , for which the structure is modified during the grafting process .
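as a side note to the two - chamber calorimetric data reduction described above , a minimal numerical sketch is given here . it is not the authors' code : the sign convention assumed for eq 1 , the conversion of the vaporization power into a partial pressure ( in practice set by the diffusion - tube calibration ) , and the example thermal powers are all assumptions made for illustration .

```python
import math

# Constants at 25 C
R = 8.314        # J mol^-1 K^-1
T = 298.15       # K
H_VAP = 44.0e3   # J mol^-1, molar enthalpy of evaporation of water (approx.)
P_SAT = 3.17e3   # Pa, saturation vapor pressure of water at 25 C (approx.)

def reduce_sorption_step(P_vap, P_sorp, p_cell):
    """Reduce one step of a two-chamber sorption-calorimetry measurement.

    P_vap  : thermal power in the vaporization (lower) chamber, W (magnitude)
    P_sorp : thermal power in the sorption (upper) chamber, W (magnitude)
    p_cell : partial pressure of water over the sample, Pa (in practice
             derived from P_vap and the diffusion-tube constant; assumed here)
    """
    a_w = p_cell / P_SAT                      # water activity
    n_dot = P_vap / H_VAP                     # molar flow of water, mol s^-1
    # Partial molar enthalpy of mixing for liquid -> adsorbed water
    # (eq 1 as reconstructed above; the sign convention is an assumption):
    H_mix = H_VAP - (P_sorp / P_vap) * H_VAP  # J mol^-1
    # Partial molar entropy of mixing (eq 2): S = H/T - R ln(a_w)
    S_mix = H_mix / T - R * math.log(a_w)
    return a_w, H_mix, S_mix, n_dot

# Example with made-up thermal powers (microwatt range is typical):
a_w, H_mix, S_mix, _ = reduce_sorption_step(P_vap=20e-6, P_sorp=25e-6, p_cell=1.0e3)
print(f"a_w = {a_w:.2f}, H_mix = {H_mix/1e3:.1f} kJ/mol, S_mix = {S_mix:.1f} J/(mol K)")
```

with these illustrative inputs the sketch returns an exothermic mixing enthalpy of about -11 kj / mol , i.e. , the same order as the low - water - content values discussed for the acid - treated material below .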
the treatment with acidic solution , which was done to increase the number of silanol groups on the surface , did not affect the periodic structure of the sample , which is in agreement with similar results by rosenholm et al . the better stability of sba-15 as compared to that of mcm-41 can be attributed to the thicker pore walls of sba-15 . saxd data of calcined sba-15 - 1 ( red ) , acid - treated sba-15 - 2 ( blue ) , sba-15 - 3 with a grafted anchor ( green ) , and sba-15 - 4 with grafted pnipaam ( black ) . sbet , the bet specific surface area , is deduced from the isotherm analysis in the relative pressure range from 0.05 to 0.2 . the water sorption isotherms and the corresponding enthalpy curve for sba-15 - 1 are shown in figure 2 , and the derived parameters are summarized in table 2 . we present the isotherms as a function of water content ( mass of water per mass of dry sample ) to be consistent with previous water sorption results and to provide the possibility for easy comparison with the enthalpy data ( figure 2b ) . in the supporting information , the graphs are presented as a function of water activity for readers used to the typical representations of gas sorption isotherms . the water sorption isotherm of sba-15 - 1 is divided into four distinct regimes , in agreement with earlier published results : ( a ) water sorption isotherm of calcined sba-15 - 1 and the corresponding ( b ) enthalpy curve obtained from the water sorption isotherm showing the four regimes of water adsorption : ( 1 ) surface adsorption , ( 2 ) filling of intrawall pores , ( 3 ) capillary condensation in the primary mesopores , and ( 4 ) postcapillary condensation uptake of water . for simplicity , the density of liquid water was assumed to be 1.0 cm / g . regime 1 - surface adsorption ; regime 2 - filling of intrawall pores ; regime 3 - capillary condensation in the primary mesopores ; and regime 4 - postcapillary condensation uptake of water . these regimes are identified in the water sorption isotherms for sba-15 - 1 ( figure 2 ) , where the starting and ending points for every regime are recognized by a change in the slope of the isotherm . this also demonstrates the strength of using water sorption calorimetry to probe the porosity of sba-15 because it directly indicates the presence of the intrawall pores that are a typical attribute of sba-15 . the enthalpy curve shown in figure 2b can be divided into these four regimes as well . a more detailed analysis of the data will be discussed separately for samples sba-15 - 1 to sba-15 - 4 further below . figure 3 shows the water sorption isotherms for sba-15 - 1 to sba-15 - 4 , and the derived parameters are summarized in table 2 . the water sorption isotherms are clearly distinct for each material , demonstrating that water sorption calorimetry is an effective technique for monitoring changes imposed on the material during a functionalization process . below are detailed descriptions of the results , isotherms , and enthalpy data obtained for each of the respective materials . water sorption isotherms of calcined sba-15 - 1 ( red ) , acid - treated sba-15 - 2 ( blue ) , sba-15 - 3 with a grafted anchor ( green ) , and sba-15 - 4 with grafted pnipaam ( black ) . regime 1 ( up to 0.037 g of water / g of silica , see table 2 ) is characterized by the adsorption of water molecules to the silica surface . at low water content , the dominant process is water physisorbing to the hydroxyl groups on the silica surface . 
subsequently , the filling of intrawall pores ( large micropores and mesopores ) with water sets in at a water content of 0.037 g / g of silica . this is the main process in regime 2 . the capillary condensation in the mesopores ( regime 3 ) starts at a water content of about 0.270 g of water / g of silica , which is comparable to earlier published values . the water activity curve levels off at the onset of capillary condensation in mesopores , reflecting the narrow pore size distribution of the mesopores . finally , after capillary condensation in mesopores , a further uptake of water by the sample is observed at high pressures ( water content of 0.665 g / g of silica ) . the enthalpy effect of water sorption in sba-15 - 1 is relatively small at low water content but negative ( figure 4 ) . the negative value reflects the exothermic effect expected as water adsorbs to highly polar silanol groups on the surfaces of the silica pores . initially , the value decreases with increasing water content before it reached a local maximum at around 0.035 g of water / g of silica just prior to the filling of the intrawall pores . thereafter , it levels off and reaches a constant value in the regime of capillary condensation in the primary mesopores ( hw > 0.27 , regime 3 ) . enthalpy curves obtained from water sorption isotherms of calcined sba-15 - 1 ( red ) , acid - treated sba-15 - 2 ( blue ) , sba-15 - 3 with a grafted anchor ( green ) , and sba-15 - 4 with grafted pnipaam ( black ) . please note that for presentation reasons at higher water contents a larger filter factor was used . the first regime in the water sorption isotherm of sba-15 - 2 ( figure 3 ) is prolonged compared to that of sba-15 - 1 , up to a water content of 0.076 g / g of silica . this observation will be discussed further below . also , the starting point of the capillary condensation in the mesopores ( regime 3 ) is shifted to a slightly higher water content , namely 0.289 g of water / g of silica . the rather unexpected observation that capillary condensation takes place at the same water activity as for sba-15 - 1 leads us to the conclusion that an increase in the number of silanol groups has mainly occurred in the intrawall pores . a possible explanation of the above - mentioned phenomenon is based on the wetting abilities of the calcined sba-15 material . earlier published results show that upon immersing silica into liquid water the formation of air pockets can be observed . in our case , when impregnating the pores of sba-15 - 1 with an acidic aqueous solution , which does not necessarily occur under equilibrium conditions , air pockets may be formed in the main mesopores , hence a fraction of the surface will not be hydroxylated . however , the vapor pressure of the acidic solution is sufficiently high that capillary condensation can occur in the smaller pores even if they are not initially filled with aqueous solution , thus facilitating the hydroxylation of the surface in the intrawall pores . the enthalpy curve for sba-15 - 2 ( figure 4 ) has dramatically changed as compared to that of sba-15 - 1 . the enthalpy at low water content is much more negative than in sba-15 - 1 , with 11 kj mol compared to 2.5 kj mol . to understand this phenomenon , one has to recall that approximately 84% of the silica surface is hydrophobic for calcined silica ( sba-15 - 1 ) with a large portion of siloxane bonds . 
the adsorption of water molecules on such a surface leads to the disruption of hydrogen bonds ( compared to those in the bulk ) , but at the same time a new similar type of bond is formed with surface hydroxyl groups by the adsorption . the total heat effect will be weakly exothermic for surfaces with a low coverage of hydroxyl groups ( and a high portion of siloxane bonds ) . hence , the increase in the number of silanol groups on the surface , as a consequence of the acid treatment ( i.e. , sba-15 - 2 ) , renders the sba-15 surface more hydrophilic . this facilitates the adsorption of water molecules and therefore results in a larger exothermic effect than that observed for sba-15 - 1 . furthermore , this also explains the shift at the end of the first regime to higher water content for sba-15 - 2 compared to that for sba-15 - 1 as mentioned previously . the material is therefore able to adsorb more water before entering the second regime of filling the intrawall pores . this consequently contributes to the shift to higher water content for the onset of capillary condensation in the main mesopores . the sorption isotherm for sba-15 - 3 ( the material with a grafted anchor ) is also partitioned into four distinguishable regimes , but the regimes are clearly shifted in terms of water content compared to that for sba-15 - 1 and sba-15 - 2 ( figure 3 ) . in particular , the second regime ( filling of the intrawall pores ) is shifted ( starts at a lower water content of 0.076 g of water / g of silica ) and decreased in the total water content taken up ( 0.141 g of water / g of silica ) . this amount corresponds to 60% of the observed uptake for sba-15 - 1 ( 0.233 g water / g of silica ) in the same regime , demonstrating the lower water uptake ability . this change is accompanied by an earlier onset of capillary condensation inside the mesopores ( regime 3 ) at a water content of 0.199 g of water / g of silica , reflecting the filling of the mesopores at a lower water content as a result of the reduction of the intrawall pore volume due to the presence of the attached anchor molecules . the capillary condensation regime ( regime 3 ) ends at a water content of 0.565 g of water / g of silica . the enthalpy curve for sba-15 - 3 ( figure 4 ) is very similar at low water content to the enthalpy curve obtained for sba-15 - 2 . first it goes through a minimum at around 0.2 g of water / g of silica and then increases constantly until it finally levels off at the point where the capillary condensation in the primary mesopores starts . the minimum value of the enthalpy with 11 kj mol is reasonable and is comparable to sba-15 - 2 , which reflects the heat released as water molecules attach to silanol groups . it should be noted that this argument holds even if the anchor is covalently linked to the surface silanol groups , thereby decreasing the number of these groups , where the oxygen of the surface silanol groups that links to the anchor molecules still has a free electron pair that is able to form bonds with water molecules . the anchor is a relatively small molecule and will not significantly shield the connecting oxygen atoms from the adsorbing water molecules . finally , in the case of sba-15 - 4 , the material with grafted pnipaam , the most striking difference is that the regime of filling the intrawall pores has nearly vanished ( regime 2 ) . 
the isotherm ( figure 3 ) exhibits only three pronounced regimes : adsorption of water molecules at the surface ( regime 1 ) , capillary condensation ( regime 3 ) , and postcapillary condensation uptake of water ( regime 4 ) . this behavior is similar to water sorption calorimetric measurements of mcm-41 , which is characterized by only three regimes because the material does not have intrawall pores . the onset of the capillary condensation in the mesopores for sba-15 - 4 is shifted even further to a smaller amount of water ( 0.130 g of water / g of silica ) and additionally to higher water activity . the increase in activity reflects increased hydrophobicity as a consequence of the presence of pnipaam . the second regime , associated with the filling of the intrawall pores , has , as just mentioned , disappeared . the intrawall pores are thus not detected after the incorporation of the polymer , and hence this pore volume is no longer accessible . this indicates the preferential location of the pnipaam polymer in the intrawall pores or at the entrance of the pores , sealing them off . the enthalpy curve ( figure 4 ) for sba-15 - 4 exhibits a minimum at low water content , 8 kj mol^-1 , and therefore the value is smaller in magnitude than for sba-15 - 2 and sba-15 - 3 . ( the groups located in the intrawall pores are no longer accessible to the water molecules . ) subsequently , the exothermic effect decreases until it reaches a constant level when the capillary condensation in the main mesopores begins . this constant level has a slightly less negative value than for the other three samples ( sba-15 - 1 , sba-15 - 2 , and sba-15 - 3 ) . generally , when the regime of capillary condensation in the main mesopores is reached , the main enthalpy contribution arises from water - water interactions , independently of the molecules on the surface of the pores . for sba-15 - 4 , pnipaam is getting hydrated as it simultaneously changes its conformation from being collapsed toward an elongated state in which it most likely stretches toward the center of the pores . the backbone of the polymer chain is built up by polyethylene , which is rather hydrophobic . consequently , a second contribution to the overall heat effect emerges , namely , the interaction between water molecules and polymer chains . this is noticed in the slightly less negative value of the enthalpy for the pnipaam - containing sample as compared to that of the others . figure 5 shows the nitrogen sorption isotherms for sba-15 - 1 to sba-15 - 4 , displaying distinct type iv isotherms exhibiting marked h2 adsorption hysteresis . the bet surface area for sba-15 - 1 is 843 m^2 g^-1 , and this area is decreased after each modification step to 766 m^2 g^-1 for sba-15 - 2 , 600 m^2 g^-1 for sba-15 - 3 , and 411 m^2 g^-1 for sba-15 - 4 . the total pore volume ( obtained from the bjh pore size distribution for the desorption branch , see table 1 ) follows the same trend . sba-15 - 1 exhibits a pore volume of 0.95 cm^3 g^-1 , and this value decreases after each step of the modification procedure . sba-15 - 4 has a total pore volume of 0.56 cm^3 g^-1 corresponding to approximately 59% of the original total pore volume of sba-15 - 1 . this value is in good agreement with the results from the water sorption measurements , which show that the pore volume of sba-15 - 4 corresponds to approximately 65% of the original volume of sba-15 - 1 .
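for readers unfamiliar with how the bet surface areas quoted above are extracted from the nitrogen isotherms , a minimal sketch of the standard bet linearization ( applied in the relative pressure range 0.05 - 0.2 , as stated earlier ) follows ; the adsorption data in the example are synthetic , not the measured isotherms .

```python
import numpy as np

N_A = 6.022e23          # mol^-1
SIGMA_N2 = 0.162e-18    # m^2, cross-sectional area of an adsorbed N2 molecule

def bet_surface_area(p_rel, n_ads):
    """Estimate the BET specific surface area (m^2/g) and the BET C constant.

    p_rel : relative pressures p/p0 (points in ~0.05-0.2 as in the text)
    n_ads : adsorbed amount at those pressures, mol N2 per g of sample
    """
    p_rel, n_ads = np.asarray(p_rel), np.asarray(n_ads)
    y = p_rel / (n_ads * (1.0 - p_rel))          # BET transform
    slope, intercept = np.polyfit(p_rel, y, 1)   # linear fit
    n_m = 1.0 / (slope + intercept)              # monolayer capacity, mol/g
    c = 1.0 + slope / intercept                  # BET C constant
    return n_m * N_A * SIGMA_N2, c

# Synthetic example (made-up adsorption data):
p = [0.05, 0.10, 0.15, 0.20]
n = [6.0e-3, 6.9e-3, 7.6e-3, 8.2e-3]             # mol N2 / g
area, c = bet_surface_area(p, n)
print(f"S_BET = {area:.0f} m^2/g, BET C = {c:.0f}")
```

with the synthetic data above the sketch yields a surface area of a few hundred m^2 g^-1 and a bet c value near 100 , the same order as the values reported for the sba-15 materials .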
the values for the pore volume are taken at the end of the capillary condensation regime , explaining the lower total pore volume compared to the results obtained by nitrogen sorption measurements . furthermore , several publications , including simulations , have shown the formation of small cavities , which are not filled with water because of hydrophobic patches of the material . nitrogen sorption isotherms of calcined sba-15 - 1 ( red ) , acid - treated sba-15 - 2 ( blue ) , sba-15 - 3 with a grafted anchor ( green ) , and sba-15 - 4 with grafted pnipaam ( black ) . table 1 shows the bet c values for sba-15 - 1 to sba-15 - 4 . the bet c values allow some conclusions to be drawn about the strength of the adsorbate - adsorbent interaction . nevertheless , this number can not be seen as a valid absolute quantification because it has been shown that the microporosity also has a substantial impact on the bet c value . however , the change in the bet c values with the progressing modification step is in agreement with the water sorption data and is consistent with the changes in surface properties . the oxidative acid treatment ( hydroxylation ) leads to an increase in the bet c value from 119 for sba-15 - 1 to 160 for sba-15 - 2 , whereas the bet c value decreases to 103 after the anchor grafting and reaches a value of 90 for sba-15 - 4 . these values are in agreement with earlier published results , even though the value for sba-15 - 1 is slightly lower compared to our previous detailed investigation using nitrogen and argon sorption . nevertheless , 119 , obtained for sba-15 - 1 , is a typical value for mesoporous silica . the water sorption isotherms shown in figure 3 were used to calculate the pore size distributions for all four materials , sba-15 - 1 to sba-15 - 4 , by using the bjh method according to an earlier published procedure and are presented in figure 6 . pore size distributions from nitrogen sorption measurements ( figure 7 ) were also determined and compared to the pore size distributions obtained from the water sorption isotherms . bjh pore size distribution obtained from water sorption isotherms of calcined sba-15 - 1 ( red ) , acid - treated sba-15 - 2 ( blue ) , sba-15 - 3 with a grafted anchor ( green ) , and sba-15 - 4 with grafted pnipaam ( black ) . nitrogen bjh pore size distribution obtained from the desorption branch of calcined sba-15 - 1 ( red ) , acid - treated sba-15 - 2 ( blue ) , sba-15 - 3 with a grafted anchor ( green ) , and sba-15 - 4 with grafted pnipaam ( black ) . figure 6 shows the pore size distribution obtained from the analysis of the water sorption data . all four pore size distributions exhibit a peak at around 6 nm , and no clear decrease in the pore size of the main mesopores is observed for sba-15 - 3 and sba-15 - 4 , in accordance with previous studies . it has to be noted that , especially for sba-15 - 4 , the contact angle of water used to calculate the pore size distribution is difficult to estimate exactly for the given system . therefore , we assumed a water contact angle of 46° , which is an average between a freshly calcined silica surface of sba-15 ( 34° ) and a pnipaam - covered surface ( approximately 58° for hydrophilic fully extended pnipaam chains ) . for sba-15 - 1 and sba-15 - 2 , we chose a water contact angle of 28° , which corresponds to a silica surface that is not fully hydrated , with isotherms that show that the material is more hydrophobic than a fully hydrated silica , as reported , for example , in earlier publications by kocherbitov et al .
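to illustrate how such an assumed contact angle enters the water - based pore size analysis , a minimal kelvin - type sketch is given below ( the contact - angle choice for sba-15 - 3 and sba-15 - 4 is discussed immediately after it ) . this is a simplification of the full bjh procedure ; the film thickness and the example water activity are assumptions , not values taken from the paper .

```python
import numpy as np

R = 8.314            # J mol^-1 K^-1
T = 298.15           # K
GAMMA = 0.072        # N m^-1, surface tension of water at 25 C
V_M = 1.8e-5         # m^3 mol^-1, molar volume of liquid water
T_FILM = 0.4e-9      # m, assumed thickness of the pre-adsorbed water film

def kelvin_pore_diameter(a_w, contact_angle_deg):
    """Pore diameter (nm) at which capillary condensation of water occurs
    at activity a_w, for a cylindrical pore and the given contact angle.
    A simplified BJH-type estimate, not the authors' exact procedure."""
    theta = np.radians(contact_angle_deg)
    r_kelvin = -2.0 * GAMMA * V_M * np.cos(theta) / (R * T * np.log(a_w))
    return 2.0 * (r_kelvin + T_FILM) * 1e9   # nm

# Illustrative activity near the condensation step (assumed, not from the paper):
for theta in (28, 34, 46):                   # contact angles used in the text
    d = kelvin_pore_diameter(0.75, theta)
    print(f"theta = {theta} deg -> pore diameter ~ {d:.1f} nm")
```

the sketch shows the design choice directly : at a fixed condensation activity , a larger assumed contact angle translates the same step in the isotherm into a smaller apparent pore diameter , which is why the choice of angle matters for the functionalized samples .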
we chose for the slightly more hydrophobic sba-15 - 3 a value of 34° as the water contact angle , which was then consequently used for the approximation of the water contact angle of sba-15 - 4 . the volume of pores that are smaller than the main mesopores clearly decreased for sba-15 - 3 , reflecting the grafting of the anchor in the intrawall pores . a further reduction is observed for sba-15 - 4 , which is in very good agreement with the fact that regime 2 ( filling of intrawall pores ) in the water sorption isotherm is absent . when comparing the pore size distribution calculated from the water sorption isotherms and from the desorption branch of nitrogen sorption , several differences are obvious . the maximum peaks from the nitrogen desorption branch are found at slightly smaller values , around 5 nm ( figure 7 ) . the value was calculated on the basis of a classical model for nitrogen sorption to obtain pore size distributions . usually , these pore size calculations underestimate the pore size by up to 25% . furthermore , it has been shown before that the pore size distributions calculated from the water sorption isotherm and the pore size distributions calculated using the nldft method ( desorption branch ) are in agreement . figure 8 shows the t plot obtained from the nitrogen sorption measurements in the region from t = 0.35 to 0.5 nm . the intercept of the fitted t plot reflects the amount of microporosity present in the material . the value of the intercept is much lower for sba-15 - 3 and sba-15 - 4 than for the other two materials . for sba-15 - 4 , the line almost passes through the origin , reflecting the fact that pnipaam is located in ( or in the vicinity of ) the micropores . the results from the t plot and the pore size distribution calculated from the water sorption isotherms agree very well , with both showing a decrease in microporosity / intrawall porosity . nitrogen t plot from 0.35 to 0.5 nm for calcined sba-15 - 1 , acid - treated sba-15 - 2 , sba-15 - 3 with a grafted anchor ( + ) , and sba-15 - 4 with grafted pnipaam ( * ) . at each step in the modification process , the intercept of the t plot is decreasing , reflecting the decrease in microporosity / intrawall porosity . we demonstrate that water sorption calorimetry contributes to a more comprehensive and complete picture of the porosity and surface properties of functionalized porous materials such as sba-15 with grafted pnipaam . the changes in porosity and surface properties for the materials obtained along the functionalization process were probed . by performing water sorption measurements , the surface properties of the pore walls in terms of hydrophobicity and hydrophilicity were assessed . the water sorption measurements were also compared with nitrogen sorption studies , and saxd was used to monitor the structural properties . for original sba-15 silica , the water sorption isotherms clearly show the presence of intrawall pores displayed as an additional regime between the surface adsorption of water ( regime 1 ) and the capillary condensation in the main mesopores ( regime 3 ) . in the case of sba-15 - 2 , it should be noticed that the increasing number of silanol groups is mainly located in the intrawall pores , while the main mesopores remained mostly unchanged . upon introducing the anchor ( sba-15 - 3 ) , capillary condensation in the mesopores appeared at a slightly higher water activity , reflecting increased hydrophobicity , as well as at a lower water content compared to that of the original material .
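the t - plot construction described above can be sketched as follows . it is a generic illustration , not the authors' analysis : the adsorption data are synthetic , and the particular thickness equation ( a common form of the de boer / harkins - jura expression ) and the stp - to - liquid conversion factor are assumptions .

```python
import numpy as np

def thickness_deboer(p_rel):
    """Statistical film thickness t (nm); a common form of the de Boer /
    Harkins-Jura equation: t [angstrom] = sqrt(13.99 / (0.034 - log10(p/p0)))."""
    return np.sqrt(13.99 / (0.034 - np.log10(p_rel))) / 10.0

def t_plot_micropore_volume(p_rel, v_ads_stp, t_min=0.35, t_max=0.50):
    """Fit the t-plot in the t range used in the text (0.35-0.5 nm) and return
    the micropore volume (cm^3 of liquid N2 per g) from the intercept.

    p_rel     : relative pressures p/p0
    v_ads_stp : adsorbed volumes, cm^3(STP) per g
    """
    p_rel, v_ads_stp = np.asarray(p_rel), np.asarray(v_ads_stp)
    t = thickness_deboer(p_rel)
    mask = (t >= t_min) & (t <= t_max)
    slope, intercept = np.polyfit(t[mask], v_ads_stp[mask], 1)
    # 1 cm^3(STP) of N2 corresponds to ~0.001547 cm^3 of liquid N2
    return intercept * 1.547e-3

# Synthetic data: purely film-like uptake (no micropores) should give an
# intercept close to zero, as observed for SBA-15-4.
p = np.array([0.10, 0.15, 0.20, 0.25, 0.30, 0.35])
v = 250.0 * thickness_deboer(p) / 0.35
print(f"micropore volume = {t_plot_micropore_volume(p, v):.3f} cm^3/g")
```

the synthetic example reproduces the qualitative point made above : a t plot that passes through the origin signals negligible micropore / intrawall volume .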
the second regime , related to the intrawall porosity , was less pronounced and is in accordance with results obtained from the nitrogen sorption measurements . it also confirms the observation that the increase in the number of silanol groups has mainly taken place in the intrawall pores because they function as the grafting point for the anchor . the location of the anchor determines the starting point of the polymerization of nipaam . finally , the resulting sba-15-pnipaam ( sba-15 - 4 ) material has the most hydrophobic character of all four materials , and the second regime , characteristic of the presence of intrawall pores , is no longer present . water sorption provides chemical information about the pore surface , which is not attainable by the classical gas sorption typically used for the characterization of mesoporous materials . furthermore , the pore size distributions from the water sorption data are in good agreement with the nitrogen sorption analysis . hence , we have demonstrated the ability of water sorption as an advantageous tool in understanding the surface properties of mesoporous silica while providing information on the porosity of the material . water sorption calorimetry could thus be utilized to provide crucial information to evaluate specific interactions between drugs and functional groups attached to a mesoporous host and hence to obtain valuable information regarding the monitoring of drug release profiles .
mesoporous silica sba-15 was modified in a three - step process to obtain a material with poly - n - isopropylacrylamide ( pnipaam ) grafted onto the inner pore surface . water sorption calorimetry was implemented to characterize the materials obtained after each step regarding the porosity and surface properties . the modification process was carried out by ( i ) increasing the number of surface silanol groups , ( ii ) grafting 1-(trichlorosilyl)-2-(m-/p-(chloromethylphenyl ) ethane , acting as an anchor for ( iii ) the polymerization of n - isopropylacrylamide . water sorption isotherms and the enthalpy of hydration are presented . pore size distributions were calculated on the basis of the water sorption isotherms by applying the bjh model . complementary measurements with nitrogen sorption and small - angle x - ray diffraction are presented . the increase in the number of surface silanol groups occurs mainly in the intrawall pores , the anchor is mainly located in the intrawall pores , and the intrawall pore volume is absent after the surface grafting of pnipaam . hence , pnipaam seals off the intrawall pores . water sorption isotherms directly detect the presence of intrawall porosity . pore size distributions can be calculated from the isotherms . furthermore , the technique provides information regarding the hydration capability ( i.e. , wettability of different chemical surfaces ) and thermodynamic information .
Alexa Scimeca Knierim and Chris Knierim of the USA perform in the pair figure skating short program in the Gangneung Ice Arena at the 2018 Winter Olympics in Gangneung, South Korea, Wednesday, Feb. 14, 2018. (AP Photo/Morry Gash) PYEONGCHANG, South Korea (AP) — Happy Valentine's Day from Pyeongchang! Wondering where the romance will be at the Olympics? AP has the lowdown. All times Eastern. FIGURE SKATING Pairs final starts at 8:30 p.m., just in time to cuddle up with your Valentine and try to imagine the kind of trust required to allow someone to throw you up in the air, over ice, so you can land on one ¼-inch-thick blade. Just sayin'. In pairs competition, the scoring is the same as singles skating, with one score for technical elements and one for performance. Watch for the elements that are unique to pairs skating like lifts, death spirals and the aforementioned throw jumps. Choreographing pairs is challenging considering that side-by-side movements should be done in sync but one partner is much larger, making the physics different. Real life couple Alexa Scimeca-Knierim and Chris Knierim are competing in their first Olympics for the U.S. ICE HOCKEY The U.S. women play the Canadians at 10:10 p.m. The Canadians have taken home the last four gold medals in this event and the U.S. are their most difficult opponent as they try to grab a fifth. Watch to see if the U.S. goaltender has the Statue of Liberty on her helmet. The IOC reportedly ordered both goalies to cover their markings, but USA Hockey tells the AP that the masks are approved. The goalie wore hers in Tuesday's rout of the Russians. The U.S. men play Slovenia at 7:10 a.m., the same time the Russians play Slovakia. CURLING If it's Wednesday, there must be curling. The round robins continue, with the U.S. women playing the UK at 7:05 p.m. Watch, or more specifically listen, as the players shout commands at each other to "clean" (lightly clear the ice of debris) or go "hard," (sweep as hard and fast as you can) to move the stone in its desired path. In Olympic curling, athletes are mic'd up so viewers can listen to their banter at home. When team tensions rise, the mics give a reality-TV feeling to the sport some call shuffleboard on ice. Don't miss the Norwegian men's crazy pants! Actually, it is impossible to miss them. SNOWBOARD CROSS The men's final starts at 11:30 p.m., with the medal run set for after midnight. After solo timed seeding rounds, racers take to the course of banked turns and jumps in groups for elimination runs. The clock is the only judge, but racers can be disqualified for intentionally blocking or making contact with another racer. The jockeying for position through tight turns makes crashes common and has given the sport a dangerous reputation. U.S. racer Nick Baumgartner competed in this event in 2010 and 2014. ALPINE SKIING After bad weather delayed several alpine events, including the already postponed women's giant slalom, the marquee men's downhill race is scheduled to start at 9:30 p.m. There are no gates, poles or technical elements in the downhill. Skiers have one goal: find the fastest route down the 1¾-mile course. Top times will be around 1 minute, 40 seconds which is about 60 mph (97 kph)! 
The men's downhill often serves up a surprise, but world champion Beat Feuz of Switzerland and 2010 silver medalist Aksel Lund Svindal of Norway are popular picks. SPEEDSKATING The Dutch will try to maintain their sweep of speed skating gold medals in Pyeongchang with the women's 1,000-meter final starting at 5:00 a.m. Look for their enthusiastic fans decked in orange, the color of the royal family. You might even spot King Willem-Alexander, who has been in the stands cheering for the team. ___ More AP Olympic coverage: https://wintergames.ap.org ||||| Wind gusts of up to 40 miles per hour have forced the evacuation of the Gangneung Olympic Park. Strong winds have forced the closure of the Olympic Park in Gangneung. The local Olympic organizing committee announced the closure at about 5 p.m., though they were taking place well before that. Officials began evacuating the Olympic Park at about 3 p.m., with public-address announcements in Korean and English urging spectators to go indoors because of the wind. As workers disassembled tents that were taking the brunt of the wind, volunteers with bullhorns walked around telling fans to go inside for their safety. Many spectators sought shelter in buildings near the Gangneung Hockey Centre. As of mid-afternoon, general admission to the park was no longer allowed. Events, including the women's slalom, were forced to reschedule. Team USA's Mikaela Shiffrin must wait to ski in her signature event. She was initially set to make her Pyeongchang debut in the giant slalom race earlier in the week, but it was postponed then due to dangerous winds. The race has been moved to Friday, after the giant slalom race rescheduled for Thursday morning (Wednesday night in the U.S.). The women's biathlon 15km individual event also had to be postponed. High-speed wind gusts make it difficult for competitors to shoot their rifles. The event has been moved to Thursday, starting ahead of the men's individual biathlon. Winds are blowing steadily around 23 mph (37 kph) with stiffer gusts rattling and shaking the giant tent anchored with metal beams in Gangneung. A media work tent was closed because of the gusting winds ahead of a women's hockey game between Japan and Korea. ||||| The Olympics finally warmed up on Wednesday, and then the wind arrived. This was the sort of wind you could hear coming before it hit you. Unless you've lived through a typhoon, it's not something you've experienced. In the coastal cluster, it was strong enough to tear off street signs and shred canvas tents. It drove loose earth before it like a sandstorm. This explains why all the open ground here is covered in bamboo matting held in place by metal spikes. "I'm from Saskatchewan," one colleague remarked. "And this has got Saskatchewan beat." The Olympics is being held in South Korea's Gangwon-do province. Weather-wise, it is one of the most inhospitable populated places on the planet. Though Pyeongchang lies on the same latitude as mid-California, its elevation and placing at the foot of a wind tunnel running down from Siberia make it remarkably frigid. This is the coldest place in the world at such a southerly position. Gangwon-do is also one of the few places on Earth to experience hurricane-force winds (moving at speeds greater than 118 kilometres per hour) without the benefit of an actual hurricane. 
It's been blowing for days, postponing, delaying and otherwise marring several ski events in the mountains. A few days of this was an anomaly. Now that the Games is nearing its second week, it's becoming a serious problem. The IOC has had its weather issues before (i.e. the blue snow trucked in to cover the slopes in sweaty Sochi). But this may be the first Olympics that cannot run owing to weather. I am a large human. And I was just barely able to walk through the gales on Wednesday. It was so strong, you could lean back into it and not fear hitting the ground. Around midday, city authorities pushed an "Emergency Alert" to all cellphones in the area warning people to go indoors to avoid the weather. (As an aside, the South Koreans are a bit loose with these alerts – all of which are, obviously, sent in Korean. This was our third in a week. You're thinking 'imminent missile impact' and they actually mean 'conditions are rather dry, so don't start fires'.) Out on the streets, heedless visitors lurched around like drunks, blown across roads or left chasing glasses knocked off their heads. The omnipresent tuques disappeared. You could not hope to hold on to them. The locals, more used to this sort of thing, had the sense to crouch in place when a bad gust kicked up and wait for it to pass. One man pulled up the hood of his coat, grabbed a light post and hung on. If it's bad for the civilians, it is much worse for the athletes. Japanese ski jumper Noriaki Kasai is here at his eighth Olympics. He's been on the job for a quarter century. So he's seen some things. "The noise of the wind at the top of the jump was incredible," Kasai told The Asahi Shimbun. "I've never experienced anything like that on the World Cup circuit. I said to myself, 'Surely, they are going to cancel this'." Surely, they did not. The showpiece event here – downhill skiing – has been badly disrupted. Wednesday was meant to be the debut of the betting favourite for the global star of these Games, America's multi-discipline phenom Mikaela Shiffrin. Instead, the women's giant slalom was postponed until later in the week and NBC was left pulling out its hair. These logistical issues are becoming nervy – the alpine disciplines are already scheduled to run every day until the end of the Games; many athletes were meant to vacate rooms up in the mountains for teammates once they had completed their events. Having sampled the quality of Olympic furniture, you really don't want to be sleeping on anything other than a bed. 
But I will guarantee you that no one is happy. Skiing's governing body, the FIS, has taken an almighty drubbing here, notably for choosing to run the women's slopestyle snowboard competition despite dangerous winds. Their excuse, in part, was that "the nature of outdoor sports also requires adapting to the elements." By this logic, more Olympic events should take place inside active volcanoes. All that adapting would make for some real excitement. Broadcasters cannot be happy, nor can sponsors or advertisers or the athletes themselves. The Olympics is scheduled to provide daily peaks, evenly spaced. What we're looking at now is a great glut of events in the second half of the Games, piled one atop the other. Viewership numbers are notoriously hard to come by, or trust once you do, but they appear to be down. A good part of that in the west is down to timing. But increasingly it may come down to unpredictability. Who's going to drag themselves out of bed at 4 in the morning for an event that might not go off? Of course, it wouldn't be a proper Olympics without thematic problems. In every other way, Pyeongchang has been clockwork. All it shows is that putting together a Winter Olympics is an exercise in Goldilocksism. It's often too cold or not cold enough, and very, very rarely just right. ||||| A spectator wearing the colors of the Dutch Royal House of Orange holds on to his hat as fierce wind blows outside the Gangneung Oval at the 2018 Winter Olympics in Gangneung, South Korea, Wednesday, Feb. 14, 2018. (AP Photo/John Locher) GANGNEUNG, South Korea (AP) — Sharp, gusting wind forced the temporary closure of the Olympic Park in Gangneung on Wednesday, the latest blow from wild weather that has affected the games for several days. Sustained winds of 23 mph (37 kph) with stronger gusts howled through the Olympic Park near the coast, knocking over tents, signs and even small refrigerators. The conditions have repeatedly forced the postponement of events in the mountains to the west, notably Alpine skiing. Local officials began evacuating Olympic Park at about 3 p.m., with public address announcements in Korean and English urging spectators to go indoors and eventually a police presence helped clear the area. Many spectators sought shelter in buildings near the Gangneung Hockey Centre. Normal activity resumed several hours later, before speedskating and hockey events were scheduled to begin. Inside the Gangneung Oval, home to long-track speedskating, Dutch oompah band Kleintje Pils had some fun with windy conditions by opening their performance with "Stormy." There has been much discussion of the cold conditions at these Olympics, but temperatures had moderated on Wednesday — only for the wind to cause more disruptions. Gusts topping 45 mph have forced the postponement of three of four scheduled Alpine ski races, the latest coming Wednesday as the women's slalom was pushed back until Friday. The women's biathlon at Alpensia Biathlon Center was also postponed, until Thursday. 
With ski racing, the wind can make it dangerous for athletes already traveling at as much as 75 mph; in technical events, such as the slalom, wind that changes direction can be considered unfair, because some skiers will get a helpful tailwind, while others will be hurt by a headwind. "All of them are anxious to race, absolutely, but they all want to race in fair conditions. That's the main thing," U.S. women's Alpine coach Paul Kristofic said after the slalom was called off. "To have unstable wind like that for one racer and not for the other, it creates not the best sporting event." Gusts of more than 15 mph pushed the women's biathlon back because that much wind makes it difficult for competitors to properly handle their rifles. Indoor sport competitions weren't affected, but conditions surrounding some of the venues were treacherous. Several venue media centers, giant tents anchored with metal beams, were closed temporarily. ___ AP Sports Writers Beth Harris and Teresa M. Walker in Gangneung, South Korea, and Howard Fendrich in Pyeongchang, South Korea, contributed. ___ More AP Olympic coverage: https://wintergames.ap.org ||||| Expensive guests: North Korean cheerleaders at the Pyeongchang Olympics South Korea has approved a plan to pay the cost of hosting North Korea's delegation to the Winter Olympics. The 2.86bn Korean won ($2.64m, £1.9m) will come from the South's unification ministry budget. A group of more than 400 North Korean supporters and performers have travelled to Pyeongchang, South Korea for the Games. Their visit was controversial with some South Koreans who questioned the North's commitment to reconciliation. North Korea's attendance at the Winter Olympics came as a surprise development after a year of increasing tensions between Pyongyang and the international community over its nuclear ambitions and repeated missile tests. The South Korean government invited the North to join the Games saying it was a chance at dialogue and rolling back tensions. But some South Koreans have staged protests warning it would merely give the North a platform for propaganda. The decision to integrate North Koreans into the South's women's ice hockey team was particularly controversial as it meant that some of the South's athletes would get less of a chance to play. The government funds will pay mainly for accommodation and food for Pyongyang's cheer squad, an orchestra sent to perform on the sidelines of the Games as well as a group of taekwondo performers and supporting personnel. According to the Reuters news agency, the majority of the visiting North Koreans stayed at luxury hotels in Seoul and near the Olympic venues in Pyeongchang. The cost of hosting 22 North Korean athletes will be paid by the International Olympic Committee, while the cost for the North's high-level political delegation visiting the South will be paid separately from the government budget, said unification ministry spokesperson Baik Tae-hyun, Reuters reports. On Wednesday, South Korea's Unification Minister Cho Myoung-gyon described the North's participation in the Games as a milestone that opened up the door to build peace on the Korean Peninsula, according to South Korea's Yonhap news agency. He added that Seoul was keeping in mind the UN sanctions against the North designed to prevent any foreign support of its weapons programme, according to Yonhap. 
South Korea also paid the expenses when more than 600 North Koreans visited the South during the 2002 Asian Games in Busan.
– Weather played havoc with the Winter Olympics again on Wednesday, with fierce winds forcing authorities to close the Olympic Park in Gangneung, near South Korea's east coast. Officials began evacuating the park around 3pm, urging visitors to go indoors, NBC Chicago reports. The wind caused the women's slalom race to be postponed until Friday. "All of them are anxious to race, absolutely, but they all want to race in fair conditions. That's the main thing," US coach Paul Kristofic said, per the AP. The women's biathlon was also postponed because gusts of more than 15mph made it difficult for competitors to use their rifles. In other Olympics happenings: A serious issue. Writing for the Globe and Mail, Cathal Kelly reports that with the winds arriving just as the Games began to warm up after days of bitter cold, the weather is becoming a serious issue that could make it impossible to complete all the events, or at least lead to "a glut of events in the second half of the Games, piled one atop the other." The science: Kelly calls Gangwon-do province, where the Olympics are taking place, "one of the most inhospitable populated places on the planet" in terms of weather. He explains that while Pyeongchang lines up with mid-California latitude-wise, elevation and winds from Siberia make it "the coldest place in the world at such a southerly position." It's also unique in that it can see hurricane-force winds without there actually being a hurricane. Paying the North's tab. South Korea has approved a plan to cover the cost of hosting North Korea's Olympic delegation, the BBC reports. The $2.64 million it cost to host around 400 supporters and performers will come from Seoul's reunification budget. White tigers. The Guardian looks at why winners are being handed stuffed animals on platters instead of medals immediately after the events. The IOC says the "cuddly toy ceremony" involving white tigers, an important animal in Korean mythology, is a tradition and winners get their real medals at a ceremony in the evening. Snow volleyball. Snow volleyball isn't an Olympic sport and isn't on the road to becoming one yet, but supporters tried to make the case for it with an energetic demonstration Wednesday, Yahoo reports. "We like to play in the mountains, in the beach, outside, inside, with children, with men and women," European Volleyball Confederation president Aleksandr Boricic said. "With snow volleyball, we can cover volleyball every day of the year." The lead character. Barry Svrluga has clearly had enough of the Korean winter weather. In his column at the Washington Post, he writes that the "lead character" in Pyeongchang was the cold, not any athlete—until it was upstaged by wind "that has set off car alarms, impaled sand into skin, toppled concession stands and forced officials to shut down an entire cluster of venues." Day 5 preview. The AP's preview of Day 5's events includes the figure skating pairs final at 8pm Eastern and the USA versus Canada women's hockey final at 10:10pm.
most galaxies with massive spheroidal components appear to harbor central black holes ( bhs ) , with masses ranging from a few @xmath4 to over @xmath5 . these bh masses are well correlated with both the luminosity and the velocity dispersion of the galaxy spheroid @xcite , implying that the formation of the central bhs is connected intimately with the development of the galaxy bulges . however , a bulge is not necessarily a prerequisite for a massive bh . on the one hand , neither the late - type spiral galaxy m33 ( gebhardt et al . 2001 ) nor the dwarf spheroidal galaxy ngc 205 ( valluri et al . 2005 ) shows dynamical evidence for a massive bh . on the other hand , both the sd spiral galaxy ngc 4395 and the dwarf spheroidal galaxy pox 52 contain active bhs with masses @xmath6 , although neither contains a classic bulge ( fillipenko & ho 2003 ; barth et al . greene & ho ( 2004 , 2007 ) find that optically active intermediate - mass bhs ( imbhs , @xmath7 ) , while rare , do exist in dwarf galaxies , but optical searches are heavily biased toward sources accreting at high eddington rates . alternate search techniques are needed to probe the full demographics of imbhs . direct dynamical detection of imbhs is currently impossible outside of the local group . however , gebhardt , rich , & ho ( 2002 , 2005 ) have found dynamical evidence for an excess dark mass of @xmath8 at the center of the globular cluster g1 in m31 ; the evidence for this imbh was questioned by @xcite , but supported by the improved data and analysis of gebhardt et al . the physical nature of the central dark object is difficult to prove : it could be either an imbh or a cluster of stellar remnants . while the putative presence of a bh in a globular cluster center may appear unrelated to galaxy bulges , the properties of g1 , including its large mass , high degree of rotational support , and multi - aged stellar populations , all suggest that g1 is actually the nucleus of a stripped dwarf galaxy @xcite . most intriguingly , the inferred bh mass for g1 is about 0.1% of the total mass , consistent with the relation seen for higher mass bhs , and consistent with predictions based on mergers of bhs @xcite or stellar mergers in dense clusters @xcite . recently , @xcite reported an x - ray detection of g1 , with a 0.210 kev luminosity of @xmath9 ergs s@xmath2 . although this may represent accretion onto a central bh , it is within the range expected for either accretion onto an imbh or for a massive x - ray binary . unfortunately , the most accurate x - ray position determined recently by @xcite does not have sufficient accuracy to determine whether the x - ray source is located within the central core of g1 , which would help distinguish between these two possibilities . the radio / x - ray ratio of g1 provides an additional test of the nature of the g1 x - ray source . as pointed out by @xcite and maccarone , fender , & tzioumis ( 2005 ) , deep radio searches may be a very effective way to detect imbhs in globular clusters and related objects , since , for a given x - ray luminosity , stellar mass bhs produce far less radio luminosity than supermassive bhs . the relation between bh mass , and x - ray and radio luminosity empirically appears to follow a `` fundamental plane , '' in which the ratio of radio to x - ray luminosity increases as the @xmath10 power of the bh mass ( merloni , heinz , & di matteo 2003 ; falcke , krding , & markoff 2004 ) . 
for an imbh mass of @xmath11 in g1 , one thus would expect a radio / x - ray ratio about 400 times higher than for a @xmath12 stellar bh . in this paper , we report a deep very large array ( vla ) integration on g1 and a radio detection that apparently confirms the presence of an imbh whose mass is consistent with that found by gebhardt et al . ( 2002 , 2005 ) . we obtained a 20-hr observation of g1 using the vla in its c configuration ( maximum baseline length of 3.5 km ) at 8.46 ghz . the observation was split into two 10-hr sessions , one each on 2006 november 24/25 and 2006 november 25/26 . each day s observation consisted of repeated cycles of 1.4 minutes observation on the local phase calibrator j0038 + 4137 and 6 minutes observation on the target source g1 . in addition , each day contained two short observations of 3c 48 ( j0137 + 3309 ) that were used to calibrate the flux density scale to that of @xcite . thus , the total integration time on g1 was 14.1 hr . we also obtained a total of 9.5 hr of observing in c configuration at 4.86 ghz on 2007 january 13/14 and 2007 january 14/15 , using a similar observing strategy , and achieving a total of 7.3 hr of integration on source . all data calibration was carried out in nrao s astronomical image processing system @xcite . absolute antenna gains were determined by the 3c 48 observations , then transferred to j0038 + 4137 , which was found to have respective flux densities of 0.52 mjy and 0.53 mjy at 8.4 and 4.9 ghz . in turn , j0038 + 4137 was used to calibrate the interferometer amplitudes and phases for the target source , g1 . erroneous data were flagged by using consistency of the gain solutions as a guide and by discarding outlying amplitude points . the vla presently is being replaced gradually by the expanded vla ( evla ) , which includes complete replacement of virtually all the electronic systems on the telescopes . since antennas are refurbished one at a time , the vla at the time of our observations consisted of 1820 `` old '' vla antennas and 6 `` new '' ( actually , refurbished ) evla antennas , having completely different electronics systems . although all antennas were cross - correlated for our observations , we found subtle errors in some of the evla data . thus , to be conservative , we discarded the data from all evla antennas except for 3 antennas that were confirmed to work very well on 2006 november 24/25 . the radio data were fourier transformed and total - intensity images were produced in each band , covering areas of 17@xmath1317 at each frequency . these images were cleaned in order to produce the final images . at 8.4 ghz , the rms noise was 6.2 @xmath14jy beam@xmath2 for a beam size of 294@xmath13272 ; at 4.9 ghz , the noise was 15.0 @xmath14jy beam@xmath2 for a beam size of 509@xmath13443 . a few radio sources with strengths of hundreds of microjansky to a few millijansky were found in the images , but we discuss only g1 in this _ letter_. at 8.4 ghz , an apparent source with a flux density of @xmath15 @xmath14jy [ corresponding to @xmath1 w hz@xmath2 for distance modulus of @xmath16 mag ( meylan et al . 2001 ) ] was found approximately one arcsecond from the g1 optical position reported by @xcite ; this radio source has j2000 coordinates of @xmath17 , @xmath18 . figure 1 shows our 8.4 ghz image of the 20 by 20 region centered on g1 ; this image includes a @xmath19 error circle of 15 radius for the x - ray position found by @xcite . 
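as a cross - check on the quoted 8.4 ghz radio power , the conversion from the measured flux density to an isotropic spectral luminosity is a one - liner . the 28 microjansky flux density is taken from the text ; the distance modulus itself is elided above , so the ~780 kpc distance to m31 used here is an assumption .

```python
import math

JY = 1e-26          # W m^-2 Hz^-1
KPC = 3.086e19      # m

def spectral_luminosity(flux_density_jy, distance_kpc):
    """Isotropic spectral luminosity L_nu = 4 pi d^2 S_nu, in W Hz^-1."""
    d = distance_kpc * KPC
    return 4.0 * math.pi * d**2 * flux_density_jy * JY

# 28 microJy at 8.4 GHz; ~780 kpc is an assumed distance to G1 / M31.
L_nu = spectral_luminosity(28e-6, 780.0)
print(f"L_8.4GHz ~ {L_nu:.1e} W/Hz")   # of order 2e15 W/Hz
```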
the radio position has an estimated error of 06 in each dimension ( not shown in the figure ) , derived by dividing the beam size by the signal - to - noise ratio . the _ a priori _ probability of finding a @xmath20 noise spike or background source so close to g1 is quite small , as indicated by the lack of any other contours of similar strength in figure 1 . if we hypothesize that there are 9 independent beams ( roughly 8 by 8) within which a source would be considered to be associated with g1 , then the probability of a @xmath20 noise point close to g1 is less than @xmath21 . similarly , the expected density of extragalactic radio sources at 28 @xmath14jy or above is 0.25 arcmin@xmath22 @xcite , or @xmath23 in a box 8 on a side , making it unlikely that we have found an unrelated background source . in order to search for possible data errors that might cause a spurious source , we have subjected our data set to additional tests , imaging data from the two days separately , and also imaging the two different intermediate frequency channels separately . the g1 radio source remains in the images made from each data subset , with approximately the same flux density and position . the overall significance is reduced by @xmath24 to approximately @xmath25 in each image made with about half the data , as expected for a real source with uncontaminated data . other @xmath26@xmath25 sources appear in the central 20 box in some subsets of half the data , consistent with noise statistics , but none is above the @xmath27 level in the full data set . thus , all tests indicate that the detection of g1 is real , and we will proceed on that basis for the remainder of this paper . at 4.9 ghz , we find no detection at the g1 position , but the much higher noise level provides us only with very loose constraints on the source spectrum ( see below ) . @xcite and @xcite have quantified an empirical relation ( or `` fundamental plane '' ) among x - ray and 5 ghz radio luminosity and bh mass ; we use the @xcite relation @xmath28 . @xcite analyzed this relation in the context of accretion flows and jets associated with massive bhs . one might expect some general relationship among these three quantities , if an x - ray - emitting accretion flow onto a massive bh leads to creation of a synchrotron - emitting radio jet , with the detailed correlation providing some insight into the nature of that flow . by comparing the empirically determined relation with expectations from theoretical models , @xcite deduced that the data for bhs emitting at only a few percent of the eddington rate are consistent with radiatively inefficient accretion flows and a synchrotron jet , but inconsistent with standard disk accretion models . @xcite scaled the fundamental - plane relation to values appropriate for an imbh in a galactic globular cluster ; we rescale their equation here to find a predicted radio flux density of @xmath29 using the previously cited x - ray luminosity and bh mass for g1 and our adopted distance modulus , this predicts a 5 ghz flux density of 77 @xmath14jy for g1 . however , taking into account the 30% uncertainty in the imbh mass , the unknown spectral index of the radio emission , and the dispersion of 0.88 in @xmath30 @xcite , the predicted 8.4 ghz flux density for g1 is in the range of tens to a few hundred microjansky . thus , our radio detection of 28 @xmath14jy at 8.4 ghz is consistent with the predictions for a @xmath11 imbh , but strongly inconsistent with a @xmath12 bh . 
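a hedged numerical sketch of this fundamental - plane scaling is given below . it uses the merloni , heinz & di matteo ( 2003 ) coefficients ( log l_r = 0.60 log l_x + 0.78 log m_bh + 7.33 , cgs units ) together with assumed inputs for g1 , since the exact numbers are elided above : an x - ray luminosity of ~2e36 erg/s , a bh mass of ~1.8e4 solar masses , and a distance of ~780 kpc . with these inputs it reproduces a predicted 5 ghz flux density of order 75 microjansky and an imbh - to - stellar - mass radio ratio of roughly 350 , consistent with the tens - to - hundreds of microjansky and the factor of about 400 quoted in the text .

```python
import math

def fp_radio_luminosity(log_Lx_cgs, mbh_msun):
    """Fundamental-plane estimate of the 5 GHz radio luminosity (erg/s),
    log L_R = 0.60 log L_X + 0.78 log M_BH + 7.33 (Merloni et al. 2003)."""
    return 10 ** (0.60 * log_Lx_cgs + 0.78 * math.log10(mbh_msun) + 7.33)

# Assumed inputs (the exact values are elided in the text):
L_R_imbh = fp_radio_luminosity(math.log10(2e36), 1.8e4)
L_R_stellar = fp_radio_luminosity(math.log10(2e36), 10.0)

d_m = 780.0 * 3.086e19                # assumed distance ~780 kpc, in meters
L_nu = L_R_imbh / 5e9 * 1e-7          # erg/s -> W, divided by 5 GHz bandwidth center
S_nu_uJy = L_nu / (4 * math.pi * d_m**2) / 1e-26 * 1e6
print(f"predicted 5 GHz flux density ~ {S_nu_uJy:.0f} microJy")
print(f"IMBH / stellar-mass radio ratio ~ {L_R_imbh / L_R_stellar:.0f}")
```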
since neutron star x - ray binaries in a variety of states have radio / x - ray ratios much lower than bh x - ray binaries @xcite , and thus another 2 orders of magnitude below the observed value , stellar - mass x - ray binaries of any type are ruled out as the possible origin of the radio emission in g1 . we can use the radio / x - ray ratio to assess other possible origins for the radio emission . here , we use the ratio @xmath31 as a fiducial marker . for g1 , @xmath32 , which is considerably lower than @xmath33 that is common to the galactic supernova remnant cas a , low - luminosity active galactic nuclei ( supposing g1 might be a stripped dwarf elliptical galaxy ) , and most ultraluminous x - ray sources ( cf . table 2 of neff , ulvestad , & campion [ 2003 ] , and references therein ) . it is of interest to compare the g1 source to various relatives of pulsars as well . for instance , g1 is within the wide range of both luminosity and radio / x - ray ratio observed for pulsar wind nebulae ( pwns ) @xcite , less luminous than the putative pwn in m81 @xcite , but considerably more luminous than standard pulsars or anomalous x - ray pulsars @xcite . the 8.4 ghz luminosity of g1 is similar to that of the magnetar sgr @xmath34 about 10 days after its outburst in late 2004 , and the lack of a 4.9 ghz detection would be consistent with the fading of sgr @xmath34 two months after the outburst @xcite . however , there is no published evidence for a gamma - ray outburst from g1 , and the relatively steady apparent x - ray flux @xcite also argues against a transient source . thus , the only stellar - mass object that might account for the radio and x - ray emission would be a pwn ; using the scaling law given by @xcite , we find a likely size of @xmath35 milliarcseconds for a pwn radio source in g1 , implying that high - sensitivity vlbi observations could distinguish between a pwn and imbh origin for the radio emission from g1 . knowledge of the radio spectrum of g1 could provide more clues to the character of the radio emission , although either a pwn or an imbh accretion flow might have a flat spectrum . in any case , our 5 ghz observation simply is not deep enough . if we choose a @xmath36 upper limit of 30.0 @xmath14jy at 4.9 ghz ( @xmath36 chosen since we know the position of the 8.4 ghz source with high accuracy ) , we derive a spectral index limit of @xmath37 ( for @xmath38 , @xmath39 error in spectral index ) , which has little power to discriminate among models . the x - ray emission from g1 may be due to bondi accretion on the imbh , either from ambient cluster gas or from stellar winds @xcite . @xcite and @xcite give approximate relations for the bondi accretion on an imbh in a globular cluster ; for an ambient density of 0.1 @xmath40 , an ambient speed of 15 km s@xmath2 for the gas particles relative to the imbh , and a radiative efficiency of 10% , the bondi accretion luminosity for the g1 imbh would be @xmath41 ergs s@xmath2 . the x - ray luminosity of @xmath42 ergs s@xmath2 measured by @xcite thus implies accretion at just under 1% of the bondi rate . given that @xmath43 , a more likely scenario is that g1 accretes at closer to 10% of the bondi rate but with a radiative efficiency under 1% . in this context , we note that the radio / x - ray ratio for g1 is @xmath44 , which is above the value of @xmath45 used to divide radio - quiet from radio - loud objects @xcite . 
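the bondi estimate quoted above can be reproduced with a short calculation . a minimal sketch follows , assuming an imbh mass of ~2 x 10^4 m_sun and a measured x - ray luminosity of ~2 x 10^36 ergs s@xmath2 ( both stand - ins for values hidden behind placeholders ) ; the ambient density of 0.1 ( read here as cm@xmath22 ) , the relative speed of 15 km s@xmath2 and the 10% radiative efficiency are taken from the text .

```python
# Rough sketch of the Bondi-accretion estimate discussed above.  The IMBH mass
# (~2e4 Msun) and the measured X-ray luminosity (~2e36 erg/s) are assumed
# stand-ins for numbers hidden behind placeholders; the ambient density
# (0.1, read as cm^-3), relative speed (15 km/s) and 10% radiative efficiency
# are taken from the text.
import math

G     = 6.674e-8      # cm^3 g^-1 s^-2
M_SUN = 1.989e33      # g
M_H   = 1.673e-24     # g
C     = 2.998e10      # cm/s

def bondi_luminosity(m_bh_msun, n_cm3, v_kms, efficiency=0.1):
    """Bondi accretion luminosity (erg/s): L = eff * Mdot * c^2, with
    Mdot = 4 pi G^2 M^2 rho / v_eff^3 (v_eff lumps sound and bulk speeds)."""
    m    = m_bh_msun * M_SUN
    rho  = n_cm3 * M_H
    v    = v_kms * 1.0e5
    mdot = 4.0 * math.pi * G**2 * m**2 * rho / v**3
    return efficiency * mdot * C**2

if __name__ == "__main__":
    l_bondi = bondi_luminosity(2.0e4, 0.1, 15.0)
    l_x     = 2.0e36                                   # assumed measured X-ray luminosity
    print(f"L_Bondi ~ {l_bondi:.1e} erg/s")            # a few times 1e38 erg/s
    print(f"L_X / L_Bondi ~ {l_x / l_bondi:.1%}")      # sub-percent of the Bondi value
```

the resulting bondi luminosity is a few times 10^38 ergs s@xmath2 , so the assumed x - ray luminosity corresponds to accretion and radiation at the sub - percent level of the bondi value , consistent with the statement above .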
( note that the radio - loudness criterion traditionally is given in terms of the 2 - 10 kev luminosity , whereas @xmath46 would correspond to the value computed for the 0.2 - 10 kev luminosity given by @xcite . ) g1 therefore should be considered radio - loud , as inferred for bhs in galactic nuclei that radiate well below their eddington luminosities ( ho 2002 ) . if the globular clusters in our own galaxy also have central bhs that are 0.1% of their total masses , and they accrete and radiate in the same way as g1 , many would have expected 5 ghz radio flux densities in the 20 - 100 @xmath14jy range ; flux densities often would be in the 1 - 10 @xmath14jy range even for less efficient accretion and radiation @xcite . as @xcite summarize , there are few radio images of globular clusters that go deep enough to test this possibility . @xcite points out that the square kilometer array ( ska ) will be able to test for the existence of imbhs in many globular clusters . however , based on our results for g1 , we suggest that it is not necessary to wait for the ska ; the current vla can reach the hypothesized flux densities with some effort . the evla @xcite , scheduled to be on line in about 2010 , will have 40 times the bandwidth and 6.3 times the sensitivity of the current vla in the frequency range near 8 ghz . this will enable the evla to reach the 1 @xmath14jy noise level in approximately 12 hours of integration , thus probing the range of radio emission predicted by @xcite for many globular clusters . we have detected faint radio emission from the object g1 , a globular cluster or stripped dwarf elliptical galaxy in m31 . the emission has an 8.4 ghz power of @xmath1 w hz@xmath2 . assuming that the radio source is associated with the x - ray source in g1 @xcite , the radio / x - ray ratio is consistent with the value expected for an accreting @xmath0 bh . thus , the radio detection lends support to the presence of such an imbh within g1 . the other possible explanation , a pulsar wind nebula , could be tested by making very high - sensitivity vlbi observations of g1 . the national radio astronomy observatory is a facility of the national science foundation operated under cooperative agreement by associated universities , inc . we thank the staff of the vla who made these observations possible . support for jeg was provided by nasa through hubble fellowship grant hf-01196 , and lch acknowledges support from nasa grant hst - go-09767.02 . both were awarded by the space telescope science institute , which is operated by the association of universities for research in astronomy , inc . , for nasa , under contract nas 5 - 26555 . we also thank dale frail for useful discussions about pulsar wind nebulae , and an anonymous referee for useful suggestions .
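returning to the evla integration - time estimate quoted above , a simple radiometer - scaling check can be made . the sketch below assumes thermal noise only ( rms proportional to 1 / sqrt ( bandwidth x time ) ) and uses the 6.2 @xmath14jy rms reached in 14.1 hr with the present data together with the quoted factor of 6.3 in point - source sensitivity ; the exact evla parameters are not spelled out here , so this is an order - of - magnitude check only .

```python
# Consistency check on the integration-time estimate quoted above, assuming
# pure radiometer-equation scaling (rms noise ~ 1/sqrt(bandwidth * time)).
# The 6.2 microJy/beam rms in 14.1 hr comes from the 8.4 GHz data described
# earlier; the factor-6.3 sensitivity gain is the EVLA figure quoted above.

def hours_to_reach(target_rms, ref_rms, ref_hours, sensitivity_gain=1.0):
    """Integration time (hr) needed to reach target_rms, given a reference
    observation and an overall point-source sensitivity improvement factor."""
    t_same_system = ref_hours * (ref_rms / target_rms) ** 2
    return t_same_system / sensitivity_gain ** 2

if __name__ == "__main__":
    t_vla  = hours_to_reach(1.0e-6, 6.2e-6, 14.1)          # current VLA
    t_evla = hours_to_reach(1.0e-6, 6.2e-6, 14.1, 6.3)     # with the quoted EVLA gain
    print(f"current VLA: ~{t_vla:5.0f} hr to reach 1 microJy")   # several hundred hr
    print(f"EVLA       : ~{t_evla:5.1f} hr")                     # ~14 hr, cf. ~12 hr quoted
```

this gives of order 14 hr , close to the approximately 12 hours quoted above ; the small difference presumably reflects evla details not given here .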
we have used the very large array ( vla ) to search for radio emission from the globular cluster g1 ( mayall - ii ) in m31 . g1 has been reported by gebhardt et al . to contain an intermediate - mass black hole ( imbh ) with a mass of @xmath0 . radio emission was detected within an arcsecond of the cluster center with an 8.4 ghz power of @xmath1 w hz@xmath2 . the radio / x - ray ratio of g1 is a few hundred times higher than that expected for a high - mass x - ray binary in the cluster center , but is consistent with the expected value for accretion onto an imbh with the reported mass . a pulsar wind nebula is also a possible candidate for the radio and x - ray emission from g1 ; future high - sensitivity vlbi observations might distinguish between this possibility and an imbh . if the radio source is an imbh , and similar accretion and outflow processes occur for hypothesized @xmath3 black holes in milky way globular clusters , they are within reach of the current vla and should be detectable easily by the expanded vla when it comes on line in 2010 .
periodontitis is an inflammatory condition initiated by a chronic microbial load affecting the tooth - supporting tissues . despite studies focusing on the microbial biofilm as the causative factor for periodontitis , there is a shift toward osteoimmunology , which explains the interaction between host immune responses and cytokines in the development of periodontal diseases . bone remodeling , being a multifaceted process , requires several cross - talk mechanisms , and pathological activation of one system is bound to affect the other . during inflammation , the balance between formation and resorption is skewed toward osteoclastic resorption , leading to the release of bone breakdown products into local tissues and also into the systemic circulation . collagen cross - links are generally reliable markers of bone resorption because they are stable in plasma and urine . as they result from posttranslational modification of collagen molecules , they cannot be reclaimed for collagen synthesis and , therefore , are highly specific to bone resorption . in addition , calcium kinetic studies of bone formation and resorption have also shown that the cross - links of collagen correlate highly with resorption . cross - linked n - terminal telopeptide ( ntx ) of type i collagen is an amino - terminal telopeptide which is exceptional because of its alpha-2(i ) n - telopeptide . it is released as a stable end product of bone resorption and is not a part of the soft tissues around the teeth . skin and other soft tissues have histidine cross - links and do not have pyridinoline cross - links . studies assessing the role of ntx in gingival crevicular fluid ( gcf ) , serum , and peri - implant crevicular fluid ( pcf ) as a diagnostic marker of periodontal disease activity have reported conflicting results so far . one study examined the levels of ntx in gcf and pcf and speculated that increased ntx levels may predict extensive bone destruction earlier than calprotectin levels . the levels of ntx along with other bone markers in chronic periodontitis patients were evaluated , and it was stated that ntx may be useful as a resorption marker in periodontal bone destruction . another study estimated the gcf ntx levels in health and in different periodontal diseases , and it was concluded that fluctuating ntx levels might point to abnormal bone turnover in periodontitis . however , some studies have failed to show ntx as a bone - specific marker of bone metabolism in cyclosporine - a - induced gingival overgrowth . to date , changes in plasma ntx levels in healthy , gingivitis , and chronic periodontitis ( cgp ) subjects , and after nonsurgical periodontal therapy of cgp , have not been reported . our hypothesis states that alterations in plasma levels of ntx may be one of the systemic manifestations of periodontal bone resorption . thus , the aim of this study was to investigate whether periodontally healthy , gingivitis , and cgp subjects exhibit different plasma levels of ntx , to determine the levels of ntx in cgp subjects after nonsurgical periodontal therapy , and to correlate the levels with the clinical parameters . the subjects enrolled in this study were fully informed about the protocol , and written informed consent was obtained according to the helsinki declaration . subjects were matched to eliminate age ( 25 - 50 years ) and sex as confounding factors ( table 1 ) . exclusion criteria included a history of periodontal therapy , use of antibiotics or anti - inflammatory drugs within the previous 3 months , pregnancy or lactation , systemic diseases , and smoking .
patients on bisphosphonates , alendronates , hormone replacement therapy , vitamin d , and calcium supplements were also excluded . the demographic distribution of the study groups is shown in table 1 . patients were categorized into three groups based on probing pocket depth ( ppd ) , clinical attachment loss ( cal ) , gingival index ( gi ) scores ( loe and silness 1986 ) , and radiographic evidence of bone loss ( assuming the physiologic distance from the cemento - enamel junction to the alveolar crest to be 2 mm ) . after full - mouth periodontal probing , bone loss was recorded dichotomously using intraoral periapical radiographs ( paralleling angle technique ) to differentiate patients with cgp from patients of the other groups , without any delineation in the extent of alveolar bone loss . group 1 : 10 subjects with clinically healthy periodontium ( gi = 0 , ppd ≤ 3 mm , and cal = 0 ) . group 2 : 10 subjects with gingival inflammation ( gi > 1 , ppd ≤ 3 mm , and cal = 0 ) . group 3 : 10 subjects who showed clinical signs of gingival inflammation ( gi > 1 ) , ppd ≥ 5 mm , and radiographic bone loss with cal ≥ 3 mm . group 4 ( after treatment ) : subjects of group 3 treated with scaling and root planing ( srp ) ( plasma samples taken 6 - 8 weeks after treatment ) . the skin over the antecubital fossa was disinfected , and 2 ml of blood was collected by venipuncture using a 20-gauge needle with a 2 ml syringe . a vacutainer previously coated with 3.2% sodium citrate was used , and the sample was centrifuged at 1000 rpm for 10 min ( 1000 g , 4 c ) to separate the plasma component . the plasma was extracted within 30 min and stored at -70 c until the time of the assay procedure . a competitive - inhibition assay is often used to measure small analytes because it requires the binding of one antibody rather than two as used in standard enzyme - linked immunosorbent assay ( elisa ) formats . a monoclonal antibody ( moab ) is absorbed onto the plate ; when the sample is added , the moab captures the free analyte out of the sample . in the next step , a known amount of analyte labeled with biotin is added . the labeled analyte will also attempt to bind to the moab absorbed onto the plate ; however , the labeled analyte is inhibited from binding to the moab by the presence of previously bound analyte from the sample . this means that the labeled analyte will not be bound by the moab on the plate if the moab has already bound unlabeled analyte from the sample . the amount of unlabeled analyte in the sample is therefore inversely proportional to the signal generated by the labeled analyte . ntx was quantitated using a commercially available competitive - inhibition elisa ( ostex , osteomark , seattle , wa , usa ) and expressed as nanomole bone collagen equivalents ( nm bce ) . the sensitivity range of the elisa kit for ntx is 3.2 - 40 nm bce . all data were analyzed using statistical software ( spss version 10.5 , spss , chicago , il , usa ) . normality was assessed with the shapiro - wilk test ; if the data were normal , parametric tests were carried out , otherwise nonparametric tests were used to compare the groups .
analysis of variance was carried out to find out whether all four groups differed significantly ( table 2 ) . further , pairwise comparisons using scheffe 's test were carried out to explore which pair or pairs differed with respect to the gingival parameters ( table 3 ) . the kruskal - wallis test was carried out to find the difference among the four groups , and the mann - whitney test was then used to compare pairwise differences . the wilcoxon signed - rank test was used to compare the difference between the groups with respect to cal , and a paired t - test was done to compare ntx levels in group iii and group iv . the spearman rho correlation coefficient test was used to find any association between the clinical parameters and the plasma ntx concentration . ntx mean difference values ( before and after treatment ) were considered to calculate the power of the study . a sample of 20 achieved 87% power to detect the mean paired difference of 1.1 with an estimated standard deviation of 0.9 and with a significance level of 0.05 . a two - sided wilcoxon test was carried out , assuming that the actual distribution was normal .
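before turning to the results , the analysis flow just described can be sketched in code . this is only an illustration : the per - subject ntx and clinical values are not reported in the text , so the arrays below are synthetic placeholders , and bonferroni - corrected pairwise comparisons ( which the tables below also use ) stand in for scheffe 's post hoc test , which is not available in scipy .

```python
# Sketch of the statistical pipeline described above (NumPy/SciPy).  The data
# arrays are synthetic placeholders -- the per-subject NTx values are not
# reported in the text -- and Bonferroni-corrected pairwise tests stand in
# for the Scheffe post hoc comparisons.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
ntx = {                                   # nM BCE, 10 subjects per group (illustrative)
    "I_healthy":     rng.normal(16.2, 0.9, 10),
    "II_gingivitis": rng.normal(16.7, 0.9, 10),
    "III_periodont": rng.normal(18.8, 0.9, 10),
}
ntx["IV_post_srp"] = ntx["III_periodont"] - rng.normal(1.1, 0.9, 10)   # paired with III

# 1) Normality (Shapiro-Wilk) decides between parametric and non-parametric routes.
normal = all(stats.shapiro(v)[1] > 0.05 for v in ntx.values())

groups = [ntx["I_healthy"], ntx["II_gingivitis"], ntx["III_periodont"]]
if normal:
    print("ANOVA:", stats.f_oneway(*groups))            # groups I-III
    post_hoc = stats.ttest_ind                           # pairwise t-tests
else:
    print("Kruskal-Wallis:", stats.kruskal(*groups))
    post_hoc = stats.mannwhitneyu                        # pairwise Mann-Whitney U

pairs = [(0, 1), (0, 2), (1, 2)]
for i, j in pairs:
    p = post_hoc(groups[i], groups[j]).pvalue * len(pairs)   # Bonferroni correction
    print(f"pairwise group {i + 1} vs {j + 1}: corrected p = {min(p, 1.0):.4f}")

# 2) Paired comparison of group III before vs after scaling and root planing
#    (a Wilcoxon signed-rank test, stats.wilcoxon, would be used for CAL the same way).
print("paired t (III vs IV):", stats.ttest_rel(ntx["III_periodont"], ntx["IV_post_srp"]))

# 3) Correlation of NTx with a clinical parameter (e.g. CAL), Spearman's rho.
cal = rng.normal(4.0, 1.0, 10)                           # placeholder CAL values, group III
print("Spearman:", stats.spearmanr(ntx["III_periodont"], cal))
```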
the mean ntx concentration was highest in group iii ( 18.77 nm bce ) and lowest in group iv ( 16.02 nm bce ) . the values of group i and group ii fell between the highest and the lowest values ( 16.23 nm bce and 16.70 nm bce , respectively ) . statistical significance was seen between the mean ntx levels of groups i , ii , and iii , but not between groups i , ii , and iv [ tables 4 - 6 ] ; however , the difference between groups iii and iv was statistically significant [ table 7 ] . there was also a positive correlation between the clinical parameters and the mean ntx levels [ tables 8 and 9 ] . ( tables 4 - 9 : results of anova comparing the mean ntx levels in plasma between groups i , ii , and iii and between groups i , ii , and iv ; multiple comparisons using the bonferroni test for group i , ii , and iii ntx levels in plasma ; paired t - test comparing ntx levels in plasma in groups iii and iv ; wilcoxon signed - rank test comparing cal in groups iii and iv ; spearman rank correlation comparing plasma ntx with gi , ppd , and cal . ) the traditional method of assessing probing depth , gingival bleeding , and plaque score , along with clinical attachment level and x - rays , has been extensively used by clinicians . however , these measurements give information neither on disease activity nor on the susceptibility of patients toward disease progression . hence , this drawback has directed clinicians to explore various markers in biofluids such as plasma , saliva , urine , and gcf . a few of the extensively studied markers are the interleukins , tumor necrosis factor , matrix metalloproteinases , etc . ; however , these markers are general inflammatory signals and are not specific to bone destruction . furthermore , it has even been speculated that higher levels of these markers in the systemic circulation could be a result of spillover from the local tissues . biochemical monitoring of bone metabolism depends upon measurement of enzymes and proteins released during bone formation and of degradation products produced during bone resorption . however , the diagnosis of active phases of periodontal disease and the identification of patients at risk for disease present a major challenge to clinicians . in this regard , various biochemical markers are available that allow a specific and sensitive assessment of bone formation and bone resorption . however , these bone markers exhibit substantial short - term and long - term fluctuations related to diet , time of day , the phase of the menstrual cycle , season of the year , exercise , and anything else that alters bone remodeling . these biological factors , in addition to assay imprecision , produce significant intra- and inter - individual variability in markers . the most important biologic factors are diurnal and day - to - day variability in bone - forming and bone - resorbing activities . bone turnover marker levels are highest in the early morning and lowest in the afternoon and evening .
levels of urinary markers can vary 20 - 30% from the highest to the lowest value of the day . plasma markers change to a smaller degree , except for carboxy - terminal telopeptides ( ctx ) , which can vary by more than 60% during the day . the plasma markers of bone formation appear to vary less from day to day . in our exploratory study , plasma ntx was assessed using a competitive - inhibition elisa , and all the samples showed the presence of ntx . the reference values for ntx in men and women are 14.8 and 12.6 nm bce , respectively ( as per the osteomark ntx serum kit ) , and in our study the highest plasma ntx levels were seen in the periodontitis group and the lowest in the after - treatment group . one possible explanation for this finding is the active phase of bone resorption in the periodontitis group , leading to the release of collagen breakdown fragments into the circulation , and the further reduction in the resorptive phase after srp . unlike the periodontitis and after - treatment groups , the healthy and gingivitis groups showed no significant difference , most likely due to the absence of alveolar bone destruction or to levels below the sensitivity range of the assay kit , which would not necessarily contribute to the systemic circulation . similarly , a study by wilson et al . detected ntx in serum samples , and it was stated that serum represents the combined bone turnover activity of both trabecular and cortical bone , the turnover rate of trabecular bone being greater than that of cortical bone . studies in the dental literature have highlighted the use of ntx in other samples such as gcf , pcf , and saliva , and the results are conflicting . one group studied the levels of ntx in gcf and pcf and speculated that increased ntx levels may predict extensive bone destruction earlier than calprotectin . a study conducted by gursoy et al . failed to detect salivary ntx in periodontitis subjects , concluding that the high thermal denaturation of ntx at physiologic temperature , in comparison with ictp and ctx , explained the inability of ntx to be detected in the saliva samples . moreover , a study by isik et al . failed to detect ntx in gcf during orthodontic intrusive movement , speculating that the remodeling associated with orthodontic tooth movement may not generate ntx , or that ntx may remain in the tissues without being released into the circulation . our previous study evaluated ntx levels in gcf ; ntx was detected only in the periodontitis and after - treatment groups , and the inability to detect ntx in the healthy and gingivitis groups was attributed to the absence of a resorptive process at the sampled site . it has been proposed that patients with periodontitis may have elevated circulating levels of some inflammatory markers . monocytes , macrophages , and other cells respond to the dental plaque microorganisms by secreting a number of chemokines and inflammatory cytokines . the elevation in cytokine expression by cells within the gingival connective tissue in chronic periodontitis lesions can theoretically spill over into the circulation , where it can induce or perpetuate systemic effects . furthermore , the plasma provides information about the inflammatory stimulus and/or response generated in the circulation toward the periodontal pathogens .
although several authors have highlighted the use of these markers in systemic conditions , no studies have hypothesized their causative role on a systemic level per se . further research aimed at the process of resorption , and at clarifying whether collagen breakdown products are mere products of resorption or are able to perpetuate a disease process , is required . plasma ntx levels can differ substantially with respect to periodontal health , disease , and after treatment of chronic periodontitis subjects . ntx levels in plasma can be positively correlated with the clinical parameters . the use of biochemical markers in medical practice is controversial , as interpreting the values for individual patients is complex , related to the intricacies inherent in bone metabolism . lack of standardization has led to unacceptable levels of variation .
background : to determine plasma concentrations of the bone resorption marker cross - linked n - terminal telopeptide ( ntx ) of type i collagen in periodontal health , disease , and after nonsurgical periodontal therapy in a chronic periodontitis group . in addition , to determine the association between plasma ntx levels and the different clinical parameters . materials and methods : thirty subjects were divided on the basis of their periodontal status and were categorized as group i : healthy , group ii : gingivitis , and group iii : chronic periodontitis . group iii subjects were treated with scaling and root planing ; blood samples analyzed 6 - 8 weeks later constituted group iv . ntx levels in plasma were analyzed by competitive enzyme - linked immunosorbent assay . all data were analyzed using statistical software ( spss ) ( α = 0.05 ) . results : all the samples tested positive for the presence of ntx . the mean ntx concentration was highest in group iii ( 18.77 nanomole bone collagen equivalents [ nm bce ] ) and lowest in group iv ( 16.02 nm bce ) . the values of group i and group ii fell between the highest and the lowest values ( 16.23 nm bce and 16.70 nm bce , respectively ) . the difference in mean ntx levels between group iii and group iv was statistically significant . ntx levels in all the groups positively correlated with the clinical parameters . conclusion : within the limits of this study , it may be suggested that plasma ntx levels may provide data distinguishing periodontally healthy and diseased sites , as well as diseased sites after nonsurgical therapy .
SECTION 1. SHORT TITLE. This Act may be cited as the ``Federal Consent Decree Fairness Act''. SEC. 2. FINDINGS. Congress finds that the United States Supreme Court, in its unanimous decision in Frew v. Hawkins, 540 U.S. 431 (2004), found the following: (1) Consent decrees may ``lead to federal court oversight of state programs for long periods of time even absent an ongoing violation of federal law,''. 540 U.S. 431, 441. (2) ``If not limited to reasonable and necessary implementations of federal law, remedies outlined in consent decrees involving state officeholders may improperly deprive future officials of their designated legislative and executive powers.''. 540 U.S. 431, 441. (3) ``The federal court must exercise its equitable powers to ensure that when the objects of the decree have been attained, responsibility for discharging the State's obligations is returned promptly to the State and its officials.''. 540 U.S. 431, 442. (4) ``As public servants, the officials of the State must be presumed to have a high degree of competence in deciding how best to discharge their governmental responsibilities.''. 540 U.S. 431, 442. (5) ``A State, in the ordinary course, depends upon successor officials, both appointed and elected, to bring new insights and solutions to problems of allocating revenues and resources. The basic obligations of federal law may remain the same, but the precise manner of their discharge may not.''. 540 U.S. 431, 442. SEC. 3. LIMITATION ON CONSENT DECREES. (a) In General.--Chapter 111 of title 28, United States Code, is amended by adding at the end the following: ``Sec. 1660. Consent decrees ``(a) Definition.--In this section, the term `consent decree'-- ``(1) means any order imposing injunctive or other prospective relief against a State or local government, or a State or local official against whom suit is brought, that is entered by a court of the United States and is based in whole or part upon the consent or acquiescence of the parties; and ``(2) does not include-- ``(A) any private settlement agreement; ``(B) any order arising from an action filed against a government official that is unrelated to his or her official duties; ``(C) any order entered by a court of the United States to implement a plan to end segregation of students or faculty on the basis of race, color, or national origin in elementary schools, secondary schools, or institutions of higher education; and ``(D) any order entered in any action in which one State is an adverse party to another State. 
``(b) Limitation on Duration.-- ``(1) In general.--A State or local government, or a State or local official who is a party to a consent decree (or the successor to that individual) may file a motion under this section with the court that entered the consent decree to modify or terminate the consent decree upon the earliest of-- ``(A) 4 years after the consent decree is originally entered by a court of the United States, regardless of whether the consent decree has been modified or reentered during that period; ``(B) in the case of a civil action in which a State or an elected State official is a party, the date of expiration of the term of office of the highest elected State official who is a party to the consent decree; ``(C) in the case of a civil action in which a local government or elected local government official is a party, the date of expiration of the term of office of the highest elected local government official who is a party to the consent decree; ``(D) in the case of a civil action in which the consent to the consent decree was authorized by an appointed State or local official, the date of expiration of the term of office of the elected official who appointed that State or local official, or the highest elected official in that State or local government; or ``(E) the date otherwise provided by law. ``(2) Burden of proof.-- ``(A) In general.--With respect to any motion filed under paragraph (1), the burden of proof shall be on the party who originally filed the civil action to demonstrate that the denial of the motion to modify or terminate the consent decree or any part of the consent decree is necessary to prevent the violation of a requirement of Federal law that-- ``(i) was actionable by such party; and ``(ii) was addressed in the consent decree. ``(B) Failure to meet burden of proof.--If a party fails to meet the burden of proof described in subparagraph (A), the court shall terminate the consent decree. ``(C) Satisfaction of burden of proof.--If a party meets the burden of proof described in subparagraph (A), the court shall ensure that any remaining provisions of the consent decree represent the least restrictive means by which to prevent such a violation. ``(3) Ruling on motion.-- ``(A) In general.--The court shall rule expeditiously on a motion filed under this subsection. ``(B) Scheduling order.--Not later than 30 days after the filing of a motion under this subsection, the court shall enter a scheduling order that-- ``(i) limits the time of the parties to-- ``(I) file motions; and ``(II) complete any required discovery; and ``(ii) sets the date or dates of any hearings determined necessary. ``(C) Stay of injunctive or prospective relief.--In addition to any other orders authorized by law, the court may stay the injunctive or prospective relief set forth in the consent decree in an action under this subsection if a party opposing the motion to modify or terminate the consent decree seeks any continuance or delay that prevents the court from entering a final ruling on the motion within 180 days after the date on which the motion is filed. ``(c) Other Federal Court Remedies.--The provisions of this section shall not be interpreted to prohibit a Federal court from entering a new order for injunctive or prospective relief to the extent that it is otherwise authorized by Federal law. ``(d) Available State Court Remedies.--The provisions of this section shall not prohibit the parties to a consent decree from seeking appropriate relief under State law.''. 
(b) Conforming Amendment.--The table of sections for chapter 111 of title 28, United States Code, is amended by adding at the end the following: ``1660. Consent decrees.''. SEC. 4. GENERAL PRINCIPLES. (a) No Effect on Other Laws Relating to Modifying or Vacating Consent Decrees.--Nothing in the amendments made by section 3 shall be construed to preempt or modify any other provision of law providing for the modification or vacating of a consent decree. (b) Further Proceedings Not Required.--Nothing in the amendments made by section 3 shall be construed to affect or require further judicial proceedings relating to prior adjudications of liability or class certifications. SEC. 5. DEFINITION. In this Act, the term ``consent decree'' has the meaning given that term in section 1660(a) of title 28, United States Code, as added by section 3 of this Act. SEC. 6. EFFECTIVE DATE. This Act and the amendments made by this Act shall take effect on the date of the enactment of this Act and apply to any consent decree regardless of-- (1) the date on which the order of the consent decree is entered; or (2) whether any relief has been obtained under the consent decree before such date of enactment.
Federal Consent Decree Fairness Act - Amends the federal judicial code to authorize any state or local government or related official (or successor) to file a motion to modify or terminate a federal consent decree upon the earliest of: (1) four years after the consent decree is originally entered; (2) in the case of a civil action in which a state or state official, or a local government or local government official, is a party, the expiration date of the term of office of the highest state or local government official who is a party to the consent decree; or (3) a date otherwise provided by law. Places the burden of proof with respect to such motions on the party originally filing the action to demonstrate that the denial of the motion to modify or terminate a consent decree (or any part of it) is necessary to prevent the violation of a federal requirement that was: (1) actionable by such party, and (2) addressed in the consent decree. Requires a court, within 30 days after the filing of a motion, to enter a scheduling order that: (1) limits the time of the parties to file motions and complete discovery, and (2) sets the date or dates of any necessary hearings. Authorizes a court to stay the injunctive or prospective relief set forth in the consent decree if a party opposing the motion to modify or terminate it seeks any continuance or delay that prevents the court from entering a final ruling on the motion within 180 days after its filing.
one motivation for studying parity - violating ( pv ) electron scattering from nuclei is to use it as a tool to extract information on the weak neutral current ( wnc ) and thereby to test the validity of the standard model in the low - energy regime @xcite ( see also @xcite ) . this possibility lies in the fact that the pv asymmetry acquires a very simple , model - independent expression in terms of basic coupling constants , with nuclear structure effects cancelling out , if certain conditions are met . the assumptions required to arrive to such a simple expression are : ( 1 ) that the focus is placed on elastic scattering from spin - zero nuclear targets , for then only coulomb - type monopole form factors enter ; ( 2 ) that strangeness content in the wnc can be neglected , for then only isoscalar and isovector matrix elements occur , with no third type of contribution ; and ( 3 ) that the nuclear ground states have isospin zero , permitting only a single coulomb monopole matrix element to occur , namely , the isoscalar one . in fact , while restriction ( 1 ) can be met with a wide range of even - even nuclei , restrictions ( 2 ) and ( 3 ) are not completely attainable . strangeness content in the wnc has been extensively studied in previous work ( see , for instance , the review article @xcite ) and is not the primary focus of the present study . instead , the present work is aimed at the last issue of potential isospin mixing in nuclear ground states and how this affects the pv asymmetry , albeit taking into account present uncertainties in the strangeness content of the nucleon . isospin mixing constituted part of the goal of a previous study by donnelly , dubach and sick ( dds ) @xcite where the effect of isospin mixing in nuclear ground states on pv electron scattering was first studied . the approach taken there was to use a simple two - level model in which the @xmath5 ground state and an excited @xmath6 state , both with angular momentum / parity 0@xmath7 , were admixed with a mixing parameter @xmath8 . the deviation in the pv asymmetry due to isospin mixing was found to be proportional to this mixing parameter @xmath8 and to the ratio between the inelastic isovector and the elastic isoscalar coulomb monopole form factors . in dds a shell - model approach with only one active shell in a spherical harmonic oscillator basis was used to obtain the spin - isospin reduced coulomb matrix elements ( isoscalar and isovector ) for @xmath1c and @xmath3si . coulomb distortion effects on the electron waves were also neglected in that earlier work . isospin dependence in pv electron scattering is crucial in attempting to determine the precision up to which the standard model constants can be deduced and to what extent strangeness effects in the wnc can be studied . at the same time , this dependence can be exploited to provide information about the spatial distribution of neutrons in the nuclear ground state . indeed , this idea was proposed as part of the original study by dds , namely that a measurement of the electron - scattering pv - asymmetry can provide a direct measurement of the fourier transform of the neutron density . in a subsequent study @xcite the idea in dds was extended to allow for coulomb distortion of the electron wave function , something also done in the present work . 
neutron densities are less well known than charge densities ( studied using parity - conserving electron scattering ) as most of this information comes from hadronic probes where the reaction mechanism involved is more difficult to interpret than is the case for semi - leptonic electroweak processes . thus , an electroweak measurement of the neutron density can serve to calibrate the measurements made with hadronic probes . in fact , the idea proposed by dds is now being realized in the parity radius experiment ( prex ) at jefferson laboratory which has the goal of measuring the neutron radius of @xmath9pb using pv elastic electron scattering . such a measurement has implications for astrophysics ( including the structure of neutron stars ) , for atomic parity non - conservation studies , for the structure of neutron - rich nuclei , and for determinations of neutron skin thicknesses of nuclei . in this work we extend the previous study undertaken by dds to examine the effects on the pv asymmetry induced by isospin admixtures in the nuclear ground states of n = z nuclei occurring when coulomb interactions between nucleons are taken into account and pairing correlations are included . the nuclear structure is described within a self - consistent axially - symmetric hartree - fock formalism with skyrme forces ; in fact , three different forces are used to explore the sensitivity of the pv asymmetry deviation to the nuclear dynamics . several additional extensions beyond the work of dds are made , namely , inclusion of the spin - orbit correction to the coulomb monopole operators , use of modern electroweak form factors for protons and neutrons , inclusion of strangeness contributions in the nucleon form factors , and , as noted above , incorporation of the effects caused by the coulomb distortion of the electron wave functions . the present study includes four n = z isotopes , namely , @xmath1c , @xmath2 mg , @xmath3si , and @xmath4s . with respect to nuclear structure issues , the main difference between the treatment in dds and here is that in dds coulomb effects in the nuclear ground state are considered perturbatively to find the admixture of giant isovector monopole strength ( @xmath10 , @xmath6 ) , as explained at length in @xcite . here , within the self - consistent mean - field approach with two - body density - dependent effective interactions , the collective isospin mixing effect of the coulomb force is included non - perturbatively in the isospin - non - conserving hartree - fock ( hf ) mean field , along with other collective effects such as pairing and deformation . as a result , the hf+bcs ground state is made up of quasiparticles with rather complex admixtures of harmonic oscillator wave functions in many different major shells . in discussing the results one sees that there are limited kinematical regions where the pv asymmetry is measurable at the levels required , _ i.e. , _ as characterized by the figure - of - merit ( fom ) which typically peaks at momentum transfers around 0.5 @xmath11 . for the region extending up to a little beyond 1 @xmath11 the effects of isospin mixing on the pv asymmetry will be seen later to be characteristically at the level of a few percent for light nuclei , up to more than 10% for the heavier nuclei considered . these are certainly within the scope of existing and future measurements of pv electron scattering . for example , the happex - he measurements at jlab @xcite have already been performed at low @xmath12 , _ i.e. 
, _ for kinematics similar to those of relevance in the present work . these yielded pv asymmetries at the 4% level ( which are statistically , not systematically limited at present ) , while the future prex measurements on pb @xcite aim for 3% precision in the pv asymmetry . in fact , in recent work ( see @xcite and references therein ) it has been noted that interpretations of results from experiments like happex and g0 are reaching the point where they are limited by effects from isospin mixing . for heavier nuclei , as discussed in the present work , the effects from isospin mixing will be seen to be even larger than for the nucleon or few - body nuclei . at higher momentum transfers the effects of isospin mixing are still larger ; however , the fom is smaller in this region and , as a consequence , our attention in the present work is focused only on the lower momentum transfer region . as far as other contributions are concerned , after discussing our basic results , and thereby setting the scale of the isospin mixing , we return briefly to show the size of potential strangeness effects and of the expected influence of meson - exchange currents at these lower momentum transfers . other effects such as parity admixtures in the nuclear ground state , dispersion corrections , or radiative corrections are expected to provide negligible modifications to the asymmetry @xcite and are not considered in this work . the outline of the paper is the following . in sect . ii we start by introducing the formalism necessary to describe pv in polarized elastic electron scattering from spin - zero n = z nuclei . the correction to the asymmetry due to isospin mixing is isolated and analyzed in terms of the various ingredients involved , such as the roles played by the nucleon form factors and by potential strangeness contributions . there we also discuss the effects expected from electron distortions . in sect . iii we present our results on the pv asymmetry for the four nuclei under study . first we explore the influence on the results of the various ingredients in the formalism ( different pairing gaps , different skyrme interactions , the spin - orbit correction ) and go on to compare the present work with past studies of isospin - mixing effects in pv electron scattering . following this , the principal results of the study are presented for the four chosen nuclei and finally , in sect . iv , we summarize the main conclusions of our work . polarized single - arm electron scattering from unpolarized nuclei can be used to study parity violation , since both electromagnetic ( em ) and weak interactions contribute to the process via @xmath13 and @xmath14 exchange , respectively . the pv asymmetry is given by @xcite @xmath15 where @xmath16 is the cross section for electrons longitudinally polarized parallel ( antiparallel ) to their momentum . keeping only the square of the photon - exchange amplitude for the spin - averaged em cross section and using the interference between the @xmath13 and @xmath17 amplitudes in the cross section difference , in plane wave born approximation ( pwba ) the asymmetry @xmath18 in the standard model can be written as @xmath19 where @xmath20 and @xmath21 are the fermi and fine - structure coupling constants , respectively , and @xmath22 is the four - momentum transfer in the scattering process . @xmath23 is the pv response and @xmath24 is the em form factor , both containing the dependence of the asymmetry on the nuclear structure . 
in the case of elastic electron scattering between @xmath25 states , only the coulomb - type monopole operators can induce the transition , @xmath26 where @xmath27 , @xmath28 is a kinematical factor that cancels out in the ratio , and @xmath29 is the em(wnc ) monopole coulomb form factor . then one has @xmath30 if we now consider only @xmath0 nuclei and assume that they are isospin eigenstates with isospin zero in their ground states , then only isoscalar matrix elements contribute and the weak and em form factors become proportional : @xmath31 accordingly , the pv asymmetry does not depend on the form factors : @xmath32 a_a \beta ^{(0)}_v \cong 3.22 \times 10^{-6 } |q^2|\ ; \text{(in fm}^{-2}\text { ) } \ , , \label{referencevalue}\ ] ] where , within the standard model , @xmath33 , @xmath34 being the weak mixing angle . the actual pv asymmetry deviates from this constant value by a correction @xmath35 , where @xmath36 or equivalently ( for @xmath37=0@xmath7 nuclei ) @xmath38 and it accounts , in particular , for the effects of nuclear isospin mixing and strangeness content in the pv asymmetry . the ratio between the wnc and em form factors can be written as @xmath39 where the subscript in parenthesis indicates isoscalar ( @xmath5 ) and isovector ( @xmath6 ) parts . the coulomb isoscalar ( @xmath5 ) and isovector ( @xmath6 ) multipole operators can be written in terms of the contributions of order 0 and 1 in @xmath40 ( subscripts ( 0 ) and ( 1 ) ) @xcite : @xmath41 the nucleon form factors @xmath42 and @xmath43 are included in the definitions of the two contributions as follows : @xmath44 \theta^{m_j}_j(q\textbf{x})\ , , \label{operatormso}\end{aligned}\ ] ] where the kinematical factors @xmath45 and @xmath46 contain the energy @xmath47 and momentum @xmath12 transferred to the nucleus by the electron : @xmath48 and @xmath49 are the electric ( magnetic ) nucleon form factors discussed later . the basic multipole operators @xmath50 and @xmath51 are respectively the standard zeroth - order coulomb operator and the spin - orbit first - order correction , and are defined as : @xmath52 \ , .\end{aligned}\ ] ] by using the relations between form factors in isospin space ( @xmath53 ) and form factors in charge space ( proton : p and neutron : n ) , @xmath54 one can write the ratio in eq . ( [ fft01 ] ) as @xmath55 where the operators with tildes , @xmath56 , have the same structure as in eqs . ( [ operatorm ] , [ operatorm2 ] ) , but contain the wnc nucleon form factors @xmath57 , @xmath58 to be defined in the next subsection . in the present study we are interested in the @xmath59 ( @xmath60=0 ) multipole . the charge operator @xmath61 matrix elements in a spherical harmonic oscillator ( s.h.o . ) basis are given by @xmath62 for @xmath59 the spin - orbit operator @xmath63 is just proportional to @xmath64 and one has in the same s.h.o . basis : @xmath65 \:\bigg\langle n'l'j ' \bigg| \frac{j_1(qr)}{qr } \bigg| nlj \bigg\rangle \ : \delta_{l'l } \:\delta_{j'j } = \\\nonumber & & -\frac{1}{\sqrt{4\pi } } \:\frac{1}{2 } \left[j(j+1)-l(l+1)-\frac{3}{4}\right ] \ : \int \frac{j_1(qr)}{qr } \ : r_{nl}(r)r_{n'l}(r ) \ : r^2 \ : dr \ , .\end{aligned}\ ] ] in both expressions @xmath66 is a spherical bessel function of order @xmath67 . the matrix elements of the charge monopole operators ( em and wnc ) between two s.h.o . 
wave functions are therefore @xmath68 \langle n'lj | \theta^0_0(q\textbf{x } ) | nlj \rangle \,\end{aligned}\ ] ] @xmath69 \langle n'lj | \theta^0_0(q\textbf{x } ) | nlj \rangle \ , , \end{aligned}\ ] ] where the proton ( @xmath70 or @xmath71 ) and neutron ( @xmath72 or @xmath73 ) form factors @xmath20 and @xmath74 will be defined in the next subsection . note that one should not confuse @xmath75 as a superscript on @xmath76 and @xmath77 meaning neutron with @xmath75 as a subscript meaning the quantum number labeling the single - particle basis states the context will also make the distinction clear . the expressions above are used once the s.h.o . expansion of the hf single - particle wave functions is performed ( see below ) . the total monopole matrix elements also contain a center - of - mass correction factor to account for the fact that the hf single - particle wave functions are referred to the center of the mean - field potential , not to the nuclear center of mass , as they should be to avoid the spurious movement of the nucleus as a whole . for a s.h.o . potential , this correction factor takes the form @xmath78 , with @xmath12 the momentum transfer ( in @xmath11 ) , @xmath79 the oscillator parameter ( in fm ) and @xmath80 the total number of nucleons . this is usually employed even when not using a s.h.o . basis ; the correction cancels out when one constructs the ratios of form factors appearing in the asymmetry @xmath18 . finally , let us conclude this section with a brief discussion of potential meson - exchange current effects on the pv asymmetry ( see also @xcite ) . such contributions enter in very different ways in cross sections and in the asymmetry . in the former they typically provide effects at the few percent level , whereas for the latter the situation is not so obvious . first , were there to be no significant isospin mixing and strangeness contributions , _ i.e. , _ with a purely isoscalar , no - strangeness ground state , then the mec effects would cancel in the ratio that gives the pv asymmetry , and , in fact , the result would be completely unaffected by any aspect of hadronic structure . this is an old result @xcite from the earliest studies of elastic pv electron scattering from nuclei . secondly , with isospin mixing the situation is more complicated , since now there are both isoscalar and isovector electromagnetic current matrix elements involved and mec effects are different for the two . however , two arguments suggest that these contributions are not significant for the present study at low momentum transfers : the multipoles involved are c0 ( isoscalar and isovector ) where mec effects , being dominantly transverse , are suppressed anyway and the effect is one of modifying the predicted isovector effect from occurring as a purely one - body matrix element to one where both one - body and typically few percent two - body contributions occur . this would modify the isospin - mixing pv results by a multiplicative ( not additive ) factor of typically a few percent , which is not significant for the present study . thirdly , with strangeness also present , now in the nucleon itself and also in mec contributions , additional effects may occur . a study was done @xcite for the important case of @xmath81he and , based on the results presented there , it may be that at higher values of @xmath12 ( beyond roughly 3 @xmath11 ) such effects could become significant . 
on the other hand , in the cited study such effects were shown to be negligible for the range of momentum transfers of interest in the present work and accordingly here we also neglect any contributions of this type . the electric nucleon form factor can be expressed either in terms of protons and neutrons or in terms of isoscalar and isovector parts , @xmath82 where @xmath83 is 1 for protons and @xmath84 for neutrons . in analogy , in the standard model , the wnc nucleon form factor is given by @xmath85 where the strangeness contribution to the form factor is isoscalar . this can also be written in terms of proton and neutron form factors , @xmath86 where @xmath87 a similar result applies to the nucleon magnetic form factors by simply substituting g@xmath88 by g@xmath89 in the previous expressions . the nucleon form factors @xmath42 and @xmath43 , from which @xmath90 and @xmath91 are obtained , have been computed using the parametrization by hhler @xcite . within the standard model we have , @xmath92 the strangeness form factor @xmath93 has been parametrized according to @xmath94 ( see , for instance , @xcite ) with @xmath95 the parameters @xmath96 and @xmath97 are constrained by pv electron scattering measurements on hydrogen , deuterium and helium-4 ; the values chosen as representative are discussed below and in sect . iii . the strangeness term in the wnc form factors in eq . ( [ wncff ] ) can be considered separately , giving rise to a decomposition of the pv asymmetry deviation , @xmath98 , where the isospin mixing term @xmath99 is proportional to @xmath100 and @xmath101 is proportional to @xmath102 . the isospin mixing piece is computed considering @xmath103 and @xmath104 in the wnc form factors , whereas the strangeness term is computed considering @xmath105 , @xmath106 , @xmath107 and @xmath108 in the wnc form factors . we evaluate the strangeness contribution neglecting the small differences between neutron and proton densities , assuming g@xmath109=0 , and neglecting the small spin - orbit contribution so only @xmath110 enters and not @xmath111 . this gives @xmath112 once we have introduced the nucleon form factors , it is interesting to notice that if one neglects the spin - orbit correction to the coulomb operator , eq . ( [ operatormso ] ) , one obtains the more intuitive expression @xmath113 with @xmath114 where @xmath115 are the ground - state radial densities for protons and neutrons with @xmath116 as above . if one further neglects the electric neutron form factor @xmath117 and the strangeness form factors @xmath110 , one arrives at the simple expression @xmath118 or equivalently @xmath119 which are similar to those used in dds . as a general consideration , one faces a compromise between optimizing the pv signal ( _ i.e. , _ deviations in the asymmetry ) and the figure - of - merit , discussed below . since the asymmetry generally increases with @xmath120 , while the cross section decreases , this leads to the search for a well defined region of optimal kinematics . to measure properly effects in the pv asymmetry due to isospin mixing , the deviation should be greater than or of the order of a few percent of the reference value @xmath121 given in eq . ( [ referencevalue ] ) . this condition will be fulfilled in general for some intervals of @xmath12 , but not for others . at the same time the relative error of the asymmetry should be kept as small as possible . 
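as a concrete point of reference for the size of the signal , the reference value of eq . ( [ referencevalue ] ) can be evaluated numerically . the sketch below assumes that the product of the electron axial - vector coupling and the isoscalar vector coupling reduces in magnitude to 4 sin^2 ( theta_w ) , which is what reproduces the quoted constant of 3.22 x 10^-6 ; with standard values of g_f , alpha , and sin^2 ( theta_w ) one recovers that coefficient to within about a percent , and at q = 0.5 fm@xmath2 ( near the typical fom maximum mentioned earlier ) the asymmetry is just below 10^-6 .

```python
# Numerical check of the reference asymmetry quoted in eq. (referencevalue).
# Assumption (not spelled out explicitly in the visible text): the product of
# the electron axial coupling and the isoscalar vector coupling reduces in
# magnitude to 4*sin^2(theta_W), which reproduces the 3.22e-6 constant.
import math

G_F      = 1.1664e-5      # Fermi constant, GeV^-2
ALPHA    = 1.0 / 137.036  # fine-structure constant
SIN2_THW = 0.231          # weak mixing angle
HBARC    = 0.19733        # GeV*fm, converts fm^-1 to GeV

def a_ref(q_fm):
    """|A_PV| for a T=0, J=0 target (no isospin mixing, no strangeness),
    as a function of the momentum transfer q in fm^-1."""
    q2_gev2 = (q_fm * HBARC) ** 2
    return (G_F * q2_gev2 / (4.0 * math.sqrt(2.0) * math.pi * ALPHA)) * 4.0 * SIN2_THW

if __name__ == "__main__":
    const = a_ref(1.0)                       # coefficient of |q^2| in fm^-2
    print(f"coefficient ~ {const:.2e} per fm^-2 (text: 3.22e-6)")
    print(f"A_ref(q = 0.5 fm^-1) ~ {a_ref(0.5):.2e}")   # ~8e-7
```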
this relative error depends on technical characteristics of the experimental setting ( detector solid angle , beam luminosity and running time ) gathered in @xmath122 , as well as on intrinsic properties of the target , the projectile and their kinematics gathered in the so called figure - of - merit ( fom ) @xmath123 ( see @xcite ) : @xmath124 clearly , in a study of the present type where the focus is placed on theoretical aspects of the hadronic and nuclear many - body physics in the problem , it is not appropriate to go into further detail on specific experimental conditions , _ i.e. , _ on the specifics of @xmath122 ( see , however , @xcite for some general discussions of experimental aspects of pv electron scattering ) . presenting the fom , as we do in the following section , provides a sufficient measure of the `` doability '' of pv elastic scattering asymmetry measurements whose goal is to explore isospin mixing in the nuclear ground state . in particular , the fom is proportional to the asymmetry itself squared and to the differential cross section of the scattering ( essentially the parity - conserving cross section ) : @xmath125 and therefore shows a diffraction pattern , but with the same negative - slope trend as the differential cross section when the momentum transfer increases , which favors low momentum transfer experiments . to generate the ground - state wave function in the hf+bcs approximation we use a skyrme type density - dependent nucleon - nucleon interaction ( sly4 force @xcite ) , allowing for axially - symmetric deformation . first , the hf equations are solved to generate the self - consistent hf mean field , and pairing correlations are taken into account at each iteration solving the bcs equations to generate the single quasiparticle wave functions . the deformed hf+bcs calculation gives a set of single - particle levels , occupation numbers , and wave functions @xmath126 . the latter are expanded in a deformed harmonic oscillator basis @xmath127 which we also expand in a s.h.o . basis @xmath128 to facilitate the comparison with previous work based on the shell - model approach , @xmath129 where a truncation of 11 major shells @xmath130 has been used in both the deformed ( @xmath131 ) and the spherical ( @xmath132 ) expansions . each hf single - particle state @xmath133 has a parity and an angular momentum projection @xmath134 which are shared by any of the basis wave functions taking part in its expansion ( @xmath135 , @xmath136 ) . the hf+bcs outputs give for every state @xmath133 the energy , the occupation probability @xmath137 and the coefficients @xmath138 . the coefficients in the s.h.o . basis expansion @xmath139 are obtained as @xmath140 with @xmath141 and @xmath142 where the @xmath143 are associated legendre polynomials and the functions @xmath144 are defined in terms of hermite and generalized laguerre polynomials that contain , respectively , the cylindrical @xmath145 and @xmath146 dependence of the eigenstates @xmath147 of the deformed harmonic oscillator potential @xcite . the contribution of each pair of s.h.o . states to the total charge monopole form factors ( em or wnc ) can be analyzed in terms of two pieces as follows : @xmath148 @xmath149 applicable to protons and neutrons ( @xmath150 ) separately . the first factor in the above equations is the matrix element of the charge monopole operator ( em or wnc ) between two s.h.o . states as defined in eq . 
( [ ff_basis_em ] ) or ( [ ff_basis_wnc ] ) , which is momentum - dependent and whose @xmath151-dependence is only due to the spin - orbit correction ( eq . ( [ m_spinorbit ] ) ) . this matrix element vanishes unless both basis functions have the same @xmath152 quantum numbers . the nuclear ground - state structure information is contained in the second factor , which is the spherical part of the hf+bcs ground - state density matrix ( for protons or neutrons ) in the s.h.o . basis : @xmath153 it contains the coefficients of pairs of spherical harmonic oscillator components differing at most in the radial quantum number @xmath75 , as well as the occupation probabilities @xmath154 of each of the hf single - particle states @xmath133 . these contributions are added up so that the calculated density refers to the whole hf+bcs ground state of the nucleus . the spherical density matrix elements contain information on the nuclear ground - state structure of the target isotopes . we note that the total hf+bcs density matrix may also contain a non - spherical part , with matrix elements @xmath155 , which for deformed nuclei is nonzero and which does not contribute to the charge monopole form factors . this ensures that only the spherically symmetric ( @xmath59 ) part of the neutron and proton densities contributes and that there is no need to make any further angular momentum projection in the deformed nuclei @xcite . the analysis of the quantities defined above is especially appropriate when comparing results of our hf+bcs calculations with former shell - model results , since each quasiparticle state can always be expressed as a combination of s.h.o . basis states with different radial quantum numbers @xmath75 . this fact allows for nonzero spherical density matrix elements @xmath156 with different @xmath75 and @xmath157 , off - diagonal in the spherical harmonic oscillator basis . when the coulomb interaction is included in the generation of the self - consistent mean field , the spherical density matrix in eq . ( [ density ] ) is slightly different for protons and neutrons . this amounts to saying that the self - consistent ground - state mean field is not a pure @xmath5 isospin state , but contains isospin admixtures , mainly @xmath6 @xcite , which contribute to the pv @xmath158 amplitude . in other words , from the proton and neutron densities @xmath159 , @xmath150 , one may construct an isoscalar ground - state density @xmath160 and an isovector ground - state density @xmath161 that contribute respectively to the isoscalar and isovector parts of the pv amplitude . another aspect that must be addressed is the coulomb distortion of the incoming and outgoing electron wave functions . while we are dealing with relatively light nuclei for which coulomb distortion effects are generally small ( at least where the fom is significant ) , it is important to perform calculations which fully take into account coulomb distortion to obtain realistic predictions that can serve as a reference for future experiments . it is also interesting to compare these results with the pwba calculations described in previous sections . to this end , we follow the standard treatment of coulomb distortion for elastic pv electron scattering within a partial wave formalism . we solve the dirac equation for massless electrons in the coulomb potential generated by the nucleus , closely following the work of refs . @xcite to obtain the distorted wave ( dw ) results . in fig . 
[ xs ] we compare our unpolarized elastic electron scattering cross sections with experimental data for @xmath1c @xcite , for @xmath2 mg @xcite , for @xmath3si @xcite and for @xmath4s @xcite . for ease of presentation , the cross sections have been transported to a common energy of 400 mev . our calculations of cross sections have been obtained from the hf+bcs ground - state densities as described in the previous section . in obtaining these results , no experimental information on , for instance , charge radii , has been employed or fit . in spite of this fact , very good agreement is found up to a scattering angle of 60@xmath162 , _ i.e. , _ a transfer momentum up to around 2 @xmath11 , which is the region of interest according to the results on fom to be shown below . to consider the distorted - wave correction in the pv asymmetries , eq . ( [ asymmetry_sigmas ] ) has been used , together with eq . ( [ asymmetryratio ] ) for the asymmetry deviations . the effects of coulomb distortion and nuclear isospin mixing can be analyzed separately by considering different ingredients in the calculation of the asymmetries in eq . ( [ asymmetryratio ] ) . in fig . [ asym_gamma_dw_pw ] we show the typical distortion effects for 1 gev electrons in the case of @xmath3si . the left - hand panel shows isospin - mixing ( superscript i ) asymmetries in pwba and when distortions of the electron wave function are taken into account , where the smoothing of the pwba divergences appears as the most obvious effect of distortion . one must be aware of the fact that without isospin effects and distortion of the electrons , the wnc and em form factors in eq . ( [ hadronicratio ] ) are exactly proportional , and thus the asymmetry follows a very simple @xmath120 dependence ( shown by the long dashed line in the left - hand panel of the figure ) , even at the zeros of the form factor corresponding to the diffraction minima . when isospin effects are considered , the diffraction minima for the wnc and em form factors occur at slightly different values of @xmath12 and thus the asymmetry shows extreme variations at the approximate locations of these diffraction minima , as can be seen in the left - hand panel of fig . [ asym_gamma_dw_pw ] ( dashed curve ) . the main effect of the distortion of the electron waves for these light systems is to fill in or to smooth out the diffraction minima , thus severely reducing the amplitude changes of the asymmetry at these diffraction minima , as can also be seen ( solid curves ) in fig . [ asym_gamma_dw_pw ] . furthermore , the distorting potential introduces asymmetry deviations even when no isospin mixing effects are present in the calculation of the nuclear structure . in summary , when nuclear isospin mixing is not considered , _ i.e. , _ fixing the same proton and neutron densities , the pwba result shows simply a @xmath120 behaviour , while the results that take into account the distortion of the electron wave function deviate smoothly from this with a dip where the diffraction minimum occurs . when isospin mixing is included , the asymmetry is increased with respect to the non - isospin - mixing case as @xmath12 approaches the value at the diffraction minima , and it is reduced after the diffraction minima . this behavior is seen both in the pwba and in the dw calculations . in the same fig . [ asym_gamma_dw_pw ] , the plots in the right - hand panel show asymmetry deviations @xmath163 due to the nuclear isospin mixing obtained replacing @xmath164 in eq . 
( [ asymmetryratio ] ) by the ratios @xmath165 and @xmath166 . in addition we show the pure effect of distortion ( ignoring nuclear isospin mixing ) obtained from the ratio @xmath167 and the combined effect of distortion and isospin mixing yielded by using the ratio @xmath168 . as one can see in the figure , the effect of distortion is to smooth out the divergences appearing at the position of diffraction minima in pwba , but anywhere else one sees that the deviations in the asymmetry introduced by isospin mixing effects are very similar both in pwba and when the distortions are fully taken into account . analogous results are obtained for @xmath1c , @xmath2 mg and @xmath4s . one should notice that , since we are plotting the absolute value of the asymmetry deviation on a logarithmic scale , zeros of this function appear as downward divergences which remain in the dw calculation . in the next section we focus the discussion on the results corresponding to the asymmetry deviation . this is defined as the ratio whose numerator is the difference between the asymmetry with isospin mixing and without , and whose denominator is simply the asymmetry with no isospin mixing , where all asymmetries are from full distorted - wave calculations . that is , one has @xmath169 this deviation may be directly compared with experiment . it can be seen that this generalized @xmath170 yields results that are similar to the pwba prediction , except for the regions deep in the minima of the cross section . taking into account that @xmath171 can be computed accurately in a model - independent way ( tables can be produced for each nucleus and different electron energies without difficulty ) , and the only inputs needed are the ( experimental ) charge distributions , which are well known for the nuclei studied here , we conclude that the distortion effects will not prevent the determination of isospin mixing effects , provided that the data are compared with a full dw calculation . indeed , the organization of the asymmetry data according to @xmath170 introduced in the equation above may be enough to permit a direct comparison with the simple pwba predictions for @xmath35 . it is worth pointing out that the filling of the minima of the cross section is important for the determination of the asymmetry and of the deviations from the non - isospin - mixing prediction , as it is precisely the region near the minima where the isospin mixing effects on the asymmetry are also more evident . the filling of the minima of the form factor and cross section induced by coulomb distortion will help to make these details of the asymmetry more easily measurable , as the cross section will be much larger in the region of the minima than predicted with the pwba calculation . four even - even , n = z isotopes have been studied in this work , namely @xmath1c , @xmath2 mg , @xmath3si and @xmath4s , which for each element are the most abundant isotopes ( 79@xmath172 abundance for @xmath2 mg , and higher than 90@xmath172 for the other isotopes ) . furthermore , all targets have reasonably large excitation energies of the first ( 2@xmath7 ) excited state , and all but sulfur are suitable in elemental form for high - current electron scattering experiments , as required for a measurement of the pv asymmetry . in our calculations the ground - state deformation is obtained self - consistently . 
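before looking at the individual nuclei , a minimal sketch of the bookkeeping behind the generalized deviation just defined may be useful : the deviation is simply ( a_mix - a_nomix ) / a_nomix formed from tabulated distorted - wave asymmetries , and plotting its absolute value on a logarithmic scale is what turns sign changes into the downward divergences mentioned above . the arrays below are dummy placeholders , not output of the actual partial - wave calculation .

```python
import numpy as np

def asymmetry_deviation(a_mix, a_nomix):
    """gamma = (a_mix - a_nomix) / a_nomix on a common momentum-transfer grid."""
    a_mix = np.asarray(a_mix, dtype=float)
    a_nomix = np.asarray(a_nomix, dtype=float)
    return (a_mix - a_nomix) / a_nomix

# dummy stand-ins for tabulated dw asymmetries (illustration only)
q = np.linspace(0.1, 1.5, 8)                    # fm^-1
a_nomix = -3.0e-6 * q**2                        # smooth reference behaviour
a_mix = a_nomix * (1.0 + 0.05 * (q**2 - 0.8))   # small isospin-mixing-like modification

gamma = asymmetry_deviation(a_mix, a_nomix)
for qi, gi in zip(q, gamma):
    # abs(gamma) is the quantity one would plot on the logarithmic scale;
    # a zero crossing of gamma then shows up as a downward spike.
    print(f"q = {qi:4.2f} fm^-1   gamma = {gi:+.4f}   |gamma| = {abs(gi):.4f}")
```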
for each nucleus the calculations have been performed using the optimum values of the axially - symmetric harmonic oscillator parameters @xmath79 and @xmath12 @xcite , which define the average length and the axes ratio of the oscillator well . the self - consistent intrinsic proton quadrupole moments are shown in table [ tabledef ] for three different sets of pairing gaps : @xmath173 = 0 mev ( absence of pairing ) ; @xmath173 = 1 mev ; and @xmath173 as obtained from experimental mass differences @xcite of neighboring nuclei through a symmetric five - term formula . these values of the pairing gaps are practically the same for protons and neutrons , namely 4.5 mev for @xmath1c , 3.13 mev for @xmath2 mg , 2.88 mev for @xmath3si , and 2.17 mev for @xmath4s . since these isotopes are relatively light , these pairing gaps are too large because shell effects are not taken into account in the mass formula . the larger pairing gaps tend to give smaller self - consistent deformations . for @xmath1c and @xmath4s we obtain spherical equilibrium shapes in agreement with their vibrational experimental spectra . table [ tabledef ] also includes experimental intrinsic charge quadrupole moments @xcite . for @xmath2 mg and @xmath3si , whose experimental spectra show a rotational band that confirms their deformed shape , the self - consistent values of the quadrupole moment are in better agreement with experiment for the smaller pairing gap values . for this reason , in what follows we focus on results obtained for @xmath173 = 1 mev . values of @xmath174 @xmath175 and @xmath176= 52.15 @xmath175 are reported for @xmath1c and @xmath4s respectively in @xcite . however , in these two isotopes a ground - state rotational band is by no means recognizable @xcite , implying that their intrinsic quadrupole ground - state values should be zero and that the ones appearing in @xcite do actually correspond to their vibrational 2@xmath7 states . the interpretation of @xmath1c in terms of a 3@xmath21 structure is gaining more and more experimental support @xcite . obviously this picture is not reducible to a single determinantal function . however , we note the fact that the hf+bcs energy which we obtain as a function of quadrupole deformation shows a shallow minimum around @xmath22 = 0 for @xmath1c , and this is consistent with that picture . in any case , the final results for pv asymmetries are not very sensitive to the nuclear deformation . this was tested by computing the pv asymmetry for two different shapes of @xmath4s , spherical ( @xmath177 0 @xmath175 , which is the self - consistent deformation ) and prolate ( @xmath177 50 @xmath175 , which has been fixed by means of a quadrupole constraint in the hf calculation ) . the results for the two cases were found to be extremely similar , which shows that our predictions are quite independent of the nuclear shape as long as the corresponding self - consistent densities are properly used in the calculations . in the figures to follow we present the results of isospin mixing for the four isotopes under study , using a sly4 hf+bcs mean field with @xmath173=1 mev . for comparison , although this is not the primary focus of the present study , we also show results with / without strangeness present , and accordingly a few words are in order concerning what should be regarded to be a sensible value for @xmath178 . the experimental value for @xmath96 in eq . 
( [ str_para_1 ] ) is still evolving , given the appearance of new experiments , more detailed analyses of the combined experimental results ( see _ e.g. , _ @xcite ) and considerations of complications such as isospin mixing in the nucleon @xcite and @xmath81he @xcite and the role of two - photon exchange @xcite . presently , from pv electron scattering measurements involving @xmath179h and @xmath81he , the value of @xmath96 is consistent with zero , and accordingly in the figures we show isospin - mixing results computed with @xmath180 . the strongest constraint on electric strangeness to date comes from the happex - he experiment at jlab @xcite which yielded the result @xmath181 at @xmath1821.4 @xmath11 ( corresponding to @xmath183 ( gev / c)@xmath184 ) , which translates into a range for the electric strangeness parameter used in our parametrization ( see eqs . ( [ str_para_1 ] ) and ( [ str_para_2 ] ) ) of @xmath185 ; to be conservative we have simply added statistical and systematic errors here . accordingly , in the present study , when referencing our new results to existing estimates of electric strangeness we have adopted the two limit values @xmath186 as a rough measure when strangeness is present , and @xmath180 when it is not all three of them are essentially consistent with existing knowledge on electric strangeness . of course , if ( when ) the amount of electric strangeness is better known from pv electron scattering studies of @xmath179h and @xmath81he the present analysis can also easily be updated , both via more refined parametrizations of @xmath110 and in concert with the isospin mixing effects that provide the main focus of the present work . with reference to the results shown below , one should note that the happex - he pv measurements have been performed in comparable low-@xmath12 kinematical regions . for direct comparison with @xmath163 we use the @xmath178 of eq . ( [ gamma_s ] ) . note that when @xmath96 is negative the effects of isospin mixing and strangeness tend to cancel in the low-@xmath12 region ( @xmath187 1 @xmath11 ) , whereas the opposite would be true were @xmath96 to be positive , which is not ruled out at present . clearly , in the case where no significant electric strangeness is found in the nucleon the curves with isospin mixing and @xmath188 are the appropriate ones . it should also be noted that the strangeness contribution is not very sensitive to details of the nuclear structure ( in particular , the approximation of eq . ( [ gamma_s ] ) , which is the one plotted in the graphs , is completely independent of the nuclear structure ) , whereas the isospin contribution does depend on the details of the nuclear structure . for magnetic strangeness ( see eqs . ( [ str_para_1 ] ) and ( [ str_para_2 ] ) ) following @xcite we have assumed that @xmath189 , although the effects in this case are very small and choosing any value that is consistent with present experimental knowledge would lead to negligible differences . we now begin our discussions of some of the main results of our work , focusing here on an exploration of the basic assumptions made in the present approach , using the case of @xmath3si to illustrate the various effects under study . in fig . [ gamma_iso_strange_si28 ] the left - hand panel shows the total pv asymmetry deviation @xmath35 using the form factor ratio in eq . 
( [ ffpn ] ) , together with the isospin mixing contribution @xmath163 and the strangeness contribution @xmath178 discussed above ( as well as below , where the full set of nuclei are inter - related ) . discussion of the right - hand panel , which shows the fom as well as a comparison of plane- and distorted - wave results , is also postponed to later where results for the full set of nuclei are presented . at that point the `` doability '' of measurements of elastic pv electron scattering is discussed in a little more detail . however , before proceeding with those general results , let us comment on some of the specifics , using the case of @xmath3si as prototypical . first , the effect of using different values for the pairing gap in obtaining the asymmetry deviation is shown in fig . [ gamma_pairing_si28 ] , where one can see that the results change very little , even for the larger pairing gap values , as long as the proton and neutron gap values are the same . in conclusion , our results on the pv asymmetry , as shown in the figure , are very stable against changes in the proton and neutron pairing gaps , provided both have the same value . if the proton and neutron pairing gaps were to be notably different , as in one of the examples of fig . [ gamma_pairing_si28 ] , this would lead to a very different structure of the proton and neutron distributions which translates into a different shape for the pv asymmetry deviation . secondly , let us discuss the sensitivity of the results to the use of different skyrme interactions . in fig . [ gamma_forces_si28 ] results for @xmath3si are shown for three different choices , sly4 , sk3 and sg2 . the sly4 force @xcite is an example of a recent parametrization and sk3 @xcite is an example of a simple and old force . the sg2 force @xcite has also been widely used in the literature and provides a good description of bulk and spin - isospin nuclear properties as well . clearly in the low-@xmath12 region of interest in the present work there is very little sensitivity to the choice of force . the small spread in @xmath35 seen in the figure may be taken as a sort of theory uncertainty with which other competing effects can be compared . thirdly , consider the spin - orbit contribution which was introduced as a correction in eq . ( [ operatorm ] ) . while expected to be a small correction , it is important to make sure that this contribution does not confuse the interpretation of the pv asymmetry in terms of isospin mixing . typically when treating parity - conserving elastic electron scattering the spin - orbit contribution is dominated by isoscalar effects and is known to provide a small correction . however , since the isospin mixing involves a delicate interplay between isoscalar and isovector matrix elements and since the isovector spin - orbit contribution in particular may be sizable ( see eq . ( [ operatormso ] ) ; the isovector / isoscalar form factor ratio there is @xmath190 / [ 2 g_m^{(0 ) } - g_e^{(0 ) } ] \cong 11 ) , this contribution has also been included in the present work for completeness . results obtained for the pv asymmetry deviation in @xmath3si performed with and without the spin - orbit correction are almost indistinguishable for momentum transfers lower than about 3.5 @xmath11 , except near the diffraction minima where the spin - orbit contribution becomes ( fractionally ) significant and in the dips where @xmath191 is small anyway .
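the quoted factor of about 11 can be checked at zero momentum transfer from the static nucleon moments alone ; the snippet below does that arithmetic under the convention g^(0,1) = ( g^p ± g^n ) / 2 , which is an assumption about the normalization used in eq . ( [ operatormso ] ) .

```python
# static (zero momentum transfer) values: g_e^p = 1, g_e^n = 0, g_m^p = mu_p, g_m^n = mu_n
mu_p, mu_n = 2.793, -1.913

# isoscalar (0) and isovector (1) combinations, convention g^(t) = (g^p +/- g^n) / 2
g_e0, g_e1 = 0.5, 0.5
g_m0 = 0.5 * (mu_p + mu_n)
g_m1 = 0.5 * (mu_p - mu_n)

ratio = (2.0 * g_m1 - g_e1) / (2.0 * g_m0 - g_e0)
print(round(ratio, 2))   # about 11.1, consistent with the value quoted above
```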
one concludes that , at least in those regions away from the peaks seen in @xmath191 , one should not expect the spin - orbit contributions to lead to confusion when attempting to extract the isospin - mixing behaviour of the pv asymmetry . fourthly , a comparison between previous shell - model calculations and our results is now in order . by examining , for instance , figs . 3 or 7 in dds together with the present results we immediately see that the effects of isospin mixing are considerably larger here . in the shell - model calculation of dds , the value of the diagonal density matrix elements for those states below the active shell was the maximum possible occupation of the states in the isoscalar case and zero in the isovector case . for the states in the active shell , only the sum of the density matrix elements was fixed , but not the individual values . all the off - diagonal density matrix elements vanished in the shell - model calculation , but not in the hf calculation for the reasons mentioned above . these off - diagonal matrix elements are the main contributors to the total isovector form factors obtained in the hf case . in particular , it is of interest to trace back which contributions of the isovector spherical density matrix elements ( eq . ( [ density ] ) ) contain the largest major shell mixings . to illustrate this analysis we show in table [ multipolar ] the spherical isovector density matrix elements ( diagonal and off - diagonal ) for different ( @xmath192 ) contributions of @xmath3si . it is found that the off - diagonal isovector density between the harmonic oscillator levels 0d@xmath193 and 1d@xmath193 , _ i.e. , _ @xmath194 , has the largest value , followed by @xmath195 mixings in p@xmath196 and p@xmath197 ( @xmath198 and @xmath199 respectively ) . all of these are normalized to the value of the largest element , @xmath200 . the relative weight of each off - diagonal contribution to the total isovector form factor can change as the momentum transfer varies , and therefore it is necessary in this case to compare the quantities @xmath201 . these are also shown in the table for @xmath12 = 0.1 , 0.5 and 1 @xmath11 . since the coulomb monopole operator between two s.h.o . wave functions @xmath202 is proportional to @xmath12 for off - diagonal contributions , they tend to vanish as @xmath203 . but for @xmath204 , although still relatively small , off - diagonal contributions recover their prominent status determined by the densities . now let us turn to general results at low momentum transfers shown in figs . [ gamma_iso_strange_si28 ] and [ gamma_iso_strange_c12][comp_gammai_dw ] for the full set of four nuclei . in all cases the left - hand panels show results for the total pv asymmetry deviation @xmath191 in pwba for three values of electric strangeness : results with no electric strangeness ( solid curve ) , with @xmath205 ( labeled @xmath206 ) and with @xmath207 ( labeled @xmath208 ) . the strangeness contribution itself ( with @xmath209 ) is shown for reference as a dotted curve . in the right - hand panels the @xmath191 obtained in plane - wave and distorted - wave cases are compared ( solid dark and light lines , respectively ) and the fom obtained either at a fixed scattering angle of @xmath210 = 10@xmath211 ( in pwba ) or at fixed incident energy @xmath212 gev ( in dw ) ( solid and dashed lines , respectively ) is presented . 
the former gives a region around the peak where the fom reaches reasonable values for the pv deviation to be measured , whereas the latter just shows that the smaller the scattering angle , the larger the fom . when @xmath213 the fom with fixed scattering angle behaves as @xmath214 , whereas the fom with fixed incident energy behaves as a constant , this different behavior for @xmath213 being apparent in the figures . to set the scale of `` doability '' we refer back to our earlier discussions : the ability to measure the pv asymmetry is characterized ( at least ) by the fom and at constant angle this increases in going from light nuclei such as @xmath81he to a case such as @xmath4s . using basic input concerning practical beam currents , electron polarizations , detector solid angles and acceptable running times , in past work ( see dds and also the discussions in sec . 3.5.2 of @xcite ) , the lower limit was typically chosen to be 0.01 , _ i.e. , _ giving rise at least to a 1@xmath172 change with respect to the reference ( standard model ) value of the total pv asymmetry . in other words , if something affects the pv asymmetry at ( say ) the few percent level it can be regarded to be accessible with present technology . indeed , for elastic pv scattering from he 4% has already been reached @xcite and upcoming experiments such as that on pb aim for even higher precision . said another way , the peak value of the fom increases with nuclear species faster than @xmath80 , while the practical luminosity decreases roughly as @xmath215 , making the peak fom @xmath216 increase by roughly a factor of two in going from @xmath81he ( see @xcite ) to @xmath4s , and thus the `` doability '' actually increases in going from light to heavier nuclei . accordingly , in the present study in the figures we have shown where @xmath191 falls below the 1% level ( namely , a bit more challenging than measurements that have already been performed ) to indicate the rough division between what is presently within reach for pv electron scattering and what will have to wait for future advances in technology ; specifically , in the figures we have indicated with shading where the effects fall below 0.01 and are therefore likely to be inaccessible for the foreseeable future . table [ kinematicschoice ] shows the intervals of momentum transfer ( in @xmath11 ) appropriate for the measurement of the isospin - mixing part of the pv asymmetry deviations ( or , in other words , when @xmath180 ) . these intervals have been chosen so that the deviations are higher than 0.01 and at the same time the fom at fixed angle is no more than one order of magnitude lower than the maximum value reached at the peak . they are equivalently given in terms of incident electron energies ( in mev ) . in the case of fixed incident energy , the intervals transform into scattering angle ranges , also shown in the table ( in degrees ) for @xmath212 gev . these angles are never smaller than 5@xmath211 , which can be considered an experimental lower bound . as a rough estimate , one can say that in the upper half of the intervals the value of the fom is very similar no matter what the combination of incident energy and scattering angle used to get the correct transfer momentum . note , furthermore , that the fom is actually only the naive measure ( i.e. 
, it characterizes @xmath217 , as discussed above ) , whereas , with the isospin - mixing effects increasing with @xmath12 , a somewhat large value of momentum transfer will actually be the optimal one for experimental study . that is , a compromise must be reached to choose a momentum transfer range where both the pv asymmetry deviation and the fom are as large as possible . given the structure of the fom at fixed angle , decreasing with the momentum transfer due to its cross section dependence , the region of interest corresponds to the first bump of the fom graph , _ i.e. , _ between 0 and 1.5 @xmath11 . within this transfer momentum range , one has to find a more restrictive region where the pv asymmetry deviation is large enough to be measurable . the present work is not intended to be a detailed study of experimental possibilities and so we only use the fom for general guidance . in summary , everything above 10@xmath218 in the figures can be considered to be measurable , while the shaded region below this value will be experimentally more challenging . in figs . [ gamma_iso_strange_si28 ] and [ gamma_iso_strange_c12][gamma_iso_strange_s32 ] we show in the left - hand panel the pv asymmetry deviations for the three choices of electric strangeness , @xmath219 0 and + 1.5 , corresponding roughly to the presently known range of acceptable values . clearly for positive versus negative electric strangeness the interplay between strangeness and isospin mixing can be quite different : for negative @xmath110 the two effects tend to cancel , whereas for positive @xmath110 they interfere constructively . in the former case this makes the predicted @xmath191 small , but still potentially accessible , especially in the region at or slightly above @xmath220 @xmath11 . in contrast , for positive electric strangeness the asymmetry deviation typically rises above the 10% value in regions where the fom is peaking clearly a significant modification of the standard model pv asymmetry . one presumes that the @xmath81he case is the best suited to studies of electric strangeness , since isospin mixing is predicted to be much smaller then ; however , the interplay of isospin mixing and strangeness seen here for the four nuclei under investigation suggests that these cases could provide information not only about the former effect , but also the latter . in figs . [ gamma_iso_strange_si28 ] and [ gamma_iso_strange_c12][gamma_iso_strange_s32 ] we show in the right - hand panel and in a reduced transfer momentum range the isospin - mixing contribution to the asymmetry deviation computed within dw from eq . ( [ asymmetryratio_dw ] ) and once again in pwba . the main effect of the dw calculation is to smooth the divergence of the pwba asymmetry deviation by the appearance of a double - bumped structure . out of the region of the peaks , the differences between pwba and dw are seen to be negligible . finally , in order to emphasize how the isospin contribution to the pv asymmetry changes as the atomic number increases , we show results from a fully distorted calculation of it in fig . [ comp_gammai_dw ] for the four nuclei under study . we restrict ourselves to the momentum transfer region of most interest for future experiments , as suggested in table [ kinematicschoice ] , and the asymmetry deviation is shown in a non - logarithmic scale . 
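the momentum - transfer windows collected in table [ kinematicschoice ] translate directly into beam - energy windows at the fixed 10 - degree angle , or into angular windows at the 1 gev beam energy used there , through the elastic relation q ≃ 2ε sin ( θ / 2 ) for ultrarelativistic electrons with recoil neglected ; the sketch below reproduces , for example , the carbon endpoints ( small residual differences reflect the no - recoil approximation assumed here ) .

```python
import numpy as np

HBARC = 197.327  # mev fm

def energy_from_q(q_fm, theta_deg):
    """beam energy [mev] giving momentum transfer q [fm^-1] at a fixed angle (no recoil)."""
    return q_fm * HBARC / (2.0 * np.sin(np.radians(theta_deg) / 2.0))

def angle_from_q(q_fm, energy_mev):
    """scattering angle [deg] giving momentum transfer q [fm^-1] at a fixed beam energy."""
    return np.degrees(2.0 * np.arcsin(q_fm * HBARC / (2.0 * energy_mev)))

# carbon window from table [kinematicschoice]: 0.74 <= q <= 1.42 fm^-1
for q in (0.74, 1.42):
    print(f"q = {q:4.2f} fm^-1 -> "
          f"energy at 10 deg: {energy_from_q(q, 10.0):6.0f} mev , "
          f"angle at 1 gev: {angle_from_q(q, 1000.0):5.2f} deg")
```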
as can be seen in the figure , the isospin contribution in this region of momentum transfer becomes higher as the atomic number increases and typically @xmath191 goes from a few percent for carbon to well over 10% for the heavier cases . it should be noted that this figure shows effects from isospin mixing alone : if strangeness contributions of order those discussed above were to be present then the total would be modified . for instance , if positive electric strangeness contributions consistent with present experimental limits were present in addition to the isospin - mixing effects discussed in the present work , then the total asymmetry deviations would be roughly twice those seen in the figure , namely , comparatively large effects would be observed . in the present work a new study has been undertaken of the effects expected from isospin - mixing in nuclear ground - state wave functions on elastic parity - violating electron scattering at momentum transfers extending up to about 1.5 @xmath11 . four n = z 0@xmath7 nuclei have been considered , @xmath1c , @xmath2 mg , @xmath3si , and @xmath4s , each expected to be very close to eigenstates of isospin with @xmath5 in their ground states . however , as first discussed in @xcite ( dds ) , the coulomb interaction occurs asymmetrically between pp and pn / nn nucleon pairs in the nucleus , thereby giving rise to small isospin mixing and thus the nuclear ground states considered here have small admixtures with t@xmath2210 . while such effects are essentially negligible in the parity - conserving cross section , they can play a measurable role in the parity - violating asymmetry , and accordingly , whether the focus is placed on isospin mixing itself or on how these effects may confuse interpretations of the pv asymmetry in terms of standard model tests or with respect to strangeness content in the weak neutral current , it is important to evaluate their influence . in the older work of dds a limited - model - space shell model was employed to estimate the isospin mixing and the mixing via the coulomb interaction was handled perturbatively via a simple two - level approximation . furthermore , in the study of dds the idea of using pv elastic electron scattering to determine ground - state neutron distributions as in the case of @xmath9pb ( which forms the basis of the prex experiment ) was put forward ; here the focus has been limited to a few special n = z nuclei and n@xmath221z cases such as lead have not been re - considered . in the present work a self - consistent axially - symmetric mean - field approximation with density - dependent effective two - body skyrme interactions , including coulomb interactions between pp pairs , has been used in direct determinations of the ground state wave functions . the small differences between the proton and neutron density distributions thereby obtained yield both isoscalar and isovector ground - state coulomb monopole matrix elements and produce modifications in the pv asymmetry from the model - independent result obtained in the absence of isospin - mixing and strangeness contributions . additionally , the effects of pairing in this mean - field approximation have also been investigated , as have effects from strangeness contributions in the single - nucleon form factors and from subtle spin - orbit contributions in the coulomb monopole operators . 
* in the present work one observes considerably larger effects from isospin mixing than were found in dds , especially since here important matrix elements both diagonal and off - diagonal are naturally included , whereas in the earlier study the restriction of the shell - model space to a single major shell yielded a special constraint on the @xmath12-dependences of the isovector matrix elements .
* specific influences of nuclear dynamics ( different forces , different pairing gaps ) and of subtleties in the current operators ( spin - orbit effects ) were investigated and seen not to affect the pv asymmetry at low-@xmath12 significantly .
* results using either plane or distorted electron waves were obtained and , for the relatively light nuclei considered in the present work , their differences were seen to be small at low momentum transfers except in the vicinity of a diffraction minimum where the cross section is also small .
* kinematic ranges where potential future measurements might be undertaken are discussed by studying both the deviations in the pv asymmetry ( the differences seen with / without isospin mixing ) and the experimental figure - of - merit . we have shown that the isospin mixing effects considered in this work will have a measurable effect on the asymmetries , even for the light nuclei considered .
* furthermore , in going from the lightest n = z nuclei to heavier cases one sees that the asymmetry deviations increase making the isospin - mixing effects all the more evident .
* in exploring the interplay between the isospin - mixing effects and those effects that may arise from electric strangeness contributions one sees an interesting constructive / destructive interference scenario : with positive electric strangeness the two contributions add at low momentum transfers , whereas when negative they subtract . presently the sign of the electric strangeness form factor is unknown ( the form factor is , in fact , consistent with zero ) and so such interferences may provide information on isospin mixing , namely the main focus of the present work , but also on strangeness .
this work was supported by ministerio de ciencia e innovación ( spain ) under contracts no . fis2005 - 00640 and no . fis2008 - 01301 . thanks ministerio de ciencia e innovación ( spain ) for financial support . j.m.u . acknowledges support from intas open call grant no 05 - 1000008 - 8272 , and ministerio de ciencia e innovación ( spain ) under grants fpa-2007 - 62616 and fpa-2006 - 07393 , and ucm and comunidad de madrid under grant grupo de física nuclear ( 910059 ) . this work was also supported in part ( twd ) by the u.s . department of energy under contract no . de - fg02 - 94er40818 . twd also wishes to thank ucm - gruposantander for financial support at the universidad complutense de madrid .
table [ tabledef ] ( self - consistent intrinsic proton quadrupole moments for the three pairing - gap choices , with the experimental values in the last column ) :

isotope | @xmath223 = 0 | @xmath223 = 1 mev | @xmath223 from mass diff . | experiment @xcite
@xmath1c | @xmath2240 | @xmath2240 | @xmath2240 | -
@xmath2 mg | 56.67 | 54.70 | 38.54 | 58.1
@xmath3si | -45.35 | -43.41 | -28.62 | -57.75
@xmath4s | @xmath2240 | @xmath2240 | @xmath2240 | -

table [ multipolar ] ( spherical isovector density matrix elements of @xmath3si for each ( l , j , n , n ' ) , together with the quantities @xmath201 at @xmath12 = 0.1 , 0.5 and 1 @xmath11 , normalized as described in the text ; remaining header placeholders from the extraction : @xmath225 , @xmath226 , @xmath227 , @xmath228 , @xmath229 ) :

l | j | n | n ' | @xmath225 | @xmath230 = 0.1 @xmath231 | @xmath230 = 0.5 @xmath231 | @xmath230 = 1 @xmath231
0 | 1/2 | 0 | 0 | 0.058 | *0.208* | 0.229 | 0.145
0 | 1/2 | 0 | 1 | 0.168 | 0.005 | 0.136 | *0.346*
0 | 1/2 | 0 | 2 | 0.052 | 0.000 | 0.005 | 0.048
0 | 1/2 | 1 | 1 | 0.028 | 0.100 | 0.079 | 0.024
0 | 1/2 | 1 | 2 | *0.203* | 0.011 | *0.244* | 0.304
1 | 1/2 | 0 | 0 | 0.093 | *0.335* | 0.309 | 0.078
1 | 1/2 | 0 | 1 | *0.349* | 0.013 | *0.329* | *0.555*
1 | 1/2 | 0 | 2 | 0.048 | 0.000 | 0.006 | 0.048
1 | 1/2 | 1 | 1 | 0.093 | 0.330 | 0.217 | 0.047
1 | 1/2 | 1 | 2 | 0.018 | 0.001 | 0.023 | 0.018
1 | 3/2 | 0 | 0 | 0.184 | *0.662* | *0.610* | 0.016
1 | 3/2 | 0 | 1 | *0.628* | 0.024 | 0.592 | *1.000*
1 | 3/2 | 0 | 2 | 0.137 | 0.000 | 0.018 | 0.137
1 | 3/2 | 1 | 1 | 0.186 | 0.659 | 0.434 | 0.094
1 | 3/2 | 1 | 2 | 0.031 | 0.002 | 0.040 | 0.032
2 | 3/2 | 0 | 0 | 0.044 | *0.156* | *0.119* | 0.007
2 | 3/2 | 0 | 1 | *0.061* | 0.003 | 0.061 | *0.060*
2 | 3/2 | 0 | 2 | 0.008 | 0.000 | 0.001 | 0.007
2 | 3/2 | 1 | 1 | 0.001 | 0.004 | 0.002 | 0.000
2 | 3/2 | 1 | 2 | 0.002 | 0.000 | 0.002 | 0.001
2 | 5/2 | 0 | 0 | 0.142 | 0.506 | 0.386 | 0.024
2 | 5/2 | 0 | 1 | *1.000* | 0.045 | *1.000* | *0.986*
2 | 5/2 | 0 | 2 | 0.048 | 0.000 | 0.008 | 0.044
2 | 5/2 | 1 | 1 | 0.284 | *1.000* | 0.541 | 0.075
2 | 5/2 | 1 | 2 | 0.379 | 0.003 | 0.049 | 0.021

table [ kinematicschoice ] ( intervals appropriate for the measurement of @xmath163 : momentum transfer in @xmath11 , incident energy @xmath233 in mev at @xmath210 = 10@xmath211 , and scattering angle @xmath210 in degrees at @xmath212 gev ) :

isotope | q ( @xmath11 ) | @xmath233 ( mev ) | @xmath210 ( degrees )
@xmath1c | 0.74 @xmath232 q @xmath232 1.42 | 838 @xmath232 @xmath233 @xmath232 1607 | 8.38 @xmath232 @xmath210 @xmath232 16.07
@xmath2 mg | 0.51 @xmath232 q @xmath232 1.10 | 577 @xmath232 @xmath233 @xmath232 1245 | 5.77 @xmath232 @xmath210 @xmath232 12.45
@xmath3si | 0.48 @xmath232 q @xmath232 1.08 | 543 @xmath232 @xmath233 @xmath232 1222 | 5.43 @xmath232 @xmath210 @xmath232 12.22
@xmath4s | 0.45 @xmath232 q @xmath232 1.05 | 509 @xmath232 @xmath233 @xmath232 1188 | 5.09 @xmath232 @xmath210 @xmath232 11.88
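the q - dependence pattern visible in table [ multipolar ] ( diagonal elements dominating at very low q , off - diagonal ones recovering weight as q grows ) follows from the radial monopole overlaps between spherical harmonic oscillator states ; the sketch below evaluates two of them numerically for the d - states discussed above . the oscillator length and the bare j_0 ( qr ) overlap ( no nucleon form factor , no spin - orbit correction ) are simplifying assumptions for illustration .

```python
import numpy as np
from math import factorial, gamma
from scipy.integrate import quad
from scipy.special import genlaguerre, spherical_jn

B = 1.8  # oscillator length [fm], illustrative assumption

def r_nl(n, l, r, b=B):
    """normalized radial s.h.o. wave function, with int_0^inf r_nl(r)^2 r^2 dr = 1."""
    norm = np.sqrt(2.0 * factorial(n) / (b**3 * gamma(n + l + 1.5)))
    x = (r / b) ** 2
    return norm * (r / b) ** l * np.exp(-0.5 * x) * genlaguerre(n, l + 0.5)(x)

def monopole_overlap(n1, n2, l, q, b=B):
    """<n1 l | j_0(q r) | n2 l> by quadrature (bare operator, illustration only)."""
    integrand = lambda r: r_nl(n1, l, r, b) * spherical_jn(0, q * r) * r_nl(n2, l, r, b) * r**2
    value, _ = quad(integrand, 0.0, 15.0 * b, limit=200)
    return value

for q in (0.01, 0.1, 0.5, 1.0):              # fm^-1
    diag = monopole_overlap(0, 0, 2, q)      # diagonal 0d - 0d
    offd = monopole_overlap(0, 1, 2, q)      # off-diagonal 0d - 1d
    print(f"q = {q:4.2f} fm^-1   <0d|j0|0d> = {diag:+.4f}   <0d|j0|1d> = {offd:+.4f}")
```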
the influence of nuclear isospin mixing on parity - violating elastic electron scattering is studied for the even - even , @xmath0 nuclei @xmath1c , @xmath2 mg , @xmath3si , and @xmath4s . their ground - state wave functions have been obtained using a self - consistent axially - symmetric mean - field approximation with density - dependent effective two - body skyrme interactions . some differences from previous shell - model calculations appear for the isovector coulomb form factors which play a role in determining the parity - violating asymmetry . to gain an understanding of how these differences arise , the results have been expanded in a spherical harmonic oscillator basis . results are obtained not only within the plane - wave born approximation , but also using the distorted - wave born approximation for comparison with potential future experimental studies of parity - violating electron scattering . to this end , for each nucleus the focus is placed on kinematic ranges where the signal ( isospin - mixing effects on the parity - violating asymmetry ) and the experimental figure - of - merit are maximized . strangeness contributions to the asymmetry are also briefly discussed , since they and the isospin mixing contributions may play comparable roles for the nuclei being studied at the low momentum transfers of interest in the present work .
immunoglobulin g4-related sclerosing cholangitis ( igg4-sc ) is characterized by lymphoplasmacytic tissue infiltration with a predominance of igg4-positive plasma cells , increased igg4 production , leading to bile duct wall thickening and good response to steroid therapy . it was regarded as the most frequent extrapancreatic manifestation of type 1 autoimmune pancreatitis , present in over 70 percent of such patients . now igg4-sc belongs to the spectrum of immunoglobulin g4-related disease ( igg4-rd ) , which encompasses a large number of medical conditions that share similar histopathological features . as an independent type of igg4-rd the most often affected organs are pancreas and biliary duct . likewise , many igg4-sc patients have other organs involved , such as pancreas and major salivary glands ( submandibular gland , parotid ) . the clinical and pathological characteristics of isolate igg4-sc had been described ; however , rare investigations investigated the differences of clinical characteristics of igg4-sc between with and without other organ affected . does the igg4-sc patient without other organ involved has a mild manifestation , good response to steroid therapy ? it is still unclear . considering the susceptibility of pancreas and salivary glands and the screening facility , we focused on its differential characteristics between igg4-sc patients with or without these glands affected . a series of patients with igg4-sc in the period from january 2006 to december 2015 at our hospital and who did not receive any therapy before were included . the diagnosis were based on recognized international criteria , including the hisort criteria and the japan pancreas society criteria . each patient accepted magnetic resonance cholangiopancreatography ( mrcp ) and endoscopic retrograde cholangiopancreatography ( ercp ) for biliary and pancreatic ducts , ultrasound and thin - layer computed tomography for pancreas and major salivary glands . for mass or diffusely enlarged glands , biopsy was performed . the features of pancreas , submandibular gland , and parotid affected included a focal mass or a diffusely enlarged gland ( focal or diffuse duct stricture ) , a lymphoplasmacytic infiltrated within the tissues and a subsequent prompt response to steroid therapy . aimed to investigate the differential characteristics of igg4-sc with or without these glands affected , these patients were divided into 3 groups : single lesion group ( without extra - biliary involved ) , double lesions group ( 1 extra - biliary organ involved ) , and multiple lesions group . the initial corticosteroid therapy , which was prednisone 40 mg / day orally for 4 to 6 weeks then taper by 5 mg / week for total of 13 weeks of treatment with regular monitoring of biochemistry and repeated imaging were given to all included patients . there was no steroid - sparing agents ( azathioprine , methotremate , cyclophosphamide ) for these patients , and no use of maintenance therapy in the initial therapy in the study . ( 1 ) disease response was defined as symptomatic , biochemical and radiologic improvement after the commencement of treatment . ( 2 ) disease remission referred to the maintenance of the improvements after cessation of treatment . ( 3 ) disease relapse was defined as recurrences of disease activity after achievement of remission and cessation of treatment . ( 4 ) failed weaning was defined as an inability to wean steroids completely because of a flare of disease activity . 
clinical information was collected and analyzed including demographics , clinical presentations , igg4 serology levels , imaging features and treatment outcomes . statistical differences were assessed using anova and chi - square tests ( spss 19 , spss , inc , chicago ) . kaplan - meier curves were used to assess differences in relapse - free survival rates between groups . all procedural protocols were approved and supervised by the ethics committee of our hospital , and informed consent was signed by each patient . the study identified 72 igg4-sc patients , including 60 males and 12 females ( the ratio is 5:1 ) . the initial presentation included obstructive jaundice in 59 of 72 patients ( 81.9% ) , whereas 9 ( 12.5% ) presented with abdominal pain alone and the remaining 4 with neither . 56 patients ( 77.8% ) had eventually undergone surgery ; the others had puncture biopsy . among all the igg4-sc patients , 10 patients had only bile duct involved , and the other 62 patients had pancreas involved . in total , 36 patients had a focal pancreatic mass at presentation , and 26 patients had diffuse pancreatic enlargement . pancreatic ductal disease was seen on mrcp and/or ercp in 39 of 62 ( 62.9% ) patients . focal pancreatic duct strictures were found in 22 of 62 ( 35.5% ) patients , whereas diffuse stricturing was seen in 17 of 62 ( 27.4% ) patients . besides the bile duct and pancreas , 12 patients had submandibular gland involved , 9 patients had parotid involved , and 1 patient had both ( table 1 : comparison of clinical features between single and multiple lesions in immunoglobulin g4-related sclerosing cholangitis patients ) . however , 10 patients had submandibular gland mass and 8 patients had parotid gland mass ; the other patients only had diffusely enlarged glands . to compare the differential characteristics between igg4-sc patients with or without other organs affected , the patients were divided into single lesion group , double lesions group , and multiple lesions group . as to the manifestation , the complaint was not more serious with more organs involved , but more complaints were given . the mean number of complaints was 2.9 kinds in the multiple lesions group , whereas it was 1.4 kinds in the single lesion group ( p < 0.01 ) . of the immunoglobulins ( iga , igm , igg ) , the igg4 level was remarkably higher in 68 ( 94.4% ) igg4-sc patients . besides , serum igg4 levels were significantly higher in patients with multiple lesions ( 23458 ± 19402.7 mg / l ) than in those with a single lesion ( 1473 ± 546.7 mg / l , p < 0.05 ) . the ratio of igg4/igg was also higher in patients with multiple lesions : it was ( 24.2 ± 6.5 ) % in the multiple lesions group and ( 19.4 ± 5.1 ) % in the double lesions group , whereas it was ( 12.7 ± 3.7 ) % in the single lesion group ( p < 0.05 ) . there were no significant differences in the alkaline phosphatase level among the 3 groups ( p = 0.11 ) . involvement of the distal bile duct was most common ( 97.2% ) . according to the site of stricture on imaging , type 1 patients account for 72.2% ( 52/72 ) , type 2 for 12.5% ( 9/72 ) , type 3 for 12.5% ( 9/72 ) , and type 4 for 2.8% ( 2/72 ) . the ratio of type 1 cholangiographic classification in the single lesion group was higher , whereas more type 2 and type 3 patients existed in the double lesions group or multiple lesions group ( table 1 ) . the median follow - up from the start of the initial steroid course was 12 months ( range , 6 - 32 months ) . all 72 patients exhibited a disease response within 4 to 6 weeks of starting steroids as defined .
steroids were reduced and stopped after disease remission in 62 of 72 ( 86.1% ) patients after the total treatment period . the remission rate of the single lesion group was higher than that of the multiple lesions group . of the 62 patients who achieved remission , 41 ( 66.1% ) had relapsed by the 6th month after cessation of treatment . further analysis indicated that the more extra - biliary organs involved or the more segments of bile duct involved , the higher the rate of recurrence . the recurrence rate in the multiple lesions group was 83.3% , which is higher than that in the single lesion group ( 20% , p < 0.05 , table 1 ) . the median relapse - free duration was 5.9 months ( range , 0.3 - 25.2 months ) . the relapse - free survival was 20.0 months in the single lesion group , which is longer than that in the double lesions group ( 9.5 months ) or in the multiple lesions group ( 3.1 months , p < 0.05 , fig . 1 : kaplan - meier curve of relapse - free survival in igg4-sc patients ) . igg4-rd is an increasingly recognized immune - mediated condition , and it could involve nearly every anatomic site . patients often present with subacute development of a mass in the affected organ ( e.g. , an orbital pseudotumor , nodular lesions in the parotid gland ) or diffuse enlargement of an organ ( e.g. , the pancreas ) . it is associated with biliary obstruction and increased risk of malignancy , and also involves other organs . however , the different clinical characteristics and steroid response between igg4-sc patients with or without multiple organs affected have not been defined . we screened the bile duct , pancreas , submandibular gland , and parotid because of their susceptibility and the feasibility of screening , and divided all patients into single lesion group , double lesions group , and multiple lesions group . the ages of patients in these 3 groups were similar , with means from 58.8 to 61.9 years ( range 28 - 83 years ) . the clinical symptoms of igg4-sc patients were wide - ranging , manifesting in 1 or more organs synchronously or metachronously , although jaundice was the common presenting symptom . the complaint was not more serious with more organs involved , but more complaints were given . this is reasonable because when more organs were affected , a spectrum of mild localized symptoms to organ damage might occur . the elevated serum level of igg4 is to some extent the diagnostic hallmark of igg4-rd , although it is neither necessary nor sufficient for the diagnosis . in this study , 94.4% of patients showed elevated serum igg4 concentrations ( > 135 mg / dl ) . igg4 > 135 mg / dl in serum , as a cut - off value , demonstrated a sensitivity of 97% and a specificity of 79.6% in diagnosing igg4-rd . measurements of the serum igg4 concentration remain important for screening and evaluation of the disease . the more organs the disease involved , the higher the serum igg4 level in our study . igg4-related sc displays segmental and long strictures of the bile duct tree . in our cohort , the ratio of type 1 cholangiographic classification was 50% in patients with triple or more lesions . this might indicate that the inflammatory igg4-positive plasma cells infiltrate more of the bile duct tree when more extra - biliary organs are involved .
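the relapse - free survival comparison reported above ( and shown in fig . 1 ) follows a standard kaplan - meier / log - rank pattern ; the sketch below shows how such a comparison can be set up in python with the lifelines package . the follow - up times in the data frame are made - up placeholders , not the study data , and the published analysis itself was run in spss .

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import multivariate_logrank_test

# made-up illustrative follow-up data: months to relapse (or censoring) per lesion group
df = pd.DataFrame({
    "months":  [20.0, 25.2, 18.0, 9.5, 7.0, 12.0, 3.1, 2.0, 4.5],
    "relapse": [1,    0,    0,    1,   1,   0,    1,   1,   1],   # 1 = relapsed, 0 = censored
    "group":   ["single"] * 3 + ["double"] * 3 + ["multiple"] * 3,
})

kmf = KaplanMeierFitter()
for name, sub in df.groupby("group"):
    kmf.fit(sub["months"], event_observed=sub["relapse"], label=name)
    print(name, "median relapse-free time:", kmf.median_survival_time_)

# three-group log-rank test, the analogue of the p < 0.05 comparison quoted above
result = multivariate_logrank_test(df["months"], df["group"], df["relapse"])
print("log-rank p-value:", result.p_value)
```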
systemic glucocorticoids , which are well known to induce nonselective apoptosis of lymphocytes , are the first - line approach for the most patients with igg4-rd . the majority of igg4-rd patients respond to glucocorticoids , particularly in early stages of disease . in some subsets of organ disease ( e.g. , pancreatitis ) , although the initial steroid response is excellent , relapses are common after early withdrawal of steroids for igg4-sc patients . considering too many interference factors in the treatment of igg4-sc , we only compared the initial response of glucocorticoids . the dosage and taper scenario were referred to the previous consensus and accordingly given to each patient . in our study , the igg4-sc patients relapse rate was 66.1% . in addition , our results reveal that the more extra - biliary organs involved or the more segments of bile duct involved , the higher rate of recurrence and the shorter relapse - free survival time . this indicates that the response of igg4-sc with multiple organs affected is poorer than that with the single lesions group . the study of hart had also shown that bile duct involvement were independent risk factors of igg4-rd disease relapse . initial experience suggested that steroid - sparing agents and maintenance therapy can maintain long - term remission . azathioprine , mycophenolate mofetil , 6-mercaptopurine , methotrexate , and tacrolimus have all been used as steroid - sparing agents . maintenance therapy refers to continuous low - dose glucocorticoids or any of the steroid - sparing agents following the achievement of remission . our study indicates that if more complaints , higher serum igg4 level and more stricture lesions of biliary tract were found , more organ lesions should be mentioned . moreover , considering the high relapse rate and short relapse - free survival , when more lesions are certified , more aggressive treatment should be given . in conclusion , the igg4-sc patients with multiple organs affected had more complaints , higher serum igg4 levels , and poor response to initial steroids .
abstract : igg4-related sclerosing cholangitis ( igg4-sc ) is a rare biliary manifestation in which many other organs might be affected . the purpose of our study was to investigate the different clinical characteristics and initial steroid response between igg4-sc patients with and without other organs affected . a series of patients with igg4-sc in the period from january 2006 to december 2015 at our hospital were included . the pancreas and major salivary glands were screened , and the initial corticosteroid therapy was given . clinical information was collected and analyzed including demographics , clinical presentation , igg4 serology , imaging features , and treatment outcomes . the study identified 72 igg4-sc patients , including 60 males and 12 females . the mean age was 59.8 years old . among these igg4-sc patients , 10 patients had only bile duct involved , 42 patients had 2 organs involved and 20 patients had multiple organs involved . in patients with multiple organs involved , more complaints were given ( mean 2.9 kinds ) , higher serum igg4 levels were found ( 23458 ± 19402.7 mg / l ) , and more stricture lesions of biliary tract were shown . all 72 patients exhibited a disease response within 4 to 6 weeks of starting steroids . the remission rate in the multiple lesions group was lower ( 60% ) , and the recurrence rate was higher ( 83.3% ) . the relapse - free survival was 20.0 months in the single lesion group , which is longer than that in the multiple lesions group ( 3.1 months , p < 0.05 ) . the igg4-sc patients with multiple organs affected had more complaints , higher serum igg4 levels , and poor response to initial steroids .
galantamine is a long acting , selective and reversible acetylcholinesterase inhibitor that has been a licensed treatment for alzheimer s disease ( ad ) in the usa , across europe and into asia since 2000 . the main source of the pharmaceutical product galantamine ( galan / t / amine ) has been the alkaloid galanthamine ( galan / th / amine ) extracted from plants . galanthamine occurs in several species of the amaryllidaceae family , including galanthus nivalis , leucojum aestivum , lycoris radiate , and narcissus ( daffodil ) spp . however , with the exception of narcissus spp , the source plants are wild flowers not suitable for agricultural exploitation due to limitations in either resources or research , and consequently supplies have been limited . opportunities for producing synthetic galantamine have been explored [ 2 , 3 ] , but this has not proved to be a viable alternative . there is a long - established relationship between the exposure of plants to stress and the production of a vast array of secondary compounds , a proportion of which have medical or other commercial values . in many instances , secondary metabolites are implicated in plant stress amelioration and so tend to increase during exposure to stresses [ 47 ] . for example , lycoris aurea plants exposed to nitrogen stress ( no added n ) show markedly increased levels of the compound in the leaves . to date , however , there has been no attempt to evoke similar stress - induced responses under commercial growing conditions . upland areas within the uk and northern europe are characterized by poor growing conditions brought about by a combination of low temperatures , high rainfall , exposure to wind , thin soils , and a shortage of major nutrients . consequently agricultural production in these areas is generally limited to grassland - based ruminant systems that are currently heavily reliant upon government support payments to be economically viable . however , it has been reported that n. pseudonarcissus grown at altitude may yield higher concentrations of galantamine compared to bulbs grown under lowland conditions . thus growing narcissus spp . for galanthamine production in marginal areas could impose sufficient stress to increase galanthamine synthesis and so offer a novel solution to the issue of constrained galantamine supplies . this approach would simultaneously increase the economic resilience and social sustainability of less favored rural areas . legislative constraints surrounding plowing of long - term grassland ( the land cover accounting for by far the greatest proportion of upland farms ) limit options for traditional cropping however . furthermore , to date it has been the bulbs of narcissus plants that have been used as material for extraction of galanthamine , again requiring soil disturbance . our proof - of - principle study tested the feasibility of an innovative dual - cropping approach to producing plant - derived galanthamine based on integrating n. pseudonarcissus growing into existing marginal pasture and harvesting green above - ground vegetative plant materials rather than bulbs . by cultivating the crop in marginal growing conditions , we seek to exploit the previously observed increase in endogenous galanthamine biosynthesis of n. pseudonarcissus grown under such conditions but also to enhance biosynthesis further through the imposition of interspecies competition with forage grassland plants . 
such an approach could offer a win - win - win scenario whereby i ) ad patients have increased access to a proven treatment , ii ) environmental impacts are minimized , and iii ) traditional farming systems within marginal areas are maintained , with their economic viability increased . lines of n. pseudonarcissus cv . carlton ( size < 10 ; grampian growers , montrose , uk ) were sown into pasture at each of four different sites at the pwllpeiran upland research centre , wales , ranging from 253 m a.s.l . to 430 m a.s.l . at each site , bulbs were planted at three different intervals : 5 cm , 10 cm , and 15 cm apart . each line of n. pseudonarcissus was 8 m long , and lines were spaced 1 m apart . planting lines were created using a single bolt - on tooth ( 15 cm × 10 cm wide ) on the front bucket of a mini - digger ( 8026 cts ; jcb ltd , rocester , staffordshire , uk ) . bulbs were planted at the prescribed densities by hand , with the tops of the bulbs the treatment distance apart . a 500 g soil sample was collected for each site by bulking 10 soil cores collected at random between the lines of n. pseudonarcissus . sampling of the n. pseudonarcissus biomass was undertaken when the majority of flowers at a site reached the gooseneck growth stage , i.e. , were bent downwards to an angle of approximately 45° but were unopened . the number of n. pseudonarcissus plants growing was counted along a 6 m length in the center of each line . the corresponding growth was then harvested to a height of 3 cm using grass shears . the material cut from each line was weighed to determine fresh matter ( fm ) weight . fifteen leaves and 15 flower stems were then selected from each bag at random for length measurements . to determine dry matter ( dm ) content , a sub - sample was taken from each bag and oven dried to constant weight at 60 °c . a separate sub - sample of approximately 100 g was taken for subsequent analysis to determine alkaloid concentrations . leaf sections of approximately 100 mg fm were homogenized in 500 µl of methanol adjusted to ph 8 with 25% ammonia , and then a further 500 µl of methanol was added . the samples were left for at least 5 h and then centrifuged at 13,000 r.p.m . for 1 min . the dry extract was dissolved in 500 µl of mobile phase a ( see below ) prior to analysis by high - performance liquid chromatography . a betasil c18 column ( 150 × 4.6 mm ; particle size 5 µm ) was used ( fisher scientific uk ltd , loughborough , uk ) . analyses were conducted with ultra - violet monitoring at 298 nm using a gradient method . the mobile phases , consisting of 0.1% trifluoroacetic acid in pure water ( mobile phase a ) and acetonitrile ( mobile phase b ) , were filtered through a membrane filter , degassed for 4 min before use , and pumped to the column at a rate of 1 ml / min . the data were collected and analyzed using the chrom quest 5.0 hplc database program ( thermo fisher scientific , cramlington , uk ) . data were analyzed using general analysis of variance with altitude and planting distance as treatment effects ( genstat ( 16th edition ) ; vsn international ltd , hemel hempstead , uk ) . in this context , altitude was used as a collective term for the combination of factors relating to soil characteristics , climatic conditions , and exposure , which potentially influence the degree of environmental stress experienced .
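The statistical analysis above is described as a general analysis of variance in GenStat with altitude and planting distance as treatment effects. As a rough equivalent, the sketch below runs the same two-factor, main-effects ANOVA in Python with statsmodels; the site altitudes and spacings mirror the design, but the galanthamine values are invented placeholders, not data from the study.

```python
# Illustrative two-factor ANOVA mirroring the GenStat analysis described above
# (altitude site and planting distance as treatment effects). Galanthamine values
# are invented placeholders for demonstration only.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.DataFrame({
    "altitude_m": [253, 253, 253, 320, 320, 320, 380, 380, 380, 430, 430, 430],
    "spacing_cm": [5, 10, 15, 5, 10, 15, 5, 10, 15, 5, 10, 15],
    "galanthamine_mg_g_dm": [1.9, 2.1, 2.0, 2.0, 2.2, 2.1, 2.2, 2.3, 2.2, 2.3, 2.2, 2.4],
})

# Treat both factors as categorical, as in the original design (main effects only)
model = ols("galanthamine_mg_g_dm ~ C(altitude_m) + C(spacing_cm)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # F-tests for altitude and planting distance
```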
soil nutrient status across the four sites was variable ( table 1 ) . in terms of the key minerals , the concentrations recorded equate to moderate or high indices for potassium and magnesium , but low or very low indices for phosphorus . the plant counts prior to harvest showed over 80% of the bulbs had successfully established ( table 2 ) , demonstrating that planting under long - term pasture on comparatively poor soils is feasible .
planting distance inevitably had a significant effect on the biomass of herbage harvested ( table 2 ) , but we found no effect of altitude on total dm yield or dm yield per bulb planted . between - altitude differences in leaf and stem length followed a similar pattern to fm yield . it has been shown that the concentration of galanthamine in n. pseudonarcissus can vary between different varieties . the variety carlton is considered to have potential as a commercial source of galantamine due to relatively high concentrations of galanthamine in the bulbs , a large bulb size , and good availability of large volumes of planting stock . galanthamine concentrations in n. pseudonarcissus leaves have been found to be steady until flowering , before decreasing . although higher concentrations of alkaloids could potentially be obtained from leaves at an earlier growth stage than the gooseneck stage , we judged that the total amount of biomass , and thus the total yield of galanthamine , would not be so high . the galanthamine concentrations achieved during the current experiment were substantially higher than those recorded during the earlier study focused on bulbs , and higher than concentrations previously reported for above - ground n. pseudonarcissus biomass . these findings are in keeping with the greater plant - plant competition imposed when growing in grassland eliciting a greater stress response , but further research is required to verify this relationship . by cutting green material , there is potential for a single planting of bulbs to deliver harvests over multiple years . neither altitude nor planting distance had a significant effect on galanthamine concentrations in the harvested material . these results concur with those from an earlier study which found the concentration of galanthamine in other narcissus cultivars to be unaffected by planting depth and density , bulb size , or flower bud removal . thus , overall , the results suggest that higher planting densities , which would favor biomass yield , would maximize galanthamine yield , although monitoring over multiple harvest years would be beneficial to determine whether further nutrient depletion from already poor quality soils becomes a factor over time . in summary , this study has verified the feasibility of establishing n. pseudonarcissus under permanent pasture in upland areas as a means of producing plant - derived galanthamine . further research is now required to verify the commercial viability of this supply route and develop management guidelines that maximize galanthamine yield .
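The argument above treats total galanthamine yield as the product of tissue concentration and harvested biomass, which is why planting densities that favor biomass are expected to maximize yield. The back-of-envelope calculation below makes that relationship explicit; every number in it is a hypothetical placeholder rather than a measurement from this study.

```python
# Back-of-envelope yield calculation implied by the discussion above:
# total galanthamine yield = alkaloid concentration x harvested dry matter.
# All numbers are hypothetical placeholders, not measurements from the study.

concentration_mg_per_g_dm = 2.0   # assumed galanthamine concentration in leaf dry matter
dm_yield_g_per_m_of_row = 120.0   # assumed dry matter harvested per metre of row
row_length_m_per_ha = 10_000      # assumed 1 m row spacing -> 10,000 m of row per hectare

galanthamine_g_per_ha = (concentration_mg_per_g_dm
                         * dm_yield_g_per_m_of_row
                         * row_length_m_per_ha) / 1000.0  # convert mg to g
print(f"indicative yield: {galanthamine_g_per_ha:.0f} g galanthamine per hectare")
```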
many secondary plant compounds are synthesized in response to stressed growing conditions . we tested the feasibility of exploiting this feature in a novel strategy for the commercial production of the plant alkaloid galanthamine . experimental lines of narcissus pseudonarcissus were established under marginal upland permanent pasture at four different sites . over 80% of bulbs successfully established at each site . there was no effect of altitude or planting density on galanthamine concentrations within vegetative tissues , which were higher than anticipated . the results confirm that planting n. pseudonarcissus under grass competition in upland areas could offer a novel and sustainable source of plant - derived galanthamine .
Amateur video captured the chaos from many angles Thursday night when snipers opened fire on police in downtown Dallas. In one video, a hidden gunman emerged from behind the pillars of a building and fired on officers. In another, officers take cover behind their squad cars as gunfire rings out. When the killers were done, five officers were dead, seven more were wounded, and witnesses began sharing the bloodshed over social media. Some of the videos contain graphic content. Witness Ismael Dejesus shot video, broadcast by FOX and CNN, that shows the ambush of several officers. "It looked like an execution, honestly," Dejesus said. "He stood over him after he was already down and shot him three or four more times." WARNING GRAPHIC. Pray for this officers family. He died trying to protect others. Not all cops are bad man 🙏. #Dallas pic.twitter.com/P1Diqp2mWY — matt shapiro (@Mattshapiro17) July 8, 2016 Thursday's protest was not a Black Lives Matter event, according to the Rev. Dominique Alexander, president of the Next Generation Action Network. Alexander said his organization, in partnership with Jeff Hood, organized the march through downtown Dallas as a call for justice for black victims of police shootings. "There is no local chapter of the Black Lives movement," Alexander said. "That's just national rhetoric." He said police were five to 10 feet in front of him when the first shots rang out. "A photographer was shot in the leg," Alexander said. "I don't think she was a professional because she didn't have credentials, but she fell down screaming." As the crowd sprinted away from the scene, Alexander said a Dallas police sergeant charged in to assist the wounded photographer. Other witnesses in downtown Dallas heard multiple gunshots ring out and captured videos of the chaos during a protest. Witness Ismael Dejesus joins @donlemon to share the video he captured of the #Dallas shooting https://t.co/oBmEIv09rk — CNN (@CNN) July 8, 2016 Calvin Johnson, 36, said he knew witnesses near El Centro who said the shooter told them, "Somebody's fixing to get killed tonight -- an officer." They said the shooter then put on a bulletproof vest and shot the DART officer in the back. About 1 a.m., the mayor and police chief arrived at Parkland Memorial Hospital. Shortly after that, a procession of motorcycle police officers arrived in formation, honor guards wearing white gloves. Garland resident Shetamia Taylor, 38, was shot in the back of the leg. She'd gone downtown with her two older sons -- Kavion, 17, and Andrew, 15 -- to educate them about how to peacefully protest. Taylor threw herself on top of Andrew. He was covered in his mother's blood, said older sister Theresa Williams of Dallas. Taylor is in surgery and is expected to make a full recovery. Her teenage sons were together at Hyatt House Dallas/Uptown, where police had urged them to stay put most of the night. Conspiracy theories are rippling through the Black Lives Matter crowd. Several activists say at least one of the shooters was white, prompting widespread speculation that the violence was intended to paint the movement as dangerous. Cherelle Blazer, a Yale-educated environmentalist, said her husband, Walter "Changa" Higgins, and two teenage sons got separated in the chaos that followed the sniper shots. "The boys are in the basement of the Omni Hotel," Blazer said. "Changa went out to try to help people ... after the shots began. He's still out there trying to make sure the other protesters are OK." 
Blazer said the black man, whose image was being flashed on local television stations, is an Open Carry protester, not a shooter. His gun has not been fired and he has a permit. Brianna Mason, 18, was at the protests. She and her friends had to run inside the Greyhound bus station for shelter. "I just hope I don't lose my life tonight," Mason said. RAW VIDEO: Our cameras captured the panic after the first shots were fired in downtown #Dallas Thursday night https://t.co/JXcapjFIHv — WFAA-TV (@wfaachannel8) July 8, 2016 While in the station, Mason said she was thinking about her young daughter at home. Mason and others said they saw police officers shoot into the crowd. "I thought it was unacceptable," she said. "That needs to be brought to attention of people. But the police -- they're just going to change the topic." Outgoing transport from the Greyhound bus station downtown was temporarily halted with the heavy police presence outside. "I'm not going to send a bus out and make it a moving target in all this mess," a voice announced over the loudspeaker. Inside, Shantay Johnson was in tears. She'd left home at 4 this morning and just wanted to get back there. She said she'd been using the WiFi at El Centro College when the shots rang out -- "pop, pop, pop," she said. "Then everybody just ran." She did, too. Police had their guns drawn, just waiting, at Lamar and Commerce, across from the bus station. Inside, nerves were jittery -- someone dropped a heavy camera on the hard floor and people instinctively ducked at the sound. Lasylvia Knight, 27, sitting on a ledge near the George Allen Courts Building, pointed at Dealey Plaza when asked why someone might do this. "Why did they kill JFK at Dealey Plaza?" Knight said. "It's Dallas. It doesn't have to make sense." Lynn Mays was at the scene when the shooting started. He said an officer saved his life. pic.twitter.com/Kc4aXlg9GT — Caleb Downs (@Calebjdowns) July 8, 2016 Jahi Bakari, whose daughter and goddaughter were at last year's McKinney pool party, was about a block away from the federal building when he heard the shots. "It sounded like the Fourth of July," said Bakari, whose daughter was pushed by a police officer at the pool party. He ducked between two parked cars while other protestors ran away. He said Thursday night's shooting could escalate racial issues in the city. "It's going to be race manifested to the ultimate, right here in Dallas," he said. Ben Nelson, a 29-year-old Army veteran, attended such a protest for the first time. He headed for safety as people started running on Commerce Street near Houston Street. Nelson, who served a tour in Afghanistan, said the recent deaths of Sterling and Castile and others "brought back memories of the war." Nelson didn't see the Dallas shooter or any others hurt, he said. DeKanni Smith walked away from an increasingly rowdy crowd on Main Street and Murphy that was marching toward a group of officers shouting about 11 p.m. Thursday. "I'm embarrassed," Smith said. "I came out here to the protest to support in hopes that we could come together. ...The officers who were killed were probably walking with us to keep us safe. I'm disgusted." It was the first protest for Adiryah Esaw, 21, as well. Looking shaken, Esaw said she came after multiple viewings of the recent killings on Facebook. "It is very hard to see that," Esaw said. Of the violence this night in Dallas, Esaw said, "I hate that this happened. It's not a surprise. People are angry. They are tired. They feel there is no other resort. 
People are trying to make change. There's another black body, another hashtag." Esaw, who is black, turned to the three white protesters she met in the run for safety. "He could have been a good cop," she said of the report that an officer had been taken to the hospital. Added Cameron Young, "We can still be against this and not be anti-cop." More than an hour later, Young, Esaw and the others decided to head to safety. In the inky night, they headed to a police car with flashing red and blue lights. Jamaal Johnson says he witnessed bullets from parking garage next to BOA bldg, ricochet off concrete. pic.twitter.com/shhnnW7MKa — Sarah Mervosh (@smervosh) July 8, 2016 The commotion spread to Parkland Hospital where Gary Demerson was in the emergency waiting area to check his mother in for an unknown bite mark. Police pulled up banging on windows yelling "officer down." "Everyone started looking around because we didn't know what was happening," Demerson said. Staff began yelling "code yellow" as the officer was being pulled from a police car, he said. "He looked limp and was falling off the stretcher while a nurse was on him doing CPR," his wife Megan Demerson said. An officer in a bulletproof vest guided a steady arrival of police in patrol cars and civilian cars to a secure area at Parkland just before 1 a.m. It was unclear how many officers were taken to Parkland to be treated. Rashida Ivey said people started running in the opposite direction of the gunshots as soon as they started. pic.twitter.com/FaPod2Dvu4 — Caleb Downs (@Calebjdowns) July 8, 2016 Dallas County hospital district police stood guard at the entrance of the Parkland Hospital emergency room and blocked anyone but patients from entering. Police were also screening visitors going through an alternative entrance. Carlos Benitez came to the Parkland ER with his sister at about 6 p.m. after she was hit by a car. Two hours later, police and a nurse kicked out everyone who wasn't a patient, Benitez said. "Whoever didn't have a wristband was let out," he said. A family friend, Sulma Campos, clutched a bag of Burger King that she was trying to deliver to Benitez's sister. But after she queued for about 20 minutes to see her friend, police denied her access to the waiting room. Hugo Lazo walked away from the hospital in frustration. His pregnant 39-year-old wife was in the ER waiting room experiencing complications. "I want to be there, but I can't," he said. 
||||| [Photo gallery captions: handout images of slain officers Brent Thompson of Dallas Area Rapid Transit, Michael Krol and Patrick Zamarripa, and Reuters photographs of the downtown Dallas crime scene, memorials, church services and protests in Dallas, Baton Rouge, New York and Washington.]
DALLAS - The U.S. military veteran who fatally shot five Dallas police officers was plotting a larger assault, authorities said on Sunday, disclosing how he also taunted negotiators and wrote on a wall in his own blood before being killed. Micah X. Johnson improvised instead and used "shoot-and-move" tactics to gun down the officers during a demonstration on Thursday evening, Dallas Police Chief David Brown told CNN. It was the deadliest day for U.S. law enforcement since the Sept. 11, 2001, attacks. Brown said a search of Johnson's home showed the gunman had practiced using explosives, and that other evidence suggested he wanted to use them against law enforcement. "We're convinced that this suspect had other plans," he said, adding that last week's fatal police shootings of two black men in Minnesota and Louisiana led the 25-year-old Texas shooter to "fast-track" his attack plans. Johnson, a black veteran who served in Afghanistan, took advantage of a spontaneous march that began toward the end of the protest over those killings. Moving ahead of the rally in a black Tahoe SUV, he stopped when he saw a chance to use "high ground" to target police, Brown said. Before being killed by a bomb-equipped robot, Johnson sang, laughed at and taunted officers, according to Brown, telling them he wanted to "kill white people" in retribution for police killings of black people. 
"He seemed very much in control and very determined to hurt other officers," the police chief said. SURPRISE ATTACK Brown said police had been caught off guard when some protesters broke away from Thursday's demonstration, and his officers were exposed as they raced to block off intersections ahead of the marchers. Johnson's military training helped him to shoot and move rapidly, "triangulating" his fire with multiple rounds so that police at first feared they were facing several shooters. Brown defended the decision to use a robot to kill him, saying that "about a pound of C4" explosive was attached to it. He added Johnson scrawled the letters "RB" in his own blood on a wall before dying. "We're trying to figure out through looking at things in his home what those initials mean," the police chief said. The U.S. Department of Defense and a lawyer who represented Johnson did not return requests for information on his military history or the status of his discharge. Several members of Johnson's former Army unit, the 420th Engineer Brigade, exchanged comments on Facebook. "Makes me sick to my stomach," wrote one, Bryan Bols. Speaking at a local hospital, wounded mother Shetamia Taylor sobbed as she thanked police who shielded her and her son as the bullets began to fly. At the Cathedral Shrine of the Virgin of Guadalupe in downtown Dallas, Roman Catholic parishioners gathered on Sunday for their weekly service and to remember the fallen officers. "I would like you to join me in asking: 'Who is my neighbor?'" the Rev. Eugene Azorji, who is black, told the congregation. "Those who put their lives on the line every day to bring a security and peace, they represent our neighbor." A candlelight vigil is due to be held at 8 p.m. on Monday in Dallas City Hall plaza. PROTESTS AND ARRESTS The mass shooting amplified a turbulent week in the United States, as the issues of race, gun violence and use of lethal force by police again convulsed the country. Even as officials and activists condemned the shootings and mourned the slain officers, hundreds of people were arrested on Saturday as new protests against the use of deadly force by police flared in several U.S. cities. Particularly hard hit was St. Paul, Minnesota, where 21 officers were injured as police were pelted with rocks, bottles and fireworks, officials said. Protesters faced off with police officers wearing gas masks on Sunday evening in Baton Rouge, Louisiana. Three countries have warned their citizens to stay on guard when visiting U.S. cities rocked by the protests. Speaking in Madrid during a European tour, U.S. President Barack Obama said attacks on police over racial bias would hurt Black Lives Matter, a civil rights movement that emerged from the recent police killings of African-Americans but has been criticized for vitriolic social media postings against police, some of them sympathetic to Johnson. "Whenever those of us who are concerned about failures of the criminal justice system attack police, you are doing a disservice to the cause," the United States' first black president told a news conference. (Additional reporting by Ernest Scheyder, Jason Lange, David Bailey, Ruthy Munoz and Lisa Garza; Writing by Daniel Trotta and Daniel Wallis; Editing by Paul Simao and Peter Cooney) ||||| Police and others gather at the emergency entrance to Baylor Medical Center in Dallas, where several police officers were taken after shootings Thursday, July 7, 2016.. 
(AP Photo/Emily Schmall) (Associated Press) DALLAS (AP) — Snipers opened fire on police officers in Dallas, killing four officers and injuring seven others during protests over two recent fatal police shootings of black men, police said. Three people are in custody and a fourth suspect was exchanging gunfire with authorities, Dallas Police Chief David Brown said early Friday morning. The suspect is not cooperating and has told negotiators he intends to hurt more law enforcement officials, the chief said. The gunfire broke out around 8:45 p.m. Thursday while hundreds of people were gathered to protest fatal police shootings this week in Baton Rouge, Louisiana, and suburban St. Paul, Minnesota. Brown told reporters the snipers fired "ambush style" upon the officers. Mayor Mike Rawlings said one member of the public was wounded in the gunfire. Protests were also held in several other cities across the country Thursday night after a Minnesota officer on Wednesday fatally shot Philando Castile while he was in a car with a woman and a child. The aftermath of the shooting was livestreamed in a widely shared Facebook video. A day earlier, Alton Sterling was shot in Louisiana after being pinned to the pavement by two white officers. That, too, was captured on a cellphone video. Video footage from the Dallas scene showed protesters were marching along a street in downtown, about half a mile from City Hall, when the shots erupted and the crowd scattered, seeking cover. Brown said that it appeared the shooters "planned to injure and kill as many officers as they could." The search for the shooters stretched throughout downtown, an area of hotels, restaurants, businesses and some residential apartments. The scene was chaotic, with helicopters hovering overhead and officers with automatic rifles on the street corners. "Everyone just started running," Devante Odom, 21, told The Dallas Morning News. "We lost touch with two of our friends just trying to get out of there." One woman was taken into custody in the same parking garage where the standoff was ongoing, Brown said. Two others were taken into custody during a traffic stop. Brown said police don't have a motivation for the attacks or any information on the suspects. He said they "triangulated" in the downtown area where the protesters were marching and had "some knowledge of the route" they would take. He said authorities have not determined whether any protesters were involved with or were complicit in the attack. Police were not certain early Friday that all suspects have been located, Brown said. The FBI's Dallas division is providing "all possible assistance," spokeswoman Allison Mahan said. Carlos Harris, who lives downtown, told the newspaper that the shooters "were strategic. It was tap, tap pause. Tap, tap pause." Demonstrator Brittaney Peete told The Associated Press that she didn't hear the gunshots, but she "saw people rushing back toward me saying there was an active shooter." Peete said she saw a woman trip and nearly get trampled. Late Thursday, Dallas police in uniform and in plainclothes were standing behind a police line at the entrance to the emergency room at Baylor Medical Center in Dallas. It was unclear how many injured officers were taken there. The hospital spokeswoman, Julie Smith, had no immediate comment. 
Three of the officers who were killed were with the Dallas Police Department. One was a Dallas Area Rapid Transit officer. Texas Gov. Greg Abbott released a statement saying he has directed the Texas Department of Public Safety director to offer "whatever assistance the City of Dallas needs at this time." "In times like this we must remember — and emphasize — the importance of uniting as Americans," Abbott said. Other protests across the U.S. on Thursday were peaceful. In midtown Manhattan, protesters first gathered in Union Square Park where they chanted "The people united, never be divided!" and "What do we want? Justice. When do we want it? Now!" In Minnesota, where Castile was shot, hundreds of protesters marched in the rain from a vigil to the governor's official residence. Protesters also marched in Atlanta, Chicago and Philadelphia. ||||| This is terrorism. This is what terrorists do. It isn't a war -- not black against blue, or us against them. When a reported two gunmen opened fire Thursday night on police officers, on our police officers, they attacked us all. Stunned witnesses described a peaceful march with about 800 protesters gathered in downtown Dallas to register sorrow and concern over police-involved shootings this week in Baton Rouge and suburban Minneapolis. They were escorted by about 100 police officers, our officers, there to act as protectors. "The cops were peaceful. They were taking pictures with us and everything," one marcher told KXAS-TV Channel 5. "It was a lovely, peaceful march," said another. The peace was shattered by terrorism, a well planned and barbaric attack on that peace. In purposely murdering police officers, these gunmen did not further any cause or make any kind of statement that civilized people can understand. They opened fire, literally, on human decency. In the days ahead, some apologists -- not many, but a few -- will take the line that these murdered officers are casualties of war. They'll say this act was the culmination of countless provocations and frustrations; they'll try to frame some kind of context around this chaos. They'll say, "It's terrible, but ..." No. It's just terrible. It's sickening, cowardly. It furthers no cause; it accomplishes nothing but misery and grief. It's violence for the deranged love of violence itself, disguised beneath a political veneer. To employ a Texas colloquialism, it's chickens**t with a gun. City officials and police departments train and drill and try to anticipate disaster, but how can you anticipate determined inhumanity? You can't, not if you're human. Mayor Mike Rawlings looked frazzled and stunned during a brief television appearance late Thursday. "At 8:58, our worst nightmare happened," he said wearily. The cliche is justified because it's factual. This is a nightmare: The nightmare of a terrorist attack at a peaceful, familiar street intersection where we have all been a thousand times. This is one of those moments that is going to test all of us. Dallas, our home, is about to become famous again for all the wrong reasons. It's going to be a shorthand for chaos. We've been there before. Today, we have a choice. We can let somebody else write the script, tell us that some of us are victims, and some are perpetrators; tell us which camps we're in and whose side we're supposed to be on. Or we can unite in our shared shock and sorrow. We can all choose the side of decency and humanity. We can reject home-grown terrorism, mourn our fallen and support our wounded. 
We can wrap our arms around the community we share, and make our message clear: Terrorism can hurt us. On Thursday night, it did. But it doesn't win. This is our city, and we won't let it.
– Three suspects are now in custody after the killing of five police officers during a protest Thursday night, and a fourth suspect is still exchanging fire with police, Dallas Police Chief David Brown says. The chief says the suspect has told negotiators he plans to hurt more officers and has planted bombs all over the area, the AP reports. He says the suspects in custody include a woman who was arrested in the same parking garage where the standoff continues. Another two suspects were taken into custody after police stopped their vehicle. Brown told reporters early Friday that the officers were killed by shooters with rifles who were working together and that it's not clear whether there are more suspects out there, the Dallas Morning News reports. In other developments: A "person of interest" whose photo was tweeted by Dallas police turned himself in and was released after it was determined he had no connection to the shootings, the Guardian reports. Police announced around 2am that another officer had died from his injuries, bringing the death toll to five, with another six officers injured. Clay Jenkins, Dallas County's chief executive, says the shooters' motives are unknown, apart from the fact that they fired on police, the New York Times reports. WFAA rounds up some videos posted on social media after the shootings, including one that shows marchers panicking as shots ring out. Rev. Dominique Alexander, president of the Next Generation Action Network, tells the Morning News that the protest was a call for justice after recent police shootings, but it was not organized by the Black Lives Matter movement. He says he saw a photographer shot in the leg in front of him and was told that at least one of the shooters was white. A Morning News op-ed calls the shootings terrorism that shattered a peaceful march. It was a "well planned and barbaric attack on that peace. In purposely murdering police officers, these gunmen did not further any cause or make any kind of statement that civilized people can understand. They opened fire, literally, on human decency." Reuters reports that other protests around the country remained peaceful, with crowds gathering in cities such as Chicago, New York, and St. Paul, Minn., to protest the police shootings of Alton Sterling and Philando Castile.
the use of generic medicines , compared to their branded counterparts , has the potential to substantially reduce out - of - pocket expenditure on drugs for patients with chronic diseases . generic substitution of brand prescriptions is an accepted practice in many parts of the world , and this is often done for economic reasons . in india , however , generic substitution is not a universally accepted practice . this results from various factors including nonavailability of generic formulations , distrust of generic medicines by practitioners often due to perceived inferior quality and counterfeiting of drugs . implementation of generic prescribing policy is however ongoing in institutional settings , where drugs can be procured in bulk and dispensed from the institutional inventory with appropriate quality control measures . with the idea of promoting generic prescriptions in public sector hospitals , the ministry of health and family welfare , west bengal , has been implementing a fair price medicine shop ( fpms ) scheme run through public - private - partnership ( ppp ) from 2012 . the government provides space and physical infrastructure for the medicine outlets within the hospital premises while the private partner undertakes procurement and dispensing activities under mutually agreed terms . there is a mandatory list of items that are to be stocked for supply to hospital patients as well as many other items that are supplied under the supervision of local fpms monitoring committees . in addition to this state government initiative , the department of pharmaceuticals , ministry of chemicals and fertilizers , government of india , has , since 2008 , opened dedicated outlets called jan aushadhi stores where generic medicines are sold at low prices . so far , 157 jan aushadhi stores have been opened across 12 states of india including west bengal . through these initiatives , more and more patients , are now getting exposed to the generic drugs concept and can compare its advantages and disadvantages to purchase of branded drugs from the open market . the experience and attitude toward generic drugs are not uniform among physicians across countries as reported by investigators from india and abroad . the scientific data regarding experience and attitude of consumers toward generic drugs is necessary for sustaining a generic drug use policy but have been explored to a limited extent . reports on consumer attitudes and preferences are mostly available from countries where generic drug substitution in retail pharmacies is an accepted practice , unlike in india . in particular , recent reports on consumer behavior , since the introduction of above initiatives , are lacking . we , therefore , undertook this study to evaluate the experiences and attitudes of patients toward generic drugs purchased from fpms of a state government sponsored public hospital , comparing the same with the experience regarding branded medicines purchased from retail medicine shops in the same or adjoining localities . this questionnaire - based cross - sectional study was conducted among patients attending internal medicine outpatient department ( opd ) of a government medical college hospital located in suburban kolkata between may and july 2015 . patients with chronic diseases who consumed generic medicines purchased from fpms for at least 3 months were included after obtaining written informed consent . 
the same questionnaire was administered to an equal number of patients attending the opd of a private hospital and using branded medicines purchased from the open market . patients suffering from simultaneous acute medical problems , cognitive impairment , or psychiatric diseases were excluded from the study . approval from the institutional ethics committees was obtained before the initiation of data collection . the study questionnaire consisted of three parts . the first part captured data pertaining to the sociodemographic , morbidity , and medication profile of the patient . the second part consisted of eight structured closed - ended items assessing the experience and attitudes toward generic or branded drug usage . the last part was a validated questionnaire evaluating medication adherence , called the drug attitude inventory-10 ( dai-10 ) . this instrument , condensed from its original 30-item version , contains ten closed - ended items with binary responses to assess medication adherence based on psychometric profiling . a total score of 5 - 10 indicates a perfectly adherent patient , 0 - 5 a moderately adherent patient , and a negative total score , a completely nonadherent patient . the entire questionnaire , including the dai-10 rating , was translated into regional languages ( hindi and bengali ) followed by validation through independent back - translation . data were analyzed using ibm spss statistics version 20 ( ibm corp. , new york , united states ) . summary statistics were expressed using mean and standard deviation ( sd ) for numerical variables ( median and interquartile ranges [ iqrs ] when skewed ) and counts and percentages for categorical variables . numerical variables were compared between generic and branded drug users using student 's independent samples t - test when normally distributed and the mann - whitney u - test when skewed . comparisons were two - sided , and p < 0.05 was taken to be statistically significant . we approached 116 generic medicine users , of whom 100 agreed to participate , giving a nonresponder rate of 7.41% . the nonresponder rate among branded medicine users was 9.91% in recruiting 100 who agreed to participate . the sociodemographic , morbidity , and drug use profiles of the two study groups are presented and compared in table 1 . evidently , age and gender distributions and primary disease duration were comparable . however , branded medicine users were better educated , had a higher per capita monthly family income , and used more drugs and doses per day . the medication adherence score , as measured through the dai-10 , indicated moderate to good adherence in both groups , with the mean scores showing no statistically significant difference . branded drug users had a mean dai-10 score of 6.3 ( sd 2.61 ) while the value for generic drug users was 6.3 ( sd 2.80 ) . ( table 1 : sociodemographic , disease- and drug - related variables compared between generic and branded drug users . ) the primary diagnoses of the patients are presented in figure 1 . apart from common chronic diseases such as hypertension , type 2 diabetes , and chronic airway diseases , there were also patients with dyslipidemia , ischemic heart disease , hyperuricemia , epilepsy , osteoporosis , and chronic dyspepsia , who have been clubbed into the others category . ( figure 1 : primary diagnosis of the patients with chronic diseases using either generic or branded drugs . ) across the 200 prescriptions analyzed , 45 brands and 29 generic drugs were prescribed by 17 prescribers . the prescription frequencies of 15 selected branded and generic drugs are graphically compared in figure 2 . 
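The DAI-10 description above gives the category cut-offs (a total of 5-10 perfectly adherent, 0-5 moderately adherent, negative completely nonadherent) but not the item-level scoring rule. The sketch below assumes the conventional DAI-10 rule of +1 for a response in the adherent direction and -1 otherwise; that rule, and the handling of the overlapping cut-off at 5, are assumptions rather than details stated in the text.

```python
# Sketch of DAI-10 scoring as described above. The +1/-1 item scoring is the
# conventional DAI-10 rule and is an assumption here; the category cut-offs
# (5-10 perfect, 0-5 moderate, negative = nonadherent) follow the text, and the
# overlapping boundary at 5 is assigned to the moderate band in this sketch.
from typing import List

def dai10_total(item_adherent: List[bool]) -> int:
    """Each of the 10 binary items scores +1 if answered in the adherent
    direction and -1 otherwise, giving a total between -10 and +10."""
    assert len(item_adherent) == 10
    return sum(1 if ok else -1 for ok in item_adherent)

def adherence_category(total: int) -> str:
    if total < 0:
        return "completely nonadherent"
    if total <= 5:
        return "moderately adherent"
    return "perfectly adherent"

responses = [True] * 8 + [False] * 2      # hypothetical patient: 8 adherent answers
score = dai10_total(responses)            # 8 - 2 = 6
print(score, adherence_category(score))   # -> 6 perfectly adherent
```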
interestingly , multivitamin formulations were prescribed to thirty patients among branded drug users ; however , none from the other group received such a prescription . among the total number of prescribed drugs , 65% and 56% were from the national essential drug list . ( figure 2 : prescription frequency of selected branded and generic drugs . ) the experience and attitude of the patients consuming branded and generic drugs are presented in table 2 . in our sample , 93% of generic drug users and 87% of branded drug users believed that their medicines were sufficiently effective ( p = 0.238 ) for controlling the disease condition . in spite of encouragement from policy - makers , generic drug use in india is yet to gain widespread popularity , and the practice so far has remained confined mostly to institutional settings in small pockets of the country . the limited availability of quality generic formulations appears to be an important hindrance to the widespread adoption of generic prescribing and dispensing activity . since 2012 , the ministry of health and family welfare , government of west bengal , has started implementing the policy of mandatory generic drug use in state government - funded hospitals . simultaneously , to ensure availability of generic drugs , which are seldom available in the open market , the fpms scheme on a ppp model has been launched in larger public hospitals all across the state . out of 121 proposed fpms , 93 have become operational , providing generic medicines at low retail prices . this initiative has the potential to create public awareness and to increase faith in generic medicines . this prompted us to carry out a pilot study to evaluate the experience of generic drug usage among patients with chronic diseases attending government hospitals . in this study , we observed that over 90% of the patients believed that generic drugs were as effective as branded drugs . reports from other countries , however , have been mixed . for instance , in a focus group interview conducted in alabama , usa , among african american citizens , multiple concerns regarding the use of generic medications were voiced . the participants thought that generics might be less potent than branded medications . a perception that generics are not real medicines and thus only appropriate for mild ailments also prevailed . however , poor people are forced to settle for generics due to the low therapeutic cost . in contrast , among finnish patients , it was observed that 81% of the participants opined that cheaper generics were effective . palagyi and lassanova ( 2008 ) reported that 17% of their study population of slovakian patients considered generics inferior to brand - name drugs in terms of quality . another study , a nationwide survey conducted among 5000 individuals from brazil , reported that 30.4% of the respondents considered generic drugs to be less effective than branded medicines . himmel et al . conducted a survey among primary care patients in germany about their thoughts on generic drug use . almost a third of the respondents thought the relatively inexpensive generic drugs to be qualitatively inferior to , or altogether different from , branded drugs . this view was more frequently expressed by patients who were more than 60 years of age , chronically ill , and/or without higher education . in this study , patients attending the public hospital were socioeconomically as well as educationally constrained , but they still believed that generic drugs available from fpms were effective . 
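The effectiveness comparison above (93/100 generic vs. 87/100 branded users, p = 0.238) does not name the test used. A chi-square test with Yates' continuity correction on the corresponding 2x2 table gives essentially the reported p-value, so the sketch below uses that test as a plausible reconstruction rather than the authors' confirmed method.

```python
# Reproducing the reported effectiveness comparison (93/100 vs 87/100, p = 0.238).
# The paper does not state which test was used; a chi-square test with Yates'
# continuity correction (scipy's default for 2x2 tables) gives a matching p-value.
from scipy.stats import chi2_contingency

table = [[93, 7],    # generic users: effective / not effective
         [87, 13]]   # branded users: effective / not effective

chi2, p, dof, expected = chi2_contingency(table)  # correction=True by default for 2x2
print(f"chi2 = {chi2:.3f}, p = {p:.3f}")          # approximately p = 0.239
```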
the extent of side effects reported by patients after consuming generic and branded drugs was around 10% in both groups , with no significant difference . this contrasts with an earlier report that 13% of primary care patients in germany suffered from new side effects after generic substitution . in an american survey , it was reported that about 10% believed that generic drugs could cause more side effects than brand - name drugs . another study among finnish patients observed that 85% did not consider generic substitution unsafe . however , in contrast to the global picture , a recent survey conducted in maharashtra , india , found that over 80% of participants believed that generics are relatively less safe for use than their branded equivalents . in the brazilian survey , 28.1% of the entire sample believed that generics cause more side effects compared to branded drugs . the proportion of perfectly adherent patients in our sample was 82% and 77% for generic and branded drugs , respectively , with no statistically significant difference . in contrast , in a mixed - method study among australian seniors , generic drug usage emerged as an important factor in nonadherence to medication . several investigators also reported that generic substitution is a source of confusion and therefore of medication error among patients . however , in this study , 89% of the generic drug users perceived that they were confident about their drug regimen and 95% were satisfied regarding the instructions provided to them for use of the medicines . in addition to the three major areas of concern regarding generics , namely , perception regarding effectiveness , safety , and medication adherence , we also evaluated some other perspectives of drug use . it was noted that in generic prescriptions the median number of drugs ( 3 [ iqr 2 - 4 ] drugs / day vs. 4 [ iqr 3 - 5 ] drugs / day , p < 0.001 ) and the number of drug doses ( 4 [ iqr 2 - 5 ] doses / day vs. 4 [ iqr 3 - 8 ] doses / day , p < 0.001 ) were lower compared to branded prescriptions . although the exact reason for this difference was not apparent from this study , the high usage of multivitamin formulations might have contributed to the higher numbers in branded prescriptions . the total cost and unit cost of medication were found to be significantly lower among generic compared to branded drug users . apart from the direct cost of the medication , the usage of newer congeners within a therapeutic class tends to increase cost in brand prescriptions . for instance , newer drugs such as rosuvastatin , olmesartan , and telmisartan were observed exclusively among brand prescriptions , whereas generic drug prescribers used atorvastatin , losartan , etc . this difference in prescribing pattern between prescriptions generated from government and private hospitals in india was detected earlier in a prescription audit conducted in rajasthan , india . consistent with our observation , researchers identified that the average number of drugs prescribed per prescription and the number of multivitamin formulations were higher in private hospital opds compared to government hospital opds . the usage of generic drugs and essential medicines was significantly higher among doctors from government hospitals . the availability of medications and their storage were comparable between the generic and brand groups . however , 46% of branded users and 40% of generic drug users faced problems while purchasing due to unavailability of the medicines concerned . 
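The methods specify the Mann-Whitney U-test for skewed variables, which is presumably how the drugs-per-day and doses-per-day comparisons above were made. The sketch below shows that comparison in Python with scipy; the per-patient counts are invented to roughly match the reported medians and IQRs and are not the study data.

```python
# Sketch of the Mann-Whitney U comparison used for skewed variables such as
# drugs per day (median 3 [IQR 2-4] generic vs. 4 [IQR 3-5] branded).
# The per-patient counts below are invented for illustration only.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
generic_drugs_per_day = rng.integers(2, 5, size=100)   # hypothetical counts, roughly 2-4
branded_drugs_per_day = rng.integers(3, 6, size=100)   # hypothetical counts, roughly 3-5

u_stat, p_value = mannwhitneyu(generic_drugs_per_day,
                               branded_drugs_per_day,
                               alternative="two-sided")
print(f"U = {u_stat:.0f}, p = {p_value:.3g}")
```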
this is an issue of concern and might lead to ineffectiveness of the policy in the future . however , it is likely that , as of now , not all fpms are fully able to cope with the demand for various drugs , leading to stock - out situations . in the public interest , the government must keep the fpms retailers informed of the drugs that need to be stocked and their optimum inventory . our study has certain limitations : it is hospital based , addresses only chronic diseases , and the control group selected is not identical regarding the major determinants of drug usage , as branded and generic prescribing are not practiced in the same setting in west bengal , india . the public hospitals are the major source of generic prescriptions , and they primarily cater to a population with lower income and education compared to the private setting ( the source of branded prescriptions ) . hence , further studies with probability sampling and appropriate stratification are necessary to ensure the generalizability of our observations . nevertheless , the results are encouraging regarding the future of the generic prescribing policy in the state and indicate that it would be worthwhile to pursue its full implementation . the perceptions of patients regarding the effectiveness , safety , and adherence needs of generic drugs were comparable to those for branded medicines . therefore , the availability of generic drugs should be ensured in fpms by the policy - makers . attempts should also be made to adopt the same model in other states of the country to enable wider penetration of generic medicine benefits . the government of west bengal , india has initiated exclusive generic drug outlets called fair price medicine shops ( fpms ) inside government hospital premises in a public - private - partnership model . the policy appeared to be promising in terms of perceived effectiveness , safety , and adherence to treatment for the patients who acquire generic drugs from fpms compared to drugs purchased from open market retailers . therefore , this study might act as an impetus for the policy - makers to initiate similar models across the country . this study received funding of inr 10,000 under the short - term studentship program of the indian council of medical research . the authors are either employees of the government of west bengal or students of a medical college under the same administration . however , no tangible or intangible support was sought from the concerned department during the designing , conduct , or publication of the study .
background : the concept of generic prescription is widely accepted in various parts of the world . nevertheless , it has failed to gain popularity in india due to factors such as nonavailability and distrust of product quality . however , since 2012 , the government of west bengal , india , has initiated exclusive generic drug outlets called fair price medicine shop ( fpms ) inside government hospital premises in a public - private - partnership model . this study was undertaken to evaluate the experience and attitude of patients who were consuming generic drugs purchased from these fpms.materials and methods : it was a questionnaire - based cross - sectional study in which we interviewed 100 patients each consuming generic and branded drugs , respectively . the perceived effectiveness , reported safety , medication adherence , cost of therapy , and availability of drugs were compared between the two groups . medication adherence was estimated through the drug attitude inventory-10.results:93% of generic and 87% of branded drug users believed that their drugs were effective ( p = 0.238 ) in controlling their ailments . no significant difference ( 9% generic , 10% branded drug users , p = 1.000 ) was observed in reported adverse effects between generic and branded drug users . 82% and 77% of patients were adherent to generic and branded drugs , respectively ( p = 0.289 ) . as expected , a significantly lower cost of generic drugs was observed compared to their branded counterparts.conclusion:the policy of fpms implemented by the government of west bengal , india appeared to be promising in terms of perceived effectiveness , safety , and adherence for generic drugs purchased from fpms compared to drugs purchased from open market retailers . therefore , this study might act as an impetus for the policy - makers to initiate similar models across the country .
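As a worked illustration of the two-group comparisons reported in the abstract above (effectiveness 93% vs. 87%, adherence 82% vs. 77%), the sketch below recomputes p-values for such 2x2 proportion data with Fisher's exact test. The text does not state which test the authors actually used, so these numbers are illustrative and may differ slightly from the published p-values.

```python
# Minimal sketch (not the study's own analysis code): comparing an outcome
# observed in two groups of 100 patients each using Fisher's exact test.
from scipy.stats import fisher_exact

def compare_proportions(events_a, n_a, events_b, n_b):
    """Two-sided Fisher's exact test on a 2x2 table of counts."""
    table = [[events_a, n_a - events_a],
             [events_b, n_b - events_b]]
    _, p_value = fisher_exact(table)
    return p_value

# Perceived effectiveness: 93/100 generic vs. 87/100 branded users
print(compare_proportions(93, 100, 87, 100))
# Adherence: 82/100 generic vs. 77/100 branded users
print(compare_proportions(82, 100, 77, 100))
```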
Susan and Dylan Klebold celebrating Dylan's fifth birthday. Since the day her son participated in the most devastating high school shooting America has ever seen, I have wanted to sit down with Susan Klebold to ask her the questions we've all wanted to ask—starting with "How did you not see it coming?" and ending with "How did you survive?" Over the years, Susan has politely declined interview requests, but several months ago she finally agreed to break her silence and write about her experience for O. Even now, many questions about Columbine remain. But what Susan writes here adds a chilling new perspective. This is her story. — Oprah Just after noon on Tuesday, April 20, 1999, I was preparing to leave my downtown Denver office for a meeting when I noticed the red message light flashing on my phone. I worked for the state of Colorado, administering training programs for people with disabilities; my meeting was about student scholarships, and I figured the message might be a last-minute cancellation. But it was my husband, calling from his home office. His voice was breathless and ragged, and his words stopped my heart. "Susan—this is an emergency! Call me back immediately!"The level of pain in his voice could mean only one thing: Something had happened to one of our sons. In the seconds that passed as I picked up the phone and dialed our house, panic swelled within me; it felt as though millions of tiny needles were pricking my skin. My heart pounded in my ears. My hands began shaking. I tried to orient myself. One of my boys was at school and the other was at work. It was the lunch hour. Had there been a car accident?When my husband picked up the phone, he shouted, "Listen to the television!"—then held out the receiver so I could hear. I couldn't understand the words being broadcast, but the fact that whatever had happened was big enough to be on TV filled me with terror. Were we at war? Was our country under nuclear attack? "What's happening?" I shrieked.He came back on the line and poured out what he'd just learned during a distraught call from a close friend of our 17-year-old son, Dylan: There was some kind of shooting at the high school…gunmen in black trenchcoats were firing at people…the friend knew all the kids who wore trenchcoats, and all were accounted for except Dylan and his friend Eric…and Dylan and Eric hadn't been in class that morning…and no one knew where they were.My husband had told himself that if he found the coat, Dylan couldn't be involved. He'd torn the house apart, looking everywhere. No coat. When there was nowhere left to look, somehow he knew the truth. It was like staring at one of those computer-generated 3-D pictures when the abstract pattern suddenly comes into focus as a recognizable image.I barely got enough air in my lungs to say, "I'm coming home." We hung up without saying goodbye.My office was 26 miles from our house. All I could think as I drove was that Dylan was in danger. With every cell in my body, I felt his importance to me, and I knew I would never recover if anything happened to him. I seesawed between impossible possibilities, all of them sending me into paroxysms of fear. Maybe no one knew where Dylan was because he'd been shot himself. Maybe he was lying in the school somewhere injured or dead. Maybe he was being held hostage. Maybe he was trapped and couldn't get word to us. Maybe it was some kind of prank and no one was hurt. How could we think for even a second that Dylan could shoot someone? Shame on us for even considering the idea. 
Dylan was a gentle, sensible kid. No one in our family had ever owned a gun. How in the world could he be part of something like this? ||||| I've always wanted to hear from the parents of the Columbine shooters exactly what their experience was like. This book goes into great detail about Dylan's life growing up in Littleton leading up to the massacre. I was completely engrossed to hear from Sue Klebold what it was like to unwittingly raise a killer. One of the most overlooked aspects of the Columbine tragedy that this book illuminated for me is the fact that Eric went to the school that day to kill while Dylan went there to die. Similar to the point that Dave Cullen makes in his great book "Columbine" though the boys ultimately committed murder at the school together what got them to that terrible conclusion was quite different. The most telling thing that I got from this book was that to his mother Dylan seemed like a perfectly normal teenager. He did not display any signs that would, for most parents, raise any red flags. He was involved, he had friends, he held jobs, he participated in activities at school, and his grades were good. I think for most parents we cling to the notion that those boy’s parents had to know that something was terribly wrong with their sons. This thinking helps us believe that what Dylan and Eric did could never happen with anyone we know. The terrible realization came when I started to understand that what Dylan did could happen to anybody's child. When we put Dylan Klebold into the safe little box where he was an evil person to the core it makes us feel safer because our own child could never do something like what he did, could they? Much like other famous tragedies that ended in death Columbine is easier to deal with when we can easily explain what happened and why it happened. The chilling thing that I've come to realize is that what happened on April 20, 1999 at Columbine High School has the potential to happen practically anywhere. The book was at times very repetitive and sometimes I did feel like Sue was trying to drive home the fact that she was an amazing parent to Dylan. Her liberal ideals did get on my nerves at times, just because they are very different from my own, but I can still appreciate her views without agreeing with them. At one place in particular in the book she tells us about an incident only days before the shootings that I have a really hard time believing is true. It feels more like she included this story to make herself look better. On the other hand, if she is telling the truth, then Dylan was truly unbelievably manipulative and cruel in the way he lied to his mother that morning. Maybe it's just that it is hard to believe that a person could be so cold and deceptive. Ultimately this book is a much needed chapter in the Columbine tragedy. The suicide of Dylan Klebold is so tragic because he was a teenager on the brink of graduation, he already had a college picked out and a dorm room paid for, he had a potentially bright future working with computers, and a family who loved him dearly. How could this boy make the horrific decision to kill himself and take innocent lives in the process? That is the question that will haunt me for years.
– It's been 17 years since the Columbine massacre, and on Friday night's 20/20, ABC will air the first TV interview with the mom of killer Dylan Klebold. In her sit-down with Diane Sawyer, Sue Klebold reveals her feelings about the 1999 Colorado school attack in which her son, 17, and his friend Eric Harris, 18, took the lives of 12 students and one teacher and wounded 24 others before killing themselves, ABC News reports. "There is never a day that goes by where I don't think of the people that Dylan harmed," she tells Sawyer, adding that she uses the word "harmed" because it's still difficult for her to say "killed" and that it's "very hard to live with the fact that someone you loved and raised has brutally killed people in such a horrific way." Klebold tells Sawyer that she never saw the tragedy coming and that she thought her relationship with her son was solid. "I felt that I was a good mom … that he would, he could talk to me about anything," she says. "Part of the shock of this was learning that what I believed and … how I parented was an invention in my own mind. [It] was a completely different world [than] he was living in." Klebold has a memoir about her experiences coming out Monday; she'll donate the profits to research and foundations dedicated to mental health issues, per ABC. (Read Sue Klebold's heart-wrenching 2009 essay in O Magazine.)
Lawyers for Kentucky county clerk Kim Davis say the marriage licenses issued by her office are binding — and no further action is required by a federal judge to change them. "Marriage licenses are being issued in Rowan County, which [Kentucky Gov. Steven Beshear] and Kentucky attorney general have approved as valid, which are recognized by the Commonwealth of Kentucky, and which are deemed acceptable by the couples who received them," her lawyers said in court filings Tuesday. After Davis spent five days in jail last month for contempt of court, she returned to work but said she would play no role in issuing the licenses. She did, however, make several changes to the license forms. That prompted the couples who originally sued her to ask the judge in late September for an order directing her office to revert back to the original forms. One of her deputy clerks said she deleted all mention of Rowan County, removed her name, and omitted references to deputy clerks. Only a deputy's name is on the form — and not his title — with a place for him to initial rather than to use his signature. Those changes, the couples told the judge, were inconsistent with state rules for marriage licenses, raising potential questions about their validity. But Davis' lawyers said Tuesday not to worry. "The Kentucky governor and Kentucky attorney general both inspected the new licenses and publicly stated that they were valid and will be recognized as valid by the Commonwealth of Kentucky," the lawyers said. In a flurry of back-and-forth court filings since the firestorm erupted, Davis' team also filed suit against Gov. Beshear and another state official for allegedly violating her religious freedom. Lawyers for the American Civil Liberties Union, which represented the gay Kentucky couples, also want Davis' office to be placed under a receivership — essentially allowing another person to oversee the marriage license process. In a statement Tuesday, Davis' attorney said that since the county is now offering marriage licenses to all couples, there's no reason for Davis to be removed. "It has never really been about a marriage license — Rowan County has issued the licenses — it is about [the plaintiffs] forcing their will on a Christian woman through contempt of court charges, jail, and monetary sanctions," attorney Mat Staver said. ||||| Kentucky county clerk Kim Davis speaks during an interview on Fox News Channel's 'The Kelly File' in New York September 23, 2015. LOUISVILLE, Ky. The Kentucky county clerk who was jailed for refusing to issue marriage licenses to gay couples has taken reasonable steps to comply substantially with a judge's orders and should not face further contempt citations, her attorneys said on Tuesday. Lawyers for couples suing Rowan County Clerk Kim Davis have argued that she made material changes to the marriage forms upon her return from jail in September that put her out of compliance with U.S. District Judge David Bunning's orders. They asked Bunning to impose fines or a limited receivership on the clerk's office in the dispute that has become the latest focal point in a long-running debate over gay marriage in the United States.
Lawyers for Davis, whose meeting with Pope Francis during his trip to the United States last month sparked widespread debate when it became public, said in a court filing that state-elected officials see the licenses as valid. Bunning's order said nothing about the details the licenses must contain and he had already permitted alterations, the lawyers said. Davis has cited her beliefs as an Apostolic Christian to deny marriage licenses to gay couples even as she has been married herself four times and has had some children out of wedlock. She refused to issue any licenses after a U.S. Supreme Court ruling in June made gay marriage legal across the United States and she was sued by gay couples. Davis was jailed for five days in September for refusing to issue the licenses and has been under the threat of returning to jail if she interferes in the issuance of licenses. Opponents say Davis is abdicating her duties by refusing to issue marriage licenses. "It has never really been about a marriage license - Rowan County has issued the licenses - it is about forcing their will on a Christian woman through contempt of court charges, jail, and monetary sanctions," Mat Staver, a lawyer representing Davis, said in a statement. Davis said when she returned to work that she removed her name, title and personal authorization on the licenses. A deputy clerk has issued licenses since her jailing in early September. Davis has asked Governor Steve Beshear, a Democrat, state lawmakers and Bunning to accommodate her beliefs. She has appealed the orders to the Sixth Circuit U.S. Court of Appeals. Kentucky's attorney general believes the forms altered by Davis are valid licenses, a spokesman said on Tuesday. (Reporting by Steve Bittenbender; Writing by David Bailey; Editing by Sandra Maler) ||||| FRANKFORT, Ky. (AP) — Lawyers for Rowan County Clerk Kim Davis say the altered marriage licenses her office issued to same-sex couples are valid because they have been recognized by Kentucky's highest elected officials. Davis stopped issuing marriage licenses after a U.S. Supreme Court ruling in June effectively legalized same-sex marriage nationwide. She spent five days in jail for refusing to obey a federal judge's ruling ordering her to issue the licenses. When Davis got out of jail, she changed the marriage licenses to remove her name and to say they were issued under the authority of a federal court order. Last month, lawyers for the American Civil Liberties Union questioned the validity of these altered licenses and asked U.S. District Judge David Bunning to order Davis and her employees to reissue them. The ACLU asked the judge that if Davis refused, he fine her or appoint someone else to issue the licenses for her. Tuesday, Davis' lawyers responded in a new court filing calling the ACLU's request "extreme, unnecessary and improper." They noted Kentucky Gov. Steve Beshear and Attorney General Jack Conway have both recognized the licenses as valid. And they said the ACLU was trying to "needlessly create controversy where it does not currently exist." "Davis has taken reasonable steps and good faith efforts to substantially comply with this Court's orders, and marriage licenses are being issued in Rowan County that are authorized, approved, and recognized as valid by Kentucky's highest elected officials," attorney Jonathan Christman wrote for the Liberty Counsel, the Florida-based law firm that represents Davis. 
The ACLU called the altered marriage licenses "a stamp of animus against the LGBT community, signaling that, in Rowan County, the government's position is that LGBT couples are second-class citizens unworthy of official recognition and authorization of their marriage licenses but for this Court's intervention and Order." But Christman said the ACLU cannot "allege that Rowan County has issued or is issuing marriage licenses to the 'LGBT community' that are in any way different from licenses issued to the 'non-LGBT community.'" Bunning has not yet ruled on the ACLU's request. ___ This story has been corrected to reflect that Davis' office, not Davis herself, issued the altered licenses.
– Kim Davis didn't violate a judge's order when she made a host of changes to marriage license forms in Kentucky's Rowan County, her lawyers say. Couples who sued Davis argued the changes—including the removal of her name and a note that the licenses were issued under authority of a federal court order—didn't follow state rules and made the forms invalid, reports NBC News. They've asked a judge to issue fines or place Davis' office under a receivership, meaning Davis would no longer oversee the issuing of marriage licenses. In court filings, Davis' lawyers say the Kentucky attorney general and Gov. Steven Beshear "both inspected the new licenses and publicly stated that they were valid and will be recognized as valid by the Commonwealth of Kentucky." A rep confirms Kentucky's attorney general believes the forms are valid, per Reuters. Lawyers representing the couples who sued Davis, however, say they are "a stamp of animus against the LGBT community, signaling that, in Rowan County, the government's position is that LGBT couples are second-class citizens unworthy of official recognition," per the AP. Davis' lawyers say the judge's order didn't mention details the licenses needed to contain and say there are no grounds for a receivership since Rowan County is now issuing licenses to all. "It has never really been about a marriage license—Rowan County has issued the licenses—it is about [the plaintiffs] forcing their will on a Christian woman through contempt of court charges, jail, and monetary sanctions," an attorney says.
incidentally , approximately 75% of the mortality due to burn wounds is associated with infections ( 1 ) . an opportunistic , nosocomial pathogen of immunocompromised patients , pseudomonas aeruginosa , typically infects the burn wounds . a study revealed that p. aeruginosa was isolated from 73.9% of the burn patients in iran ( 2 , 3 ) . in addition , many studies showed that colonization of these bacteria in burn wounds is a main cause of mortality in burn patients in burn treatment centers in iran ( 2 ) . to treat such infections , carbapenems such as imipenem are identified as the most effective agent against p. aeruginosa ( 4 ) . although imipenem is known as a potential last treatment option for patients with burn lesions , it was revealed that one of the most concerning characteristics of p. aeruginosa species is their high resistance to imipenem in hospitalized burn patients in recent years . although metallo--lactamases ( mbls ) are one of the most important mechanisms resulting in treatment failure in carbapenem therapy of p. aeruginosa infections , these isolates also produce -lactamases of class a and d. class a extended - spectrum -lactamases ( esbls ) including tem , shv , ges , per , veb , ctx - m and ibc families are also found in p. aeruginosa strains . extended - spectrum -lactamases from the class d , oxa - type enzymes are also detected in p. aeruginosa ( 6 ) . also recently , the number of p. aeruginosa isolates producing kpc - type carbapenemases raised significantly ( 7 ) . plasmid acquired ambler class a esbls such as ges , per , shv and tem types were normally found in entrobacteriaceae . class a esbls are recently identified in p. aeruginosa but they are reported in a limited areas ( 8) . compared with the enterobacterial species , in which tem and shv esbls are found most frequently , oxa and pse types are the most common -lactamases in p. aeruginosa ( 9 ) . to the best of the authors ` knowledge , there is not enough information regarding the prevalence of class a and d esbls and the type of involved genes in imipenem resistant p. aeruginosa in burn patients in iran . the current study aimed to investigate the prevalence of ges , per , shv , tem and oxa-10 esbls as well as kpc carbapenemases among imipenem resistant p. aeruginosa strains isolated from the burn patients hospitalized in several burn treatment centers in tehran , iran ( 2 ) . fifty imipenem resistant p. aeruginosa species were isolated from hospitalized burn patients from 2009 to 2010 in tehran , iran . the isolates were identified as p. aeruginosa by conventional biochemical tests such as oxidase and catalase production , growth at 42c , and production of pigment on mueller - hinton agar ( merck , germany ) . the isolates were stored at -20c in brain - heart infusion broth ( merck , germany ) with 15% glycerol for future investigations . antimicrobial susceptibility to various antibiotics was carried out using the disc - diffusion method recommended by the clinical and laboratory standards institute ( clsi ) guidelines ( 10 ) . eleven antibacterial discs including cefotaxime ( ctx ; 30 g ) , ceftazidime ( caz ; 30 g ) , aztreonam ( atm ; 30 g ) , gentamicin ( gm ; 10 g ) , ciprofloxacin ( cip ; 5 g ) , amikacin ( an ; 30 g ) , tobramycin ( tob ; 10 g ) , cefepime ( cpm ; 30 g ) , pipracillin ( pip ; 100 g ) , cefixime ( cfm ; 5 g ) , and ticarcillin ( tic ; 75 g ) ( mast , uk ) were used . 
esbls phenotypic confirmatory tests were performed by combined disc tests , using ctx and ctx / clavulanic acid ( 30/10 g ) and caz and caz / clavulanic acid ( 30/10 g ) ( 11 ) . an inhibitory zone diameter difference of at least 5 mm was considered positive for esbls production . dna was extracted by the boiling method ( 12 ) and pcr analysis was carried out on all 50 isolates to evaluate the prevalence of the blages , blaper , blashv , blatem , blaoxa-10 esbls and blakpc carbapenemases genes . the dna amplification program and the specific primers ( takapouzist , iran ) used in the standard pcr amplification procedures to detect dna fragments are shown in table 1 ( 11 , 13 - 17 ) . the results of antibiogram testing showed that all isolates ( 100% ) were resistant to ctx , cpm , pip , cfm and tic . meanwhile , the findings revealed that 49 ( 98% ) isolates were resistant to caz , cip and 48 ( 96% ) isolates were resistant to atm , an , gm and tob . most of the isolates were resistant to various classes of antibacterial agents ( table 2 ) . among the 50 isolates , 44 ( 88% ) were multi - drug resistant and phenotypic tests indicated that 27 ( 54% ) strains were presumed esbls producers . among the 27 strains , 23 ( 85.18% ) were determined with zone diameters of 5 - 14 mm in combination with caz / clavulanic acid and 17 ( 62.96% ) were determined with zone diameters of 5 - 12 mm in combination with ctx / clavulanic acid . abbreviations : an , amikacin ; atm , aztreonam ; caz , ceftazidime ; cfm , cefixime ; cip , ciprofloxacin ; cpm , cefepime ; ctx , cefotaxime ; gm , gentamicin ; pip , pipracillin ; r , resistant ; tic , ticarcillin ; tob , tobramycin . pcr analysis was carried out for all 50 isolates using primers specific to blaper , blages , blaoxa-10 , blatem , blashv and blakpc .
four of six genes were detected alone or in various combinations ; 7 ( 14% ) , 18 ( 36% ) , 18 ( 36% ) , 18 ( 36% ) strains of a total of 50 isolates were amplified as blaper , blaoxa-10 , blatem and blashv , respectively ( figure 1 ) . whilst among the 27 isolates indicated as presumed esbls producers phenotypically , 3 ( 11.11% ) , 12 ( 42.85% ) , 14 ( 50% ) and 14 ( 50% ) strains were detected as blaper , blaoxa-10 , blatem and blashv genes , respectively . in addition , among isolates , 10 ( 20% ) carried blatem , blaoxa-10 and blashv genes , simultaneously . also 98% of shv , tem and oxa-10 producing strains were resistant to all of the antibiotics used in this research . table 3 shows that approximately all of the isolates harboring multiple antimicrobial resistance genes were 100% resistant to different classes of antibiotics except cip . no 1 , dna marker ( 100 kb ) . no 2 , blashv ( 231 bp ) , no 3 , blatem ( 858 bp ) , no 4 , blaoxa-10 ( 276 ) , no 5 , positive control for blages ( 864 bp ) , no 6 , positive control for blakpc ( 893 bp ) , no 7 , blaper ( 925 bp ) . abbreviations : an , amikacin ; atm , aztreonam ; caz , ceftazidime ; cip , ciprofloxacin ; cfm , cefixime ; cpm , cefepime ; ctx , cefotaxime ; gm , gentamicin ; tob , tobramycin ; pip , pipracillin ; tic , ticarcillin . the application of antibiotics is one of the most important scientific attainments of the 20th century . however , prevalent antibiotic use increases antibiotic - resistant pathogens , including multidrug resistant isolates ( 18 ) . antibiotic resistance of pathogenic bacteria is a major global danger and realization of the resistance mechanisms is critical to development of novel therapeutic options ( 18 ) . the resistance of this microorganism to antibiotics is a worrisome problem in hospitalized patients with burn injuries . extended spectrum -lactamases , carbapenemases , metallo -lactamases and ampc -lactamases producing organisms are the main problems to treat the infected burn patients in burn centers ( 1 ) . one of the most important ways to select an effective method to reduce such infections is specifying the relationship between genotype and drug susceptibility ( 1 ) . multidrug - resistant p. aeruginosa isolates re a worrying matter from burn patients in iran ( 19 ) . several classes of esbls such as oxa , per , tem , shv and ges are newly detected in p. aeruginosa . also recently , the number of p. aeruginosa isolates producing kpc - type carbapenemases has significantly raised ( 7 ) . the findings of the present study explained that high levels of resistance to many antimicrobial antibiotics existed among p. aeruginosa isolated from the infected wounds of burn patients and the majority of isolates ( 88% ) were multi - drug resistant . all isolates were totally resistant to ctx , cpm , pip , cfm and tic ; whereas the minimum resistance rate ( 96% ) was demonstrated for atm , gm , tob and an . results of the previous studies approved resistance to a large number of antibiotics usually used to treat burn injuries caused by p. aeruginosa in the iranian hospitals ( 19 ) . for instance , shahcheraghi et al . reported that nosocomial p. aeruginosa isolates were resistant to cefotaxime ( 56% ) and ceftazidime ( 25% ) ( 20 ) . in another study , the rates of resistance were as follows : ceftazidime ( 74.8% ) and cefotaxime ( 50.4% ) ( 19 ) . 
moreover , among 27 isolates , phenotypically positive for esbls production , 0 ( 0% ) , 12 ( 42.85% ) , 14 ( 50% ) and 14 ( 50% ) strains were detected as blaper , blaoxa-10 , blatem and blashv genes , respectively . reported that the frequency of blaper , blatem , blashv and blages were 17% , 9% , 22% and 0% , respectively ( 20 ) . to find more information about the prevalence and type of the relevant genes involved in the multi - drug resistance of nosocomial p. aeruginosa isolates , of the 50 isolates , 7 ( 14% ) , 18 ( 36% ) , 18 ( 36% ) and18 ( 36% ) strains were positive for blaper , blaoxa-10 , blatem and blashv alone or in various combinations , respectively . in addition , earlier reports from iran also showed that the prevalence of blaper-1 and blaoxa-10 were 49.25% and 74.62% , respectively ( 2 ) . the observations suggest that the prevalence of blaoxa-10 , blatem and blashv among all of the 50 isolates were relatively high in the present work . except in turkey , in which per ( 86% ) and oxa-10 ( 55% ) as well as saudi arabia in which oxa-10 ( 56% ) and ges ( 20% ) producing p. aeruginosa strains were reported ( 21 , 22 ) , no current data existed on the real prevalence of these genes in the countries neighboring iran . it is possible that the distribution of per and oxa-10 probably associated with the immigration and traveling between iran and turkey . a study conducted in taiwan showed that tem , shv-18 and oxa-10 genes exist in 100% , 91.3% and 21.7% of the total p. aeruginosa strains , respectively ( 23 ) . based on the results of pcr assay , there were no blages and blakpc genes in the 50 tested isolates of p. aeruginosa in the current study . there are several case reports on the isolation of kpc - producing p. aeruginosa and klebsiella species in burned patients in iran ( 24 , 25 ) . there seems to be a gradual increase in kpc - producing p. aeruginosa in the burn centers of iran . from the clinical point of view , occurrence of kpc - producer gram - negative bacteria among the burned patients causes a much higher degree of resistance to many antibacterial agents including -lactams , quinolones and aminoglycosides ( 26 ) . these findings increase the concern about the future of antibiotic therapy for kpc - producing p. aeruginosa strains . in conclusion , the current study described that the high rates of resistance to different antibacterial agents and a gradual increase in the degree of per , oxa-10 , shv and tem esbls among the majority of imipenem resistant p. aeruginosa isolated from the burn patients with infected wounds is an enormous threat in burn centers of iran . therefore , molecular epidemiologic studies play a significant role in the evaluation of transmission ways of the pathogen for infection control .
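The gene-frequency figures discussed above (for example, 18 of 50 isolates positive for blaOXA-10, and 10 of 50 carrying blaTEM, blaOXA-10 and blaSHV simultaneously) are simple tallies over per-isolate PCR calls. A minimal sketch of that bookkeeping follows; the genotypes listed in it are invented placeholders, not the study's actual data, which live in the paper's tables.

```python
# Hedged sketch: tallying ESBL-gene prevalence and co-occurrence from
# per-isolate PCR results. Isolate IDs and genotypes are made up.
pcr_results = {
    "PA-01": {"blaTEM", "blaSHV", "blaOXA-10"},
    "PA-02": {"blaTEM"},
    "PA-03": set(),                      # no class A/D ESBL gene detected
    "PA-04": {"blaPER", "blaOXA-10"},
}

def prevalence(results, gene):
    n = sum(gene in genes for genes in results.values())
    return n, round(100.0 * n / len(results), 1)

def co_occurrence(results, gene_set):
    n = sum(gene_set <= genes for genes in results.values())
    return n, round(100.0 * n / len(results), 1)

for gene in ("blaPER", "blaOXA-10", "blaTEM", "blaSHV"):
    print(gene, prevalence(pcr_results, gene))
print("TEM+OXA-10+SHV", co_occurrence(pcr_results, {"blaTEM", "blaOXA-10", "blaSHV"}))
```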
background : pseudomonas aeruginosa remains a leading cause of severe wound infection and mortality in burn patients.objectives:the current study aimed to determine the prevalence of ambler class a and d β - lactamases among p. aeruginosa isolated from infected burn injuries in tehran , iran.patients and methods : bacteriological samples were taken from burn patients with clinical symptoms of burn infection . fifty gram - negative , oxidase - positive , catalase - positive bacilli , growing at 42c and producing pigment on mueller - hinton agar , were identified as p. aeruginosa . all of the 50 isolates were examined for antibiotic susceptibility via the disk diffusion method , and for production of ambler class a and d β - lactamases by a phenotypic screening test . the presence of ambler class a and d β - lactamases was confirmed by the polymerase chain reaction technique.results:the results showed that the majority of isolates ( 88% ) were multi - drug resistant . out of these 50 imipenem resistant isolates , 7 ( 14% ) , 18 ( 36% ) , 18 ( 36% ) and 18 ( 36% ) strains were positive for blaper , blaoxa-10 , blatem and blashv genes alone or in combination , respectively . none of the isolates possessed blakpc or blages genes.conclusions:the current study highlights that the high level of resistance to many antibacterial agents and the gradual increase in per , oxa-10 , shv and tem esbls among the majority of imipenem resistant p. aeruginosa isolated from patients with burn infection are an enormous threat in burn centers in iran .
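The phenotypic screening step mentioned in the methods is the combined-disc test described earlier in the article: an isolate is scored as a presumptive ESBL producer when the inhibition zone around the cephalosporin plus clavulanic acid disc is at least 5 mm larger than the zone around the cephalosporin alone. The sketch below encodes that decision rule; the 5 mm cut-off comes from the text, while the function name, field names, and example zone diameters are illustrative assumptions.

```python
# Minimal sketch of the combined-disc ESBL screening rule described above.
# Example zone diameters are invented, not data from the paper.
def is_presumptive_esbl(zone_cephalosporin_mm, zone_with_clavulanate_mm,
                        min_difference_mm=5.0):
    return (zone_with_clavulanate_mm - zone_cephalosporin_mm) >= min_difference_mm

isolates = [
    {"id": "PA-01", "caz": 12, "caz_clav": 21},   # 9 mm increase -> positive
    {"id": "PA-02", "caz": 14, "caz_clav": 16},   # 2 mm increase -> negative
]
for iso in isolates:
    print(iso["id"], is_presumptive_esbl(iso["caz"], iso["caz_clav"]))
```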
SECTION 1. SHORT TITLE; TABLE OF CONTENTS. (a) Short Title.--This Act may be cited as the ``Export-Import Bank Reauthorization Act of 1997''. (b) Table of Contents.-- Sec. 1. Short title; table of contents. Sec. 2. Extension of authority. Sec. 3. Tied aid credit fund authority. Sec. 4. Extension of authority to provide financing for the export of nonlethal defense articles or services the primary end use of which will be for civilian purposes. Sec. 5. Clarification of procedures for denying credit based on the national interest. Sec. 6. Administrative Counsel. Sec. 7. Advisory Committee for sub-Saharan Africa. Sec. 8. Increase in labor representation on the Advisory Committee of the Export-Import Bank. Sec. 9. Outreach to companies. Sec. 10. Clarification of the objectives of the Export-Import Bank. Sec. 11. Including child labor as a criterion for denying credit based on the national interest. Sec. 12. Prohibition relating to Russian transfers of certain missiles to the People's Republic of China. SEC. 2. EXTENSION OF AUTHORITY. (a) In General.--Section 7 of the Export-Import Bank Act of 1945 (12 U.S.C. 635f) is amended by striking ``until'' and all that follows through ``but'' and inserting ``until the close of business on September 30, 2001, but''. (b) Effective Date.--The amendment made by this section shall take effect on September 30, 1997. SEC. 3. TIED AID CREDIT FUND AUTHORITY. (a) Expenditures From Fund.--Section 10(c)(2) of the Export-Import Bank Act of 1945 (12 U.S.C. 635i-3(c)(2)) is amended by striking ``through'' and all that follows through ``1997''. (b) Authorization.--Section 10(e) of such Act (12 U.S.C. 635i-3(e)) is amended by striking the first sentence and inserting the following: ``There are authorized to be appropriated to the Fund such sums as may be necessary to carry out the purposes of this section.''. SEC. 4. EXTENSION OF AUTHORITY TO PROVIDE FINANCING FOR THE EXPORT OF NONLETHAL DEFENSE ARTICLES OR SERVICES THE PRIMARY END USE OF WHICH WILL BE FOR CIVILIAN PURPOSES. Section 1(c) of Public Law 103-428 (12 U.S.C. 635 note; 108 Stat. 4376) is amended by striking ``1997'' and inserting ``2001''. SEC. 5. CLARIFICATION OF PROCEDURES FOR DENYING CREDIT BASED ON THE NATIONAL INTEREST. Section 2(b)(1)(B) of the Export-Import Bank Act of 1945 (12 U.S.C. 635(b)(1)(B)) is amended-- (1) in the last sentence, by inserting ``, after consultation with the Committee on Banking and Financial Services of the House of Representatives and the Committee on Banking, Housing, and Urban Affairs of the Senate,'' after ``President''; and (2) by adding at the end the following: ``Each such determination shall be delivered in writing to the President of the Bank, shall state that the determination is made pursuant to this section, and shall specify the applications or categories of applications for credit which should be denied by the Bank in furtherance of the national interest.''. SEC. 6. ADMINISTRATIVE COUNSEL. Section 3(e) of the Export-Import Bank Act of 1945 (12 U.S.C. 
635a(e)) is amended-- (1) by inserting ``(1)'' after ``(e)''; and (2) by adding at the end the following: ``(2) The General Counsel of the Bank shall ensure that the directors, officers, and employees of the Bank have available appropriate legal counsel for advice on, and oversight of, issues relating to personnel matters and other administrative law matters by designating an attorney to serve as Assistant General Counsel for Administration, whose duties, under the supervision of the General Counsel, shall be concerned solely or primarily with such issues.''. SEC. 7. ADVISORY COMMITTEE FOR SUB-SAHARAN AFRICA. (a) In General.--Section 2(b) of the Export-Import Bank Act of 1945 (12 U.S.C. 635(b)) is amended by inserting after paragraph (8) the following: ``(9)(A) The Board of Directors of the Bank shall take prompt measures, consistent with the credit standards otherwise required by law, to promote the expansion of the Bank's financial commitments in sub-Saharan Africa under the loan, guarantee, and insurance programs of the Bank. ``(B)(i) The Board of Directors shall establish and use an advisory committee to advise the Board of Directors on the development and implementation of policies and programs designed to support the expansion described in subparagraph (A). ``(ii) The advisory committee shall make recommendations to the Board of Directors on how the Bank can facilitate greater support by United States commercial banks for trade with sub-Saharan Africa. ``(iii) The advisory committee shall terminate 4 years after the date of enactment of this subparagraph.''. (b) Reports to Congress.--Within 6 months after the date of enactment of this Act, and annually for each of the 4 years thereafter, the Board of Directors of the Export-Import Bank of the United States shall submit to Congress a report on the steps that the Board has taken to implement section 2(b)(9)(B) of the Export-Import Bank Act of 1945 and any recommendations of the advisory committee established pursuant to such section. SEC. 8. INCREASE IN LABOR REPRESENTATION ON THE ADVISORY COMMITTEE OF THE EXPORT-IMPORT BANK. Section 3(d)(2) of the Export-Import Bank Act of 1945 (12 U.S.C. 635a(d)(2)) is amended-- (1) by inserting ``(A)'' after ``(2)''; and (2) by adding at the end the following: ``(B) Not less than 2 members appointed to the Advisory Committee shall be representative of the labor community, except that no 2 representatives of the labor community shall be selected from the same labor union.''. SEC. 9. OUTREACH TO COMPANIES. Section 2(b)(1) of the Export-Import Bank Act of 1945 (12 U.S.C. 635(b)(1)) is amended by adding at the end the following: ``(I) The President of the Bank shall undertake efforts to enhance the Bank's capacity to provide information about the Bank's programs to small and rural companies which have not previously participated in the Bank's programs. Not later than 1 year after the date of enactment of this subparagraph, the President of the Bank shall submit to Congress a report on the activities undertaken pursuant to this subparagraph.''. SEC. 10. CLARIFICATION OF THE OBJECTIVES OF THE EXPORT-IMPORT BANK. Section 2(b)(1)(A) of the Export-Import Bank Act of 1945 (12 U.S.C. 635(b)(1)(A)) is amended in the first sentence by striking ``real income'' and all that follows to the end period and inserting: ``real income, a commitment to reinvestment and job creation, and the increased development of the productive resources of the United States''. SEC. 11. 
INCLUDING CHILD LABOR AS A CRITERION FOR DENYING CREDIT BASED ON THE NATIONAL INTEREST. Section 2(b)(1)(B) of the Export-Import Bank Act of 1945 (12 U.S.C. 635(b)(1)(B)), as amended by section 5, is amended in the next to the last sentence by inserting ``(including child labor)'' after ``human rights''. SEC. 12. PROHIBITION RELATING TO RUSSIAN TRANSFERS OF CERTAIN MISSILES TO THE PEOPLE'S REPUBLIC OF CHINA. Section 2(b) of the Export-Import Bank Act of 1945 (12 U.S.C. 635(b)) is amended by adding at the end the following: ``(12) Prohibition relating to russian transfers of certain missile systems.--If the President of the United States determines that the military or Government of the Russian Federation has transferred or delivered to the People's Republic of China an SS-N- 22 missile system and that the transfer or delivery represents a significant and imminent threat to the security of the United States, the President of the United States shall notify the Bank of the transfer or delivery as soon as practicable. Upon receipt of the notice and if so directed by the President of the United States, the Board of Directors of the Bank shall not give approval to guarantee, insure, extend credit, or participate in the extension of credit in connection with the purchase of any good or service by the military or Government of the Russian Federation.''. Speaker of the House of Representatives. Vice President of the United States and President of the Senate.
Export-Import Bank Reauthorization Act of 1997 - Amends the Export-Import Bank Act of 1945 to extend the authority of the Export-Import Bank of the United States through FY 2001. Reauthorizes the Bank's tied aid credit program. (Sec. 4) Extends from FY 1997 through 2001 Bank authority to provide financing for the export of nonlethal defense articles or services whose primary end use will be for civilian purposes. (Sec. 5) Revises Bank procedures governing the denial of the extension of credit to foreign countries based on the national interest to: (1) require the President to consult with specified congressional committees before determining that such a denial is in the U.S. national interest; and (2) require written notification to the President of the Bank of such determination, including the applications or categories of applications for credit which should be denied. (Sec. 6) Directs the General Counsel of the Bank to designate an attorney to serve as Assistant General Counsel for Administration, whose duties shall include oversight of and advice to Bank directors, officers, and employees on personnel and other administrative law matters. (Sec. 7) Requires the Board of Directors of the Bank to: (1) take prompt measures to promote the expansion of its loan, guarantee, and insurance programs in sub-Saharan Africa; (2) establish an advisory committee to advise it on the implementation of policies and programs to support such expansion; and (3) report annually to the Congress on steps it has taken to implement such policies and programs and any advisory committee recommendations. (Sec. 8) Revises the composition of the Advisory Committee of the Bank to include the appointment of not fewer than two members from the labor community. (Sec. 9) Directs the President of the Bank to: (1) enhance the Bank's capacity to provide information about its programs to small and rural companies which have not previously participated in them; and (2) report to the Congress on such activities within one year of enactment of this Act. (Sec. 11) Includes child labor as a human rights criterion that could serve as the basis for a presidential determination that an application for Bank credit should be denied for nonfinancial or noncommercial considerations. (Sec. 12) Requires the President, if the Russian military or Government has transferred an SS-N-22 missile system to China and such transfer represents a threat to U.S. security, to notify the Bank as soon as practicable. Directs the Bank Board of Directors to deny any guarantee, insurance, or extension of credit in connection with purchases of Russian goods or services if so directed by the President.
SECTION 1. SHORT TITLE. This Act may be cited as the ``CBP Hiring and Retention Act of 2016'' or the ``CBP HiRe Act''. SEC. 2. RETENTION INCENTIVES. (a) In General.--Chapter 97 of title 5, United States Code, is amended by adding at the end the following: ``Sec. 9702. U.S. Customs and Border Protection retention incentives ``(a) Definitions.--In this section-- ``(1) the term `bonus percentage rate' means the bonus percentage rate for a covered CBP employee established in accordance with subsection (d); ``(2) the term `covered CBP employee' means an employee of U.S. Customs and Border Protection performing activities that are critical to border security, as determined by the Secretary; and ``(3) the term `Secretary' means the Secretary of Homeland Security. ``(b) Authority.--The Secretary may pay a retention bonus to a covered CBP employee if the Secretary determines that, in the absence of a retention bonus, the covered CBP employee would be likely to leave-- ``(1) Federal service; or ``(2) for a different position in the Federal service, including a position in another agency or component of the Department of Homeland Security. ``(c) Written Agreement.-- ``(1) In general.--Payment of a retention bonus under this section is contingent upon the covered CBP employee entering into a written service agreement with U.S. Customs and Border Protection to complete a period of employment with U.S. Customs and Border Protection. ``(2) Terms and conditions.--A written agreement under this section shall include-- ``(A) the length of the required service period; ``(B) the amount of the bonus; ``(C) the method of payment; ``(D) other terms and conditions under which the bonus is payable, subject to the requirements of this section and regulations of the Secretary, which shall include-- ``(i) the conditions under which the agreement may be terminated before the agreed- upon service period has been completed; and ``(ii) the effect of the termination. ``(d) Amount.--A retention bonus under this section-- ``(1) shall be stated as a percentage of the basic pay of the covered CBP employee for the service period associated with the bonus; and ``(2) may not exceed 25 percent of the basic pay of the covered CBP employee. ``(e) Form of Payment.-- ``(1) In general.--A retention bonus may be paid to a covered CBP employee in installments after completion of specified periods of service or in a single lump sum at the end of the full period of service required by the written service agreement. ``(2) Installment payments.-- ``(A) Calculation of installments.--An installment payment is derived by multiplying the amount of basic pay earned in the installment period by a percentage not to exceed the bonus percentage rate established for the covered CBP employee. ``(B) Lump sum final payment.--If the installment payment percentage established for the covered CBP employee under subparagraph (A) is less than the bonus percentage rate established for the covered CBP employee, the accrued but unpaid portion of the bonus is payable as part of the final installment payment to the covered CBP employee after completion of the full service period under the terms of the written service agreement. ``(f) Exclusion From Basic Pay.--A retention bonus under this section is not part of the basic pay of an employee for any purpose.''. (b) Technical and Conforming Amendment.--The table of sections for chapter 97 of title 5, United States Code, is amended by adding at the end the following: ``9702. U.S. 
Customs and Border Protection retention incentives.''. SEC. 3. PILOT PROGRAMS FOR U.S. CUSTOMS AND BORDER PROTECTION. (a) Definitions.--In this section-- (1) the term ``covered area'' means a geographic area that the Secretary determines-- (A) is in a remote location; or (B) is an area for which it is difficult to find employees willing to accept the area as a permanent duty station; (2) the term ``covered CBP employee'' has the meaning given that term in section 9702 of title 5, United States Code, as added by section 2; (3) the term ``Department'' means the Department of Homeland Security; and (4) the term ``Secretary'' means the Secretary of Homeland Security. (b) Special Rates of Pay.-- (1) Authority.--The Secretary may establish one or more special rates of pay for covered CBP employees whose permanent duty station is located in a covered area. (2) Maximum amount.--A special rate of pay established under this subsection may not provide a rate of basic pay for any covered CBP employee that exceeds 125 percent of the otherwise applicable rate of basic pay for the covered CBP employee. (3) Sunset.-- (A) In general.--Subject to subparagraph (B), on and after the first day of the first pay period that begins more than 2 years after the date of enactment of this Act, the Secretary may not pay a covered CBP employee under a special rate of pay established under this subsection. (B) Extension.--If the Secretary determines the program of special rates of pay under this subsection is performing satisfactorily, the Secretary may extend the period during which the Secretary may pay covered CBP employees under such special rates of pay through the day before the first pay period that begins more than 4 years after the date of enactment of this Act. (4) Savings provision.--For any covered CBP employee being paid at a special rate of pay established under this subsection on the day before the date the pilot program terminates under paragraph (3), effective on the date the pilot program terminates under paragraph (3) the rate of pay for the covered CBP employee shall be the rate of pay that would have been in effect for the covered CBP employee had this section never been enacted, including any periodic step-increase or other adjustment that would have taken effect if the covered CBP employee had not been paid at a special rate of pay. (c) Limitation on Use of Polygraphs.-- (1) In general.--Subject to paragraph (2), during the 1- year period beginning on the date of enactment of this Act, if an applicant for a position in U.S. Customs and Border Protection does not successfully complete a polygraph examination required for appointment to that position-- (A) U.S. Customs and Border Protection may not disclose the results of the polygraph examination to any other Federal agency or any other agency or component of the Department; and (B) another Federal agency or another agency or component of the Department may not use the results of the polygraph examination, in whole or in part, in determining whether to appoint the individual to a position in the agency or component. (2) Extension.--If the Secretary determines that the limitation on the use of polygraphs under paragraph (1) is performing satisfactorily, the Secretary may extend the limitation until the end of the 2-year period beginning on the date of enactment of this Act. (3) Disclosures.-- (A) In general.--The Secretary shall provide each applicant for a position in U.S. 
Customs and Border Protection who will be required to successfully complete a polygraph examination before appointment to the position a list of actions or conduct of, or events relating to, the applicant that could disqualify the applicant from being appointed to the position. (B) List requirements.--When providing the list required under subparagraph (A), the Secretary shall-- (i) provide applicants as complete a list as is possible of potential disqualifying actions, conduct, or events; and (ii) clearly inform all applicants that the list provided under subparagraph (A) does not constitute the complete list of potential disqualifying actions, conduct, or events. (4) Use of polygraphs.--Paragraph (1) shall not-- (A) restrict the authority of U.S. Customs and Border Protection to report or refer an admission of criminal activity made by an applicant during a polygraph examination; (B) limit the authority of U.S. Customs and Border Protection to use the results of a polygraph examination administered as a requirement for appointment to a position in U.S. Customs and Border Protection, in whole or in part, in determining whether to appoint the individual to the position; or (C) limit the authority of another Federal agency or another agency or component of the Department to use the results of a polygraph examination administered to an individual by a Federal agency other than U.S. Customs and Border Protection, in whole or in part, in determining whether to appoint the individual to a position in the agency or component.
CBP Hiring and Retention Act of 2016 or the CBP HiRe Act This bill authorizes the Department of Homeland Security (DHS) to pay a retention bonus to U.S. Customs and Border Protection (CBP) employees performing activities that are critical to border security upon determining that such an employee would otherwise likely leave federal service or leave for a different position. A retention bonus service agreement shall include the length of required service and the amount and method of payment of the bonus. DHS may pay a special rate of pay (up to 125% of basic pay) to such CBP employees whose permanent duty stations are located in remote locations or other geographic areas for which finding employees is difficult. Such special pay authority terminates after two years but may be extended for another two years. If an applicant for a CBP position does not successfully complete a required polygraph examination, the CBP may not disclose the polygraph results to any other federal agency or DHS component and such other agency or component may not use the results in determining whether to appoint such individual. Such limitations terminate after one year but may be extended for another year. DHS shall provide each CBP applicant who will be required to successfully complete a polygraph examination before his or her appointment a list of disqualifying actions or conduct of, or events relating to, the applicant.
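Section 2 of the bill (the proposed 5 U.S.C. 9702(d) and (e)) caps the retention bonus at 25 percent of basic pay and allows it to be paid in installments: each installment is the basic pay earned in that period multiplied by an installment percentage no greater than the bonus percentage rate, and any accrued but unpaid portion is folded into the final installment. The sketch below works through that arithmetic with invented pay figures; it is an illustration of the formula, not an official payroll implementation.

```python
# Hedged sketch of the installment arithmetic in proposed 5 U.S.C. 9702(d)-(e).
# Pay amounts and rates below are invented for illustration only.
def installment_schedule(basic_pay_per_period, bonus_rate, installment_rate):
    assert bonus_rate <= 0.25, "retention bonus may not exceed 25% of basic pay"
    assert installment_rate <= bonus_rate
    payments, accrued_unpaid = [], 0.0
    for i, pay in enumerate(basic_pay_per_period):
        installment = pay * installment_rate
        accrued_unpaid += pay * bonus_rate - installment
        if i == len(basic_pay_per_period) - 1:   # final installment:
            installment += accrued_unpaid        # add the accrued but unpaid portion
        payments.append(round(installment, 2))
    return payments

# Four service periods at $20,000 basic pay each, 25% bonus paid at 15% per period
print(installment_schedule([20000] * 4, 0.25, 0.15))
# -> [3000.0, 3000.0, 3000.0, 11000.0]  (total equals 25% of $80,000 in basic pay)
```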
describes the array of chemical modifications on dna and histone proteins that dynamically regulate gene expression . the complex set of reversible post - translational modifications ( ptms ) that decorate histones ultimately determines the overall state of chromatin , giving rise to the so - called histone code , a cellular language involving specific proteins that introduce ( writers ) , remove ( erasers ) , or recognize ( readers ) ptms . -n - acetylation of lysine residues is a widespread post - translational mark , deposited on proteins throughout the entire proteome by lysine acetyl transferases ( kats ) , removed by lysine deacetylases ( kdacs ) , and recognized by evolutionary conserved protein modules such as bromodomains ( named after the drosophila melanogasterbrahma gene ) as well as the more recently discovered yeats domains . the human proteome encodes 61 bromodomains ( brds ) found on 42 diverse proteins , including histone acetyl transferases ( hats ) and hat - associated proteins such as gcn5 , pcaf , and bromodomain 9 ( brd9 ) , transcriptional coactivators such as the taf and trim / tif proteins , atp - dependent chromatin remodeling complexes such as baz1b , helicases such as smarca , nuclear scaffolding proteins such as the polybromo pb1 as well as transcriptional regulators , such as the bromo and extra terminal ( bet ) proteins . the family of human brd modules has almost been completely structurally characterized resulting in a recent classification of eight distinct structural subfamilies . the bet subfamily of brds has attracted a lot of attention , as its members ( brd2 , brd3 , brd4 , and the testis specific brdt ) play a central role in cell cycle progression , cellular proliferation , and apoptosis . bets contain two n - terminal brd modules that interact with acetylated histones , transcription factors or other acetylated transcriptional regulators , an extra terminal ( et ) recruitment domain , and a c - terminal motif ( in the case of brd4 and brdt ) responsible for the recruitment of the positive transcription elongation factor b ( p - tefb ) , ultimately controlling transcriptional elongation . importantly , bet brds have been successfully targeted by small molecule inhibitors , such as the triazolothienodiazepine ( + ) -jq1 and the triazolobenzodiazepine ibet762 , which were identified employing phenotypic screening and have in the past few years consolidated the emerging role of brds as viable therapeutic targets . indeed , bet inhibition suppresses tumor growth in diverse mouse models of cancer , such as nut midline carcinoma , acute myeloid and mixed lineage leukemia , multiple myeloma , glioblastoma , melanoma , burkitt s lymphoma , neuroblastoma and prostate cancer , leading to a number of clinical trials seeking to modulate bet function in diverse tumor settings . the initial success targeting bet proteins and the availability of robust recombinant systems of expression as well as biophysical assays to probe bet ligand interactions have spawned a number of medicinal chemistry efforts seeking to identify novel scaffolds that can block binding of acetylated lysines to these protein interaction modules . 
Phenotypic screening, molecular docking, and fragment-based approaches have emerged as successful tools for discovering other Kac mimetics, leading to the identification of a number of new chemotypes, including 3,4-dimethylisoxazoles, 3-methyl-3,4-dihydroquinazolinones, indolizinethanones, N-phenylacetamides, N-acetyl-2-methyltetrahydroquinolines, triazolopyrimidines, methylquinolines, chloropyridones, thiazolidinones, 4-acylpyrroles, and triazolophthalazines (summarized in Chart 1). Starting from fragment hits, highly potent and selective BET inhibitors have also emerged, suggesting that it is possible to access new chemical space for inhibitor development via initial fragment screening. Druggability analysis based on the available structural information suggests that it is possible to target most BRD structural classes. Even bromodomains previously considered hard to target, such as BAZ2B, have recently been successfully addressed by highly selective and potent inhibitors, and fragment-based approaches have now yielded chemotypes targeting BRDs outside the BET family, including CREBBP/p300, ATAD2, BAZ2B, and BRPF1. Chart 1: (a) 3,4-dimethylisoxazole; (b) 3-methyl-3,4-dihydroquinazolinone; (c) indolizinethanone; (d) N-phenylacetamide; (e) N-acetyl-2-methyltetrahydroquinoline; (f) triazolopyrimidine; (g) methylquinoline; (h) chloropyridone; (i) thiazolidinone; (j) 4-acylpyrrole; (k) triazolophthalazine; (l) methyltriazole. An unexpected but interesting finding that recently attracted attention was the identification of clinically approved kinase inhibitors and tool compounds that bind BET BRDs with high affinity and with selectivity across the human BRD family. Crystal structures with the first bromodomain of BRD4 revealed acetyllysine-mimetic binding of the PLK inhibitor BI-2536 and the JAK inhibitor fedratinib without any significant distortion of the inhibitors when compared to kinase complexes, suggesting the potential to develop polypharmacology targeting both BRDs and kinases at the same time. Interestingly, the cyclin-dependent kinase inhibitor dinaciclib was also identified as a binder of BRD4, suggesting that other inhibitor classes might be good starting points for developing inhibitors for BRDs. In light of the successful fragment-based programs and their reliability for discovering BRD inhibitors, we turned to purine-based fragments. Purine is a privileged chemical template, as it is one of the most abundant N-based heterocycles in nature, and a number of purine-based drugs are currently approved and used for the treatment of cancer (6-mercaptopurine, 6-thioguanine), viral infections such as AIDS and herpes (carbovir, abacavir, acyclovir, ganciclovir), hairy cell leukemia (cladribine), and organ rejection (azathioprine). Moreover, purine-based compounds have emerged as reliable chemical biology tools, since they can interact with a variety of biological targets involved in a number of diseases.
Examples include their activity as inhibitors of microtubules (myoseverin), the 90 kDa heat shock protein Hsp90 (PU3), sulfotransferases (NG38), adenosine receptors (KW-6002), and cyclin-dependent kinases (olomoucine, Figure 1a; roscovitine). (a) Structure of olomoucine, a potent cyclin-dependent kinase inhibitor, and numbering of the 2-amino-9H-purine core scaffold. (c) Docking pose of 1 (yellow stick representation) onto the bromodomain of BRD4(1) positions the bulky halogen group on the top of the binding pocket. The protein is shown as a white ribbon (starting model, PDB code 4MEN) or in magenta (docked model), with characteristic residues shown as sticks in the same color scheme. (d) Alternative binding of compound 1 into the bromodomain of BRD4(1) with the 6-chloro substituent adopting an acetyllysine-mimetic pose. (e) Docking of compound 2a positions the 2-amino-9H-purine ring system of the ligand within the Kac cavity, sterically packing between V87 and I146, suggesting that this scaffold topology can be further utilized to target bromodomains. The same color scheme as in (c) and (d) is used, with the ligand shown as orange sticks. (f) Fragments were tested in a thermal shift assay against bromodomains of the BET subfamily as well as representative members from other families. Computational analyses followed by in vitro evaluation of purine-based fragments identified this scaffold as a novel, effective Kac mimetic. Interestingly, initial purine fragment hits also demonstrated affinity outside the BET subfamily of BRDs, toward the less explored BRDs of PB1, CREBBP, and BRD9. To our knowledge, the only known BRD9 inhibitors today show cross-reactivity toward BET BRDs and CREBBP, employing a triazolophthalazine template. The precise biological function of BRD9 remains elusive, although it has been identified as a component of the SWI/SNF complex and has been associated with a number of different cancer types, including non-small-cell lung cancer, cervical carcinoma, and hepatocellular carcinoma. Notably, its BRD reader module has been frequently found mutated in lung squamous cell carcinoma, prostate adenocarcinoma, and uterine corpus endometrial carcinoma. We chose a small purine fragment, compound 2a (Chart 2), from our fragment hits for structural optimization and found that some of its 6-aryl derivatives exhibited nanomolar affinity toward BRD9, with lower activity toward BRD4. Importantly, the developed inhibitors induced an appreciable change in the three-dimensional shape of the receptor binding cavity, indicating that their binding occurs through an induced-fit mechanism. Previously, induced-fit binding was observed when a dihydroquinoxalinone was shown to insert under an arginine residue of the CREBBP BRD, resulting in restructuring of the Kac binding site of this bromodomain. In our case the rearrangement of the binding site of BRD9 was more extensive, with several side chains rotating and shifting to accommodate the small purine ligand. The optimized compound 11 (Chart 2) exhibited nanomolar affinity for BRD9 with weaker micromolar affinity for BRD4 in vitro, displaced the BRD9 BRD from chromatin without affecting BRD4 binding to histones, and did not show any cytotoxicity toward human embryonic kidney cells.
Our work establishes the proof-of-concept of using 2-amino-9H-purines as a starting point to develop Kac-mimetic compounds targeting BRDs outside the BET family, with compound 11 representing a promising low nanomolar starting point toward the discovery of selective BRD9 ligands to be used as chemical probes for deep biological and pharmacological investigation of BRD9. Protein 3D models of BRD4 (first bromodomain, PDB code 4MEN, ref 30) and BRD9 (apo form, PDB code 3HME, ref 15) were prepared using the Schrödinger Protein Preparation Wizard workflow (Schrödinger, LLC, New York, NY, 2013). Briefly, water molecules found 5 Å or more away from heteroatom groups were removed, and cap termini were included. Chemical structures of the investigated compounds were built with Maestro's Build panel (version 9.6; Schrödinger, LLC, New York, NY, 2013) and subsequently processed with LigPrep (version 2.8; Schrödinger, LLC, New York, NY, 2013) in order to generate all the possible stereoisomers, tautomers, and protonation states at a pH of 7.4 ± 1.0; the resulting ligands were finally minimized employing the OPLS_2005 force field. Binding sites for the initial Glide (version 6.1, Schrödinger, LLC, New York, NY, 2013) docking phases of the induced fit workflow (Induced Fit Docking protocol 2013-3, Glide version 6.1, Prime version 3.4, Schrödinger, LLC, New York, NY, 2013) were calculated on the BRD4(1) and BRD9 structures, considering for grid generation the centroid of the cocrystallized ligand (N,5-dimethyl-N-(4-methylbenzyl)triazolo[1,5-a]pyrimidin-7-amine for BRD4(1), from PDB code 4MEN), or cocrystallized ligand 7d (from the BRD9/7d complex), or Tyr106 (for BRD9, PDB code 3HME). In all cases, cubic inner boxes with dimensions of 10 Å were applied to the proteins, and outer boxes were automatically detected. Ring conformations of the investigated compounds were sampled using an energy window of 2.5 kcal/mol; conformations featuring nonplanar amide bonds were penalized. Side chains of residues close to the docking outputs (within 8.0 Å of ligand poses) were reoriented using Prime (version 3.4, Schrödinger, LLC, New York, NY, 2013), and ligands were redocked into their corresponding low-energy protein structures (Glide standard precision mode), considering inner-box dimensions of 5.0 Å (outer boxes automatically detected), with the resulting complexes ranked according to GlideScore. Calculations were performed using the Glide software package (SP mode, version 6.1, Schrödinger package) in order to determine the binding mode of compound 11 in the acetyllysine cavity of BRD9. First the receptor grid was generated focused on the BRD9 binding site, taking as a reference structure the experimentally determined complex of compound 7d with BRD9, with inner- and outer-box dimensions of 10 Å × 10 Å × 10 Å and 21.71 Å × 21.71 Å × 21.71 Å, respectively. Standard precision (SP) Glide mode was employed, accounting for compound flexibility in the sampling of compound 11. The sampling step was set to expanded sampling mode (4 times), keeping 10,000 ligand poses for the initial phase of docking, followed by selection of 800 ligand poses for energy minimization. A maximum of 50 output structures were saved for each ligand, with a scaling factor of 0.8 applied to van der Waals radii and a partial charge cutoff of 0.15.
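The grid-centering step described above can be mimicked outside the Schrödinger suite. The following minimal sketch is not part of the published protocol: it parses plain PDB HETATM records to compute a ligand centroid and a cubic box around it. The file name "4men.pdb" and the ligand residue name "LIG" are placeholders, not values taken from the paper.

```python
# Sketch: compute a docking-box center from a cocrystallized ligand's centroid.
# Assumed inputs (not from the paper): a local file "4men.pdb" whose ligand of
# interest carries the HETATM residue name "LIG".

def ligand_centroid(pdb_path: str, resname: str):
    """Average the coordinates of all heavy HETATM records with one residue name."""
    xs, ys, zs = [], [], []
    with open(pdb_path) as handle:
        for line in handle:
            if not line.startswith("HETATM"):
                continue
            if line[17:20].strip() != resname:
                continue
            if line[76:78].strip() == "H":        # skip hydrogens if present
                continue
            xs.append(float(line[30:38]))
            ys.append(float(line[38:46]))
            zs.append(float(line[46:54]))
    if not xs:
        raise ValueError(f"no HETATM records found for residue {resname!r}")
    n = len(xs)
    return sum(xs) / n, sum(ys) / n, sum(zs) / n


def cubic_box(center, edge=10.0):
    """Return (min_corner, max_corner) of a cubic box with the given edge length in angstroms."""
    half = edge / 2.0
    return tuple(c - half for c in center), tuple(c + half for c in center)


if __name__ == "__main__":
    center = ligand_centroid("4men.pdb", "LIG")   # hypothetical inputs
    inner = cubic_box(center, edge=10.0)          # 10-angstrom inner box, as in the text
    print("box center:", center)
    print("inner box corners:", inner)
```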
a postdocking optimization of the obtained docking outputs was performed , accounting for a maximum of 50 poses based on a 0.5 kcal / mol rejection cutoff for the obtained minimized poses . all commercially available starting materials were purchased from sigma - aldrich and were used as received . solvents used for the synthesis were of hplc grade and were purchased from sigma - aldrich or carlo erba reagenti . compounds were dissolved in 0.5 ml of meod , cdcl3 , or dmso - d6 . coupling constants ( j ) are reported in hertz , and chemical shifts are expressed in parts per million ( ppm ) on the scale relative to the solvent peak as internal reference . electrospray mass spectrometry ( esi - ms ) was performed on a lcq deca termoquest ( san jose , ca , usa ) mass spectrometer . chemical reactions were monitored on silica gel 60 f254 plates ( merck ) , and spots were visualized under uv light . analytical and semipreparative reversed - phase hplc were performed on an agilent technologies 1200 series high performance liquid chromatography system using jupiter proteo c18 reversed - phase columns ( ( a ) 250 mm 4.60 mm , 4 m , 90 , flow rate = 1 ml / min ; ( b ) 250 mm 10.00 mm , 10 m , 90 , flow rate = 4 ml / min respectively , phenomenex ) . the binary solvent system ( a / b ) was as follows : 0.1% tfa in water ( a ) and 0.1% tfa in ch3cn ( b ) . the purity of all tested compound ( > 98% ) was determined by hplc analysis . microwave irradiation reactions were carried out in a dedicated cem - discover focused microwave synthesis apparatus , operating with continuous irradiation power from 0 to 300 w utilizing the standard absorbance level of 300 w maximum power . the discover system also included controllable ramp time , hold time ( reaction time ) , and uniform stirring . after the irradiation period , reaction vessels were cooled rapidly ( 60120 s ) to ambient temperature by air jet cooling . 2-amino-6-bromopurine ( 50.0 mg , 0.23 mmol ) , commercially available boronic acids ( 0.29 mmol ) , pd(oac)2 ( 2.70 mg , 0.012 mmol ) , p(c6h4so3na)3 ( 34.0 mg , 0.06 mmol ) , and cs2co3 ( 228.0 mg , 0.70 mmol ) were added to a 10 ml microwave vial equipped with a magnetic stirrer . degassed acetonitrile ( 0.5 ml ) and degassed water ( 1.0 ml ) were added by means of an airtight syringe . after irradiation , the vial was cooled to ambient temperature by air jet cooling and a mixture of cold water and hcl ( 1.5 m ) was added ( 5.0 and 2.0 ml , respectively ) . the mixture was subsequently poured into crushed ice and then left at 4 c overnight . the resulting precipitate was filtered and purified by hplc to give the desired product in good yields ( 5390% ) . hplc purification was performed by semipreparative reversed - phase hplc using the gradient conditions reported below for each compound . the final products were obtained with high purity ( > 95% ) detected by hplc analysis and were fully characterized by esi - ms and nmr spectroscopy . 2-amino-6-arylpurine ( 0.1 mmol ) was dissolved in 0.4 ml of thf at room temperature . to this mixture 0.2 ml ( 0.2 mmol ) of tbaf ( 1.0 m solution in thf , aldrich ) and iodomethane ( 12.5 l , 0.2 mmol ) or chloroacetone ( 16.0 l , 0.2 mmol ) were added . water was added , and the aqueous layer was extracted three times with dichloromethane . the combined organic layers were washed with water , dried with anhydrous na2so4 and concentrated under vacuum . 
the crude mixture was purified by semipreparative reversed - phase hplc using the gradient conditions reported below for each compound . compounds were obtained in good yields ( 5088% ) and high purity ( > 98% ) and were fully characterized by esi - ms and nmr spectroscopy . a three - necked flask was charged with the 2-amino-6-arylpurine derivative ( 7b d ) ( 0.5 mmol ) and 50% h2so4 ( 2.0 ml ) . the mixture was stirred at room temperature for 30 min and then cooled to 5 c . a solution of nano2 ( 48.3 mg , 0.7 mmol ) in h2o ( 200 l ) was added dropwise , and the release of nitrogen gas was immediately observed . the reaction mixture was then stirred at 10 c for 2 h , and urea ( 24.0 mg , 0.4 mmol ) was added to decompose the excess of nano2 . the mixture was then stirred at 50 c for 1 h and neutralized with 50% naoh solution , diluted with water , and extracted three times with etoac . the crude mixture was purified by semipreparative reversed - phase hplc to get the pure products in good yields ( 4763% ) . 2b was obtained from commercially available 2a following general procedure b as a yellow powder in 85% yield . rp - hplc : tr = 12.4 min , gradient condition from 5% b to 100% b in 95 min , flow rate of 4 ml / min , = 240 nm . h nmr ( 600 mhz , meod ) : = 3.74 ( s , 3h ) , 8.25 ( s , 1h ) . c nmr ( 150 mhz , meod ) : = 30.6 , 126.3 , 142.5 , 149.4 , 155.62 , 160.8 . esi - ms , calculated for c6h6brn5 227.0 ; found m / z = 228.1 [ m + h ] . 3a was obtained following general procedure a as a white powder in 90% yield from 2a and phenylboronic acid 13 . rp - hplc tr = 12.1 min , gradient condition from 5% b to 100% b in 60 min , flow rate of 4 ml / min , = 240 nm . 3b was obtained following general procedure a as a pale yellow powder in 86% yield from 2a and 4-methoxyphenylboronic acid 14 . rp - hplc tr = 14.6 min , gradient condition from 5% b to 100% b in 65 min , flow rate of 4 ml / min , = 240 nm . 3c was obtained following general procedure b as a yellow powder in 88% yield from 3b . rp - hplc tr = 15.5 min , gradient condition from 5% b to 100% b in 65 min , flow rate of 4 ml / min , = 240 nm . h nmr ( 300 mhz , cdcl3 ) : = 3.79 ( s , 3h ) , 3.94 ( s , 3h ) , 7.12 ( d , j = 8.6 hz , 2h ) , 7.95 ( s , 1h ) , 8.55 ( d , j = 8.5 hz , 2h ) . esi - ms , calculated for c13h13n5o 255.1 ; found m / z = 256.3 [ m + h ] . 3d was obtained following general procedure a as a pale yellow powder in 90% yield from 2a and 4-phenoxyphenylboronic acid 27 . rp - hplc tr = 24.1 min , gradient condition from 5% b to 100% b in 65 min , flow rate of 4 ml / min , = 240 nm . h nmr ( 300 mhz , meod ) : = 7.107.20 ( m , 4h ) , 7.25 ( t , j = 7.1 hz , 1h ) , 7.46 ( t , j = 7.5 hz , 2h ) , 8.36 ( br s , 3h ) . esi - ms , calculated for c17h13n5o 303.1 ; found m / z = 304.3 [ m + h ] . 3e was obtained from 3d following general procedure b as a yellow powder in 82% yield . rp - hplc tr = 27.5 min , gradient condition from 5% b to 100% b in 75 min , flow rate of 4 ml / min , = 240 nm . h nmr ( 600 mhz , cdcl3 ) : = 3.77 ( s , 3h ) , 7.057.18 ( m , 5h ) , 7.37 ( t , j = 7.8 hz , 2h ) , 7.79 ( s , 1h ) , 8.70 ( br s , 2h ) . c nmr ( 150 mhz , cdcl3 ) : = 30.6 , 118.3 , 121.6 , 124.8 , 125.7 , 127.1 , 131.2 , 133.9 , 142.8 , 147.6 , 150.7 , 156.3 , 157.8 , 163.3 . esi - ms , calculated for c18h15n5o 317.1 ; found m / z = 318.2 [ m + h ] . 3f was obtained following general procedure a as a yellow powder in 77% yield from 2a and 4-(benzyloxy)phenylboronic acid 30 . 
rp - hplc tr = 17.4 min , gradient condition from 5% b to 100% b in 40 min , flow rate of 4 ml / min , = 240 nm . h nmr ( 300 mhz , meod ) : = 5.28 ( s , 2h ) , 7.30 ( d , j = 8.8 hz , 2h ) , 7.367.46 ( m , 3h ) , 7.50 ( br s , 2h ) , 8.378.45 ( m , 3h ) . esi - ms , calculated for c18h15n5o 317.1 ; found m / z = 318.1 [ m + h ] . 3 g was obtained following general procedure a as a yellow powder in 79% yield from 2a and 4-(3-(trifluoromethyl)phenoxymethyl)phenylboronic acid 31 . rp - hplc tr = 25.0 min , gradient condition from 5% b to 100% b in 50 min , flow rate of 4 ml / min , = 240 nm . h nmr ( 300 mhz , meod ) : = 5.31 ( s , 2h ) , 7.227.33 ( m , 3h ) , 7.48 ( t , j = 7.9 hz , 1h ) , 7.73 ( d , j = 8.1 hz , 2h ) , 8.37 ( br s , 3h ) . esi - ms , calculated for c19h14f3n5o 385.1 ; found m / z = 386.1 [ m + h ] . 3h was obtained following general procedure a as a yellow powder in 77% yield from 2a and 4-((4-(2-methoxyethyl)phenoxy)methyl)phenylboronic acid 32 . rp - hplc tr = 17.9 min , gradient condition from 5% b to 100% b in 40 min , flow rate of 4 ml / min , = 240 nm . h nmr ( 300 mhz , meod ) : = 2.80 ( t , j = 6.9 hz , 2h ) , 3.33 ( s , 3h ) , 3.57 ( t , j = 6.9 hz , 2h ) , 5.22 ( s , 2h ) , 6.96 ( d , j = 8.7 hz , 2h ) , 7.16 ( d , j = 8.6 hz , 2h ) , 7.70 ( d , j = 8.1 hz , 2h ) , 8.33 ( s , 1h ) , 8.40 ( d , j = 8.3 hz , 2h ) . esi - ms , calculated for c21h21n5o2 375.2 ; found m / z = 376.1 [ m + h ] . 4a was obtained following general procedure a as a yellow powder in 84% yield from 2a and 4-benzyloxy-3-chlorophenylboronic acid 28 . rp - hplc tr = 20.4 min , gradient condition from 5% b to 100% b in 40 min , flow rate of 4 ml / min , = 240 nm . h nmr ( 300 mhz , meod ) : = 5.32 ( s , 2h ) , 7.337.44 ( m , 4h ) , 7.50 ( br s , 2h ) , 8.34 ( br s , 2h ) , 8.50 ( s , 1h ) . esi - ms , calculated for c18h14cln5o 351.1 ; found m / z = 352.1 [ m + h ] . 4b was obtained following general procedure a as a yellow powder in 86% yield from 2a and 3-chloro-4-(3,5-dimethoxybenzyloxy)phenylboronic acid 33 . rp - hplc tr = 20.5 min , gradient condition from 5% b to 100% b in 40 min , flow rate of 4 ml / min , = 240 nm . h nmr ( 600 mhz , meod ) : = 3.76 ( s , 6h ) , 5.28 ( s , 2h ) , 6.48 ( s , 1h ) , 6.68 ( s , 2h ) , 7.32 ( d , j = 8.8 hz , 1h ) , 8.18 ( s , 1h ) , 8.47 ( br s , 1h ) , 8.61 ( s , 1h ) . c nmr ( 150 mhz , meod ) : = 56.2 , 71.6 , 100.7 , 106.2 , 115.6 , 123.2 , 127.0 , 128.7 , 132.1 , 139.7 , 143.1 , 149.5 , 150.3 , 155.7 , 157.2 , 160.4 , 161.8 . esi - ms , calculated for c20h18cln5o3 411.1 ; found m / z = 412.1 [ m + h ] . 4c was obtained following general procedure a as a pale yellow powder in 84% yield from 2a and 1,4-benzodioxane-6-boronic acid 26 . min , gradient condition from 5% b to 100% b in 50 min , flow rate of 4 ml / min , = 240 nm . h nmr ( 300 mhz , meod ) : = 4.344.40 ( m , 4h ) , 7.09 ( d , j = 8.5 hz , 1h ) , 7.897.98 ( m , 2h ) , 8.34 ( s , 1h ) . esi - ms , calculated for c13h11n5o2 269.1 ; found m / z = 270.2 [ m + h ] . 4d was obtained following general procedure a as a yellow powder in 78% yield from 2a and 3-(4-chlorobenzyloxy)phenylboronic acid 29 . rp - hplc tr = 29.8 min , gradient condition from 5% b to 100% b in 70 min , flow rate of 4 ml / min , = 240 nm . h nmr ( 300 mhz , meod ) : = 5.19 ( s , 2h ) , 7.29 ( d , j = 7.6 hz , 1h ) , 7.40 ( d , j = 8.4 hz , 2h ) , 7.457.58 ( m , 3h ) , 7.90 ( d , j = 6.9 hz , 1h ) , 8.01 ( s , 1h ) , 8.37 ( s , 1h ) . esi - ms , calculated for c18h14cln5o 351.1 ; found m / z = 352.2 [ m + h ] . 
4e was obtained following general procedure b as a yellow powder in 63% yield from 4d . rp - hplc tr = 28.3 min , gradient condition from 5% b to 100% b in 60 min , flow rate of 4 ml / min , = 240 nm . h nmr ( 300 mhz , cdcl3 ) : = 3.77 ( s , 3h ) , 5.20 ( s , 2h ) , 7.13 ( d , j = 7.8 hz , 1h ) , 7.36 ( d , j = 8.2 hz , 2h ) , 7.417.51 ( m , 3h ) , 7.83 ( s , 1h ) , 8.318.42 ( m , 2h ) . esi - ms , calculated for c19h16cln5o 365.1 ; found m / z = 366.2 [ m + h ] . 5a was obtained following general procedure a as a white powder in 83% yield from 2a and 3-bromo-5-butoxyphenylboronic acid 21 . rp - hplc tr = 26.7 min , gradient condition from 5% b to 100% b in 50 min , flow rate of 4 ml / min , = 240 nm . h nmr ( 600 mhz , dmso - d6 ) : = 0.93 ( t , j = 7.3 hz , 3h ) , 1.391.51 ( m , 2h ) , 1.681.76 ( m , 2h ) , 4.06 ( t , j = 6.3 hz , 2h ) , 6.46 ( s , 2h ) , 7.28 ( s , 1h ) , 8.16 ( s , 1h ) , 8.37 ( s , 1h ) , 8.50 ( s , 1h ) , 12.72 ( s , 1h ) . c nmr ( 150 mhz , dmso - d6 ) : = 14.9 , 20.1 , 32.1 , 69.5 , 115.8 , 120.9 , 124.4 , 125.8 , 138.7 , 142.2 , 148.1 , 151.3 , 157.2 , 161.3 . esi - ms , calculated for c15h16brn5o 361.1 ; found m / z = 362.3 [ m + h ] . 5b was obtained following general procedure b as a yellow powder in 85% yield from 5a . rp - hplc tr = 27.9 min , gradient condition from 5% b to 100% b in 45 min , flow rate of 4 ml / min , = 240 nm . h nmr ( 300 mhz , cdcl3 ) : = 0.97 ( t , j = 7.3 hz , 3h ) , 1.461.55 ( m , 2h ) , 1.731.83 ( m , 2h ) , 3.79 ( s , 3h ) , 4.11 ( t , j = 6.2 hz , 2h ) , 7.29 ( s , 1h ) , 7.92 ( s , 1h ) , 7.97 ( s , 1h ) , 8.33 ( s , 1h ) . esi - ms , calculated for c16h18brn5o 375.1 ; found m / z = 376.2 [ m + h ] . 6a was obtained following general procedure a as a white powder in 53% yield from 2a and 2,6-dimethoxyphenylboronic acid 22 . rp - hplc tr = 13.0 min , gradient condition from 5% b to 100% b in 80 min , flow rate of 4 ml / min , = 240 nm . h nmr ( 300 mhz , meod ) : = 3.82 ( s , 6h ) , 6.87 ( d , j = 8.5 hz , 2h ) , 7.60 ( t , j = 8.5 hz , 1h ) , 8.44 ( s , 1h ) . esi - ms , calculated for c13h13n5o2 271.1 ; found m / z = 272.2 [ m + h ] . 6b was obtained following general procedure a as a white powder in 62% yield from 2a and 2-isopropoxy-6-methoxyphenylboronic acid 23 . rp - hplc tr = 18.1 min , gradient condition from 5% b to 100% b in 40 min , flow rate of 4 ml / min , = 240 nm . h nmr ( 300 mhz , meod ) : = 1.18 ( s , 6h ) , 3.80 ( s , 3h ) , 4.604.71 ( m , 1h ) , 6.806.88 ( m , 2h ) , 7.56 ( t , j = 8.5 hz , 1h ) , 8.43 ( s , 1h ) . esi - ms , calculated for c15h17n5o2 299.1 ; found m / z = 300.1 [ m + h ] . 6c was obtained following general procedure a as a white powder in 76% yield from 2a and 2-isobutoxy-6-methoxy - phenylboronic acid 24 . rp - hplc tr = 14.2 min , gradient condition from 5% b to 100% b in 40 min , flow rate of 4 ml / min , = 240 nm . h nmr ( 600 mhz , dmso - d6 ) : = 0.69 ( s , 6h ) , 1.721.79 ( m , 1h ) , 3.71 ( br s , 5h ) , 6.816.86 ( m , 2h ) , 7.52 ( t , j = 8.1 hz , 1h ) , 8.47 ( s , 1h ) . c nmr ( 150 mhz , meod ) : = 20.3 , 29.0 , 57.6 , 76.1 , 106.2 , 118.6 , 125.8 , 134.7 , 137.4 , 142.7 , 154.9 , 159.3 , 160.8 , 173.0 . esi - ms , calculated for c16h19n5o2 313.2 ; found m / z = 314.1 [ m + h ] . 7a was obtained following general procedure a as a pale yellow powder in 78% yield from 2a and 2-methoxyphenylboronic acid 15 . rp - hplc tr = 14.9 min , gradient condition from 5% b to 100% b in 80 min , flow rate of 4 ml / min , = 240 nm . 
h nmr ( 300 mhz , meod ) : = 3.97 ( s , 3h ) , 7.21 ( t , j = 7.5 hz , 1h ) , 7.30 ( d , j = 8.4 hz , 1h ) , 7.67 ( t , j = 7.8 hz , 1h ) , 8.10 ( d , j = 7.4 hz , 1h ) , 8.45 ( s , 1h ) . esi - ms , calculated for c12h11n5o 241.1 ; found m / z = 242.2 [ m + h ] . 7b was obtained following general procedure a as a pale yellow powder in 70% yield from 2a and 5-fluoro-2-methoxyphenylboronic acid 16 . rp - hplc tr = 14.0 min , gradient condition from 5% b to 100% b in 60 min , flow rate of 4 ml / min , = 240 nm . h nmr ( 300 mhz , meod ) : = 3.95 ( s , 3h ) , 7.29 ( d , j = 8.9 hz , 1h ) , 7.41 ( dd , j = 8.9 , 2.4 hz , 1h ) , 7.92 ( br s , 1h ) , 8.54 ( s , 1h ) . esi - ms , calculated for c12h10fn5o 259.1 ; found m / z = 260.1 [ m + h ] . 7c was obtained following general procedure a as a pale yellow powder in 78% yield from 2a and 5-chloro-2-methoxyphenylboronic acid 17 . rp - hplc tr = 18.1 min , gradient condition from 5% b to 100% b in 80 min , flow rate of 4 ml / min , = 240 nm . h nmr ( 300 mhz , meod ) : = 3.95 ( s , 3h ) , 7.28 ( d , j = 8.9 hz , 1h ) , 7.63 ( dd , j = 8.9 , 2.5 hz , 1h ) , 8.03 ( br s , 1h ) , 8.53 ( s , 1h ) . esi - ms , calculated for c12h10cln5o 275.1 ; found m / z = 276.1 [ m + h ] . 7d was obtained following general procedure a as a yellow powder in 77% yield from 2a and 5-bromo-2-methoxyphenylboronic acid 18 . rp - hplc tr = 17.3 min , gradient condition from 5% b to 100% b in 60 min , flow rate of 4 ml / min , = 240 nm . h nmr ( 600 mhz , meod ) : = 3.95 ( s , 3h ) , 7.23 ( d , j = 9.0 hz , 1h ) , 7.76 ( dd , j = 8.9 , 2.4 hz , 1h ) , 8.14 ( br s , 1h ) , 8.53 ( s , 1h ) . c nmr ( 150 mhz , meod ) : = 56.3 , 113.3 , 114.4 , 123.7 , 126.4 , 135.4 , 135.9 , 142.6 , 149.5 , 155.1 , 157.8 , 160.9 . esi - ms , calculated for c12h10brn5o 319.0 ; found m / z = 320.3 [ m + h ] . 7e was obtained following general procedure a as a yellow powder in 82% yield from 2a and 5-bromo-2-ethoxyphenylboronic acid 19 . rp - hplc tr = 21.6 min , gradient condition from 5% b to 100% b in 80 min , flow rate of 4 ml / min , = 240 nm . h nmr ( 300 mhz , meod ) : = 1.31 ( t , j = 6.9 hz , 3h ) , 4.134.24 ( m , 2h ) , 7.19 ( d , j = 8.9 hz , 1h ) , 7.72 ( dd , j = 8.9 , 2.3 hz , 1h ) , 7.98 ( brs , 1h ) , 8.53 ( s , 1h ) . esi - ms , calculated for c13h12brn5o 333.0 ; found m / z = 334.1 [ m + h ] . 8a was obtained following general procedure a as a yellow powder in 68% yield from 1 and 5-bromo-2-methoxyphenylboronic acid 18 . rp - hplc tr = 24.3 min , gradient condition from 5% b to 100% b in 45 min , flow rate of 4 ml / min , = 240 nm . h nmr ( 300 mhz , cdcl3 ) : = 3.91 ( s , 3h ) , 7.00 ( d , j = 8.8 hz , 1h ) , 7.23 ( br s , 1h ) 7.63 ( d , j = 8.1 hz , 1h ) , 8.04 ( s , 1h ) . esi - ms , calculated for c12h8brcln4o 338.0 ; found m / z = 339.2 [ m + h ] . 8b was obtained following general procedure b as a yellow powder in 87% yield from 7d . rp - hplc tr = 18.9 min , gradient condition from 5% b to 100% b in 70 min , flow rate of 4 ml / min , = 240 nm . h nmr ( 300 mhz , cdcl3 ) : = 3.79 ( s , 3h ) , 3.87 ( s , 3h ) , 6.947.01 ( m , 1h ) , 7.24 ( br s , 1h ) , 8.02 ( s , 1h ) . esi - ms , calculated for c13h12brn5o 333.0 ; found m / z = 334.2 [ m + h ] . 8c was obtained following general procedure b as a yellow powder in 50% yield from 7d . rp - hplc tr = 22.9 min , gradient condition from 5% b to 100% b in 80 min , flow rate of 4 ml / min , = 240 nm . 
h nmr ( 600 mhz , cdcl3 ) : = 2.40 ( s , 3h ) , 4.01 ( s , 3h ) , 5.00 ( s , 2h ) , 7.03 ( d , j = 8.9 hz , 1h ) , 7.70 ( dd , j = 8.9 , 2.4 hz , 1h ) , 8.02 ( s , 1h ) , 8.19 ( s , 1h ) . c nmr ( 150 mhz , cdcl3 ) : = 27.3 , 52.5 , 57.1 , 114.3 , 118.8 , 125.1 , 126.9 , 136.3 , 138.4 , 143.3 , 148.2 , 154.3 , 156.7 , 158.4 , 199.2 . esi - ms , calculated for c15h14brn5o2 375.0 ; found m / z = 376.1 [ m + h ] . 8d was obtained following general procedure c as a white powder in 47% yield from 7b . rp - hplc tr = 13.9 min , gradient condition from 5% b to 100% b in 50 min , flow rate of 1 ml / min , = 240 nm . h nmr ( 300 mhz , dmso - d6 ) : = 3.93 ( s , 3h ) , 7.31 ( d , j = 8.9 hz , 1h ) , 7.40 ( dd , j = 8.9 , 2.4 hz , 1h ) , 7.89 ( br s , 1h ) , 8.54 ( s , 1h ) . esi - ms , calculated for c12h9fn4o2 260.1 ; found m / z = 261.1 [ m + h ] . 8e was obtained following general procedure c as a white powder in 63% yield from 7c . rp - hplc tr = 14.8 min , gradient condition from 5% b to 100% b in 50 min , flow rate of 4 ml / min , = 240 nm . h nmr ( 300 mhz , dmso - d6 ) : = 3.94 ( s , 3h ) , 7.26 ( d , j = 8.9 hz , 1h ) , 7.63 ( dd , j = 8.9 , 2.5 hz , 1h ) , 8.03 ( br s , 1h ) , 8.53 ( s , 1h ) . esi - ms , calculated for c12h9cln4o2 276.0 ; found m / z = 277.1 [ m + h ] . 8f was obtained following general procedure c as a white powder in 55% yield from 7d . rp - hplc tr = 15.8 min , gradient condition from 5% b to 100% b in 50 min , flow rate of 1 ml / min , = 240 nm . h nmr ( 300 mhz , dmso - d6 ) : = 3.97 ( s , 3h ) , 7.25 ( d , j = 9.0 hz , 1h ) , 7.76 ( dd , j = 8.9 , 2.4 hz , 1h ) , 8.14 ( br s , 1h ) , 8.53 ( s , 1h ) . esi - ms , calculated for c12h9brn4o2 320.0 ; found m / z = 321.1[m + h ] . 9a was obtained following general procedure a as a pale yellow powder in 70% yield from 6-chloro-8-methyl-9h - purine 12 and 5-bromo-2-methoxyphenylboronic acid 18 . rp - hplc tr = 21.7 min , gradient condition from 5% b to 100% b in 70 min , flow rate of 4 ml / min , = 240 nm . h nmr ( 300 mhz , dmso - d6 ) : = 2.69 ( s , 3h ) , 3.87 ( s , 3h ) , 7.20 ( d , j = 9.0 hz , 1h ) , 7.71 ( dd , j = 8.9 , 2.4 hz , 1h ) , 7.82 ( s , 1h ) , 8.95 ( s , 1h ) . esi - ms , calculated for c13h11brn4o 318.0 ; found m / z = 319.2 [ m + h ] . 9b was obtained following general procedure a as a pale yellow powder in 81% yield from 6-chloro-8-methyl-9h - purine 12 and 5-fluoro-2-methoxyphenylboronic acid 16 . rp - hplc tr = 15.6 min , gradient condition from 5% b to 100% b in 60 min , flow rate of 4 ml / min , = 240 nm . h nmr ( 300 mhz , meod ) : = 2.76 ( s , 3h ) , 3.90 ( s , 3h ) , 7.27 ( d , j = 9.0 hz , 1h ) , 7.39 ( br s , 1h ) , 7.67 ( dd , j = 8.9 , 2.4 hz , 1h ) , 9.04 ( s , 1h ) . c nmr ( 75 mhz , meod ) : = 14.1 , 56.2 , 113.5 , 117.9 , 118.3 , 119.2 , 126.9 , 147.7 , 151.3 , 154.5 , 155.9 , 158.2 , 159.2 . esi - ms , calculated for c13h11fn4o 258.1 ; found m / z = 259.1 [ m + h ] . 10 was obtained following general procedure a as a yellow powder in 82% yield from 2a and 1-thianthrenylboronic acid 25 . rp - hplc tr = 18.5 min , gradient condition from 5% b to 100% b in 40 min , flow rate of 4 ml / min , = 240 nm . h nmr ( 300 mhz , meod ) : = 7.227.37 ( m , 3h ) , 7.51 ( t , j = 7.5 hz , 2h ) , 7.65 ( d , j = 7.4 hz , 1h ) , 7.75 ( d , j = 7.7 hz , 1h ) , 8.37 ( s , 1h ) . esi - ms , calculated for c17h11n5s2 349.0 ; found m / z = 350.1 [ m + h ] . 11 was obtained following general procedure a as a pale yellow powder in 79% yield from 2a and 5-bromo-2,3-dihydrobenzo[b]furan-7-boronic acid 20 . 
RP-HPLC tR = 15.2 min, gradient condition from 5% B to 100% B in 40 min, flow rate of 4 mL/min, λ = 240 nm. 1H NMR (600 MHz, MeOD): δ = 3.35–3.42 (m, 2H), 4.87–4.93 (m, 2H), 7.65 (s, 1H), 8.50 (s, 1H), 8.57 (s, 1H). 13C NMR (150 MHz, MeOD): δ = 29.1, 74.3, 113.6, 121.2, 125.8, 131.5, 132.1, 134.0, 146.1, 154.9, 158.8, 159.9, 160.7. ESI-MS, calculated for C13H10BrN5O 331.0; found m/z = 332.2 [M + H]+. Thermal melting experiments were carried out using an MX3005P real-time PCR machine (Stratagene). Proteins were buffered in 10 mM HEPES, pH 7.5, 500 mM NaCl and assayed in a 96-well plate at a final concentration of 2 μM in a 20 μL volume. SYPRO Orange (Molecular Probes) was added as a fluorescence probe at a dilution of 1 in 1000. Excitation and emission filters for the SYPRO Orange dye were set to 465 and 590 nm, respectively. The temperature was raised in steps of 3 °C per minute from 25 to 96 °C, and fluorescence readings were taken at each interval. The temperature dependence of the fluorescence during the protein denaturation process was approximated by the two-state equation y(T) = yF + (yU − yF)/[1 + exp(ΔuG(T)/RT)], where ΔuG is the difference in unfolding free energy between the folded and unfolded state, R is the gas constant, and yF and yU are the fluorescence intensities of the probe in the presence of completely folded and unfolded protein, respectively. The baselines of the denatured and native states were approximated by a linear fit. The observed temperature shifts, ΔTm, were recorded as the difference between the transition midpoints of sample wells and reference wells containing protein without ligand in the same plate and were determined by nonlinear least-squares fit. Temperature shifts (ΔTm) for three independent measurements per protein/compound pair are summarized in Supporting Information Table S1. Isothermal titration calorimetry experiments were carried out on an ITC200 titration microcalorimeter from MicroCal, LLC (GE Healthcare) equipped with a washing module, with a cell volume of 0.2003 mL and a 40 μL microsyringe. Experiments were carried out at 15 °C while stirring at 1000 rpm, in ITC buffer (50 mM HEPES, pH 7.5 (at 25 °C), 150 mM NaCl). The microsyringe was loaded with a solution of the protein sample (300–740 μM protein for BRD9 and BRD4(1), respectively, in ITC buffer) and was carefully inserted into the calorimetric cell, which was filled with the ligand solution (0.2 mL, 20–25 μM in ITC buffer). The system was first allowed to equilibrate until the cell temperature reached 15 °C, and an additional delay of 60 s was applied. All titrations were conducted using an initial control injection of 0.3 μL followed by 38 identical injections of 1 μL with a duration of 2 s (per injection) and a spacing of 120 s between injections. The titration experiments were designed in such a fashion as to ensure complete saturation of the proteins before the final injection. The heat of dilution for the proteins was independent of their concentration and corresponded to the heat observed from the last injection, following saturation of ligand binding, thus facilitating the estimation of the baseline of each titration from the last injection.
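The ΔTm extraction described above for the thermal-shift assay amounts to fitting each melt curve and differencing the midpoints. The sketch below is a minimal illustration, not the analysis pipeline used in the work: it uses a Boltzmann sigmoid as a practical stand-in for the two-state expression quoted above, assumes constant rather than linear baselines for brevity, and runs on synthetic traces rather than measured data.

```python
# Sketch: estimate Tm from a thermal-melt fluorescence trace and report a delta-Tm
# versus a reference (no-ligand) well. Boltzmann sigmoid and constant baselines
# are simplifying assumptions made here for illustration.
import numpy as np
from scipy.optimize import curve_fit

def boltzmann(T, y_f, y_u, Tm, slope):
    """Sigmoidal melt curve: folded baseline y_f, unfolded baseline y_u, midpoint Tm."""
    return y_f + (y_u - y_f) / (1.0 + np.exp((Tm - T) / slope))

def fit_tm(temps, fluorescence):
    """Return the fitted transition midpoint (deg C) for one well."""
    p0 = [fluorescence.min(), fluorescence.max(), float(np.median(temps)), 1.0]
    popt, _ = curve_fit(boltzmann, temps, fluorescence, p0=p0, maxfev=10000)
    return popt[2]

# Illustrative use with synthetic data (not measured values from the paper):
temps = np.arange(25.0, 96.0, 3.0)                  # 3 deg C steps over 25-96 deg C
reference = boltzmann(temps, 1.0, 10.0, 48.0, 1.2)  # protein alone
sample = boltzmann(temps, 1.0, 10.0, 51.8, 1.2)     # protein plus ligand
delta_tm = fit_tm(temps, sample) - fit_tm(temps, reference)
print(f"delta Tm = {delta_tm:.1f} deg C")
```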
The collected data were corrected for protein heats of dilution (measured in separate experiments by titrating the proteins into ITC buffer) and deconvoluted using the MicroCal Origin software supplied with the instrument to yield enthalpies of binding (ΔH) and binding constants (Kb), in the same fashion as previously described in detail by Wiseman and co-workers. Thermodynamic parameters were calculated using the basic equation of thermodynamics (ΔG = ΔH − TΔS = −RT ln Kb, where ΔG, ΔH, and ΔS are the changes in free energy, enthalpy, and entropy of binding, respectively). In all cases a single binding site model was employed, as supplied with the MicroCal Origin software package. Dissociation constants and thermodynamic parameters are listed in Tables 1, 2, and 3. Titrations were carried out in 50 mM HEPES, pH 7.5 (at 25 °C), 150 mM NaCl, and at 15 °C while stirring at 1000 rpm. In both cases the protein was titrated into the ligand solution (reverse titration). Ligand efficiencies (LE) were also calculated where ΔG values were available (LE = ΔG/N, where N is the number of non-hydrogen atoms). HEK293 cells (8 10) were plated in each well of a 6-well plate and co-transfected with histone H3.3-HaloTag (NM_002107) and NanoLuc-BRD9 (Q9H8M2-BRD, amino acids 120–240) or NanoLuc-BRD4 full-length (O60885). Twenty hours post-transfection, 2 10 cells were trypsinized, washed with PBS, and exchanged into media containing phenol red-free DMEM and 10% FBS in the absence (control sample) or the presence (experimental sample) of 100 nM NanoBRET 618 fluorescent ligand (Promega). Cell density was adjusted to 2 10 cells/mL, and the cells were then replated in a 96-well white assay plate (Corning Costar no. 3917). At the time of replating, compound 7d or 11 was added at final concentrations spanning from 0.005 to 33 μM, and the plates were incubated for 18 h at 37 °C in the presence of 5% CO2. NanoBRET substrate (Promega) was added to both control and experimental samples at a final concentration of 10 μM. Readings were performed within 5 min using the CLARIOstar reader (BMG Labtech) equipped with 450/80 nm bandpass and 610 nm long-pass filters. A corrected BRET ratio was calculated, defined as the ratio of the emission at 610 nm/450 nm for experimental samples (i.e., those treated with NanoBRET fluorescent ligand) minus the ratio of the emission at 610 nm/450 nm for control samples (not treated with NanoBRET fluorescent ligand). BRET ratios are expressed as milliBRET units (mBU), where 1 mBU corresponds to the corrected BRET ratio multiplied by 1000. Twenty-four hours post-transfection, cells were labeled with 5 μM HaloTag TMR ligand (Promega) in complete medium (DMEM and 10% FBS) for 15 min at 37 °C and 5% CO2. The medium containing HaloTag-TMR ligand was then exchanged by washing twice with fresh complete medium. Cells were placed back at 37 °C and 5% CO2 for 30 min and then imaged. Images were acquired on an Olympus Fluoview FV500 confocal microscope (Olympus, Center Valley, PA, USA) containing a 37 °C and CO2 environmental chamber (Solent Scientific Ltd., Segensworth, U.K.
) using appropriate filter sets . following nanobret substrate addition and nanobret measurements , to the same wells an equal volume of celltiter - glo reagent ( promega ) was added and the plates were incubated for 30 min at room temperature . total luminescence was measured , and relative compound toxicity was determined by comparing the rlus ( relative light units ) of a sample containing dmso vehicle ( in the absence of compound 11 ) to the rlus of the samples containing 0.00533 m compound 11 . aliquots of the purified proteins were set up for crystallization using a mosquito crystallization robot ( ttp labtech , royston , u.k . ) . coarse screens were typically set up onto greiner 3-well plates using three different drop ratios of precipitant to protein per condition ( 100 + 50 nl , 75 + 75 nl , and 50 + 100 nl ) . all crystallizations were carried out using the sitting drop vapor diffusion method at 4 c . brd9 crystals with 7d were grown by mixing 150 nl of the protein ( 17.9 mg / ml and 2 mm final ligand concentration ) with 150 nl of reservoir solution containing 0.20 m sodium bromate , 0.1 m bt - propane , ph 7.5 , 20% peg3350 , and 10% ethylene glycol . brd4(1 ) crystals with 7d were grown by mixing 200 nl of protein ( 12 mg / ml and 2 mm final ligand concentration ) with 100 nl of reservoir solution containing 0.2 m sodium nitrate , 20% peg3350 , and 10% ethylene glycol . brd4(1 ) crystals with 11 were grown by mixing 150 nl of protein ( 8.25 mg / ml and 2 mm final ligand concentration ) with 150 nl of reservoir solution containing 0.1 m bis - tris - propane , ph 8.5 , 20% peg3350 , and 10% ethylene glycol . in all cases diffraction quality crystals grew within a few days . all crystals were cryoprotected using the well solution supplemented with additional ethylene glycol and were flash frozen in liquid nitrogen . data were collected in - house on a rigaku fre rotating anode system equipped with a raxis - iv detector at 1.52 ( brd9/7d ) or on a bruker microstar equipped with an apex ii detector at 1.54 ( brd4(1)/7d and brd4(1)/11 ) . indexing and integration were carried out using saint ( version 8.3 , bruker axs inc . , 2013 ) or mosflm or xds , and scaling was performed with scala or xprep ( version 2008/2 , bruker axs inc . ) . initial phases were calculated by molecular replacement with phaser using the known models of brd9 and brd4(1 ) ( pdb codes 3hme and 2oss , respectively ) . the models and structure factors have been deposited in the pdb with accession codes 4xy8 ( brd9/7d ) , 4xy9 ( brd4(1)/7d ) , and 4xya ( brd4(1)/11 ) . titrations were carried out in 50 mm hepes , ph 7.5 ( at 25 c ) , 150 mm nacl , and 15 c while stirring at 1000 rpm . in both cases the protein was titrated into the ligand solution ( reverse titration ) . ligand efficiencies ( le ) have also been calculated where g values were available ( le = g / n where n is the number of non - hydrogen atoms ) . in order to assess the binding of purine fragments on human brds , we first performed molecular docking experiments employing the previously determined crystal structure of the complex of brd4(1 ) with a 5-methyltriazolopyrimidine ligand ( pdb code 4men(30 ) ) . to this end we investigated binding of purine fragments 1 , 2a , and 2b ( figure 1b ) , seeking to determine acetyllysine competitive binding modes within the brd cavity with promising predicted binding affinities , ideally establishing favorable interactions with residues implicated in acetyllysine peptide recognition . 
in order to account for putative conformational changes of the receptor s binding site cavity upon ligand binding , we employed the induced fit docking protocol ( as implemented in the schrdinger software package ) . molecular modeling resulted in good accommodation of the investigated purine fragments within the kac binding site of brd4(1 ) , mainly packing between the za - loop hydrophobic residues ( val87 , leu92 , leu94 ) and ile146 from helix c , in a groove that is capped on one end by tyr97 and tyr139 and trp81 on the other end ( figure 1c ) . we observed different conformations of compound 1 within the brd4(1 ) cavity , with the two chloro functions pointing to the top of the pocket ( figure 1c ) or adopting a kac mimetic pose with one chlorine inserting deep into the pocket ( figure 1d ) . compound 2a was also found in two different states in our calculations , either orienting its primary amine function away from the conserved asparagine ( asn140 , figure 1e ) or directly engaging this residue while orienting its 6-br substituent toward the za - loop ( supporting information figure 1a , b ) . in all cases the ligand poses resulted in promising predicted binding energy values ( 9.13 kcal / mol for 1 , 9.95 kcal / mol for 2a , and 9.13 kcal / mol for 2b ) . we identified in our calculations , in the case of compound 2a , poses that exposed the halogen in position 6 so that the purine core scaffold may be further optimized based on its topology within the brd binding site . the opposite was true in the case of the 2-cl substituent or the methyl substituent at n9 ( 1 or 2b , respectively ) , which resulted in steric clashes and topologies that would not allow for subsequent optimizations . to better understand the binding mode of the purine scaffold to brds , given the multiple docking conformations observed , we decided to systematically probe the topology of these fragments employing synthetic chemistry and structure activity relationships . we purchased 2,6-dichloro-9h - purine 1 and 2-amino-6-bromo-9h - purine 2a and synthesized compound 2b employing a tbaf - assisted n-9 methylation on the purine ring of 2a . to confirm binding of these fragments to human bromodomains , we employed a thermal shift assay ( tm assay ) which we have previously used successfully with fragments and various human bromodomains . typically we perform the assay using 100 m compounds in the case of fragments and very weak ligands , but in the case of the purine analogues tested we were surprised to see binding at 10 m compound to bet brds and in particular to brd4(1 ) ; however , the optimized cdk inhibitor olomoucine did not show any significant stabilization toward any proteins in the panel ( figure 1f , supporting information table s1 ) . encouraged by this result , we tested these compounds at the same concentration against five other diverse brds in an effort to cover most of the human brd phylogenetic tree ( supporting information figure 1c ) and found that despite their structural diversity , the brds of crebbp , pb1(5 ) , and brd9 exhibited weak binding . we were particularly intrigued by the interaction of 2a with brd9 , since to our knowledge only a small subset of compounds has been previously shown to bind to this module . to further evaluate binding of 2a onto brd9 , we performed docking experiments , using the recently crystallized apo structure of the protein ( pdb code 3hme(15 ) ) accounting for the flexibility of key residues after ligand binding . 
similar to our brd4(1 ) docking experiment , we obtained two main binding poses for this compound , with the most energetically favored one ( predicted binding affinity of 9.06 kcal / mol ) exhibiting an extended hydrogen bond network with the conserved asn100 and interactions with the za - loop tyr57 and tyr106 from helix c , while the 6-br substituent was found oriented toward the external part of the binding site toward the za - loop phe47 , suggesting that modifications on this position for subsequent optimization would not be tolerated without affecting the ligand orientation in the acetyllysine cavity ( supporting information figure 1d ) . conversely , we observed subtle rearrangement of the side chains of phe44 and tyr106 accompanied by almost a 90 rotation of the side chain of phe47 , suggesting an induced - fit binding mode . we also observed an alternative binding pose ( predicted binding energy of 8.62 kcal / mol ) in which the ligand maintained the hydrogen bond to asn100 , albeit from its primary amine function which inserted toward the conserved asparagine , as well as the interaction with tyr106 from helix c , while orienting the modifiable 6-br substituent toward the top of the brd cavity ( supporting information figure 1e ) , offering a promising vector for subsequent modifications . using both possible orientations as starting points , we decided to further interrogate how the purine core scaffold binds to this brd . encouraged by our initial findings , we synthesized a number of 2-amino-9h - purine analogues and tested their ability to interact with human brds , primarily of the bet subfamily ( subfamily ii ) , while systematically testing binding to representative brds from other structural subfamilies ( family i , pcaf ; family iii , crebbp ; family iv , brd9 ; family v , baz2b ; family viii , pb1(5 ) ; see supporting information figure 1c ) in order to probe structurally diverse proteins against this core purine scaffold . miyaura cross - coupling reaction to synthesize 2-amino-6-aryl-9h - purine derivatives , yielding highly c-6 decorated 9h - purines in a one - step procedure and performed a subsequent tbaf - assisted n-9 alkylation to access n-9 substituted analogues ( scheme 1 and chart 2 ) . we accomplished the coupling step under microwave irradiation with pd(oac)2 and triphenylphosphine-3,3,3-trisulfonic acid trisodium salt as the catalytic system , with cs2co3 as base , in a water this approach allowed synthesizing 2-amino-6-aryl-9h - purines with very short reaction times ( 515 min ) at high yields and purity . reagents and conditions : ( a ) pd(oac)2/p(c6h4so3na)3 , cs2co3 , mecn / h2o ( 1:2 ) , microwaves , 150 c , 515 min ; ( b ) ch3i or ch3coch2cl , tbaf , thf , rt , 10 min ; ( c ) 50% h2so4 , nano2 , 10 c , 2 h , then 50 c , 1 h. first we introduced a phenyl substituent at position 6 of the core purine scaffold ( compound 3a ) only to find that brd2(1 ) , brd4(1 ) , and pcaf were stabilized in thermal melt assays by this compound ( figure 2a , b ) . we confirmed binding to brd4(1 ) by isothermal titration calorimetry and measured a dissociation constant of 11.99 m ( figure 2c and table 1 ) . the interaction of 3a with brd4(1 ) was mainly driven by enthalpic contributions ( h = 9.45 kcal / mol ) , opposed by negative entropy ( ts = 2.97 kcal / mol ) . we were also intrigued by the lack of affinity toward brd9 in the tm assay , since our initial fragment hit , compound 2a , had exhibited a thermal shift of 1.6 c toward that domain . 
We therefore tested this scaffold by isothermal titration calorimetry against BRD9 and measured a weak dissociation constant of 8.5 μM (Figure 2d, Table 2), suggesting that our primary assay (Tm) may not be very robust in the case of BRD9 when applied to weak ligands. This is important, since the first bromodomain of BRD4 has been repeatedly shown to bind weak compounds in this assay, while it has been noted that other BRDs do not always exhibit high temperature shifts even though they bind several compounds very potently. Additionally, in the case of BET proteins it has been demonstrated that thermal melt data correlate well with in vitro dissociation constants. Importantly, similar to the BRD4(1)/3a interaction, binding of the compound to BRD9 was mainly driven by enthalpic contributions (ΔH = −9.11 kcal/mol) opposed by negative entropy (TΔS = −2.42 kcal/mol). The 9H-purine core scaffold was iteratively decorated and tested for binding to human bromodomains. Compounds highlighted with a colored star were further validated by isothermal titration calorimetry as shown in (c)/(d). (c) Isothermal titration calorimetry validation of key compounds binding to BRD4(1) showing raw injection heats for titrations of protein into compound. The inset shows the normalized binding enthalpies corrected for the heat of protein dilution as a function of binding site saturation (symbols as indicated in the figure). Solid lines represent a nonlinear least-squares fit using a single-site binding model. (d) Compounds bearing ortho,meta substitutions gain potency toward BRD9 as demonstrated by ITC experiments. All ITC titrations were carried out in 50 mM HEPES, pH 7.5 (at 25 °C), 150 mM NaCl, and at 15 °C while stirring at 1000 rpm. In order to better explore the structure–activity relationship of 6-phenyl-substituted 9H-purines, we synthesized a range of compounds carrying different substitution patterns, including para substitutions (compounds 3b–h), meta,para substitutions (compounds 4a–e), meta,meta substitutions (compounds 5a,b), ortho,ortho substitutions (compounds 6a–c), and ortho,meta substitutions (compounds 7a–e) (Figure 2a and Chart 2). We tested binding of these analogues to the eight BET BRDs, as well as to the five more diverse BRDs previously mentioned, employing the same thermal shift assay as above. Interestingly, compounds carrying an additional methyl modification at N9 (compounds 3c, 3e, 4e, 5b) exhibited very weak or no binding toward most BRDs while showing small thermal shifts (1.0–1.3 °C) for the bromodomain of CREBBP. Para substitution of the 6-phenyl-9H-purines (compounds 3b–h) resulted in lower stabilization of all BRDs; however, compound 3f showed binding toward all BRDs without any hint of selectivity toward BRD9. Meta substitutions (compounds 4a–e) were also very weak across BRDs, with no affinity for BRD9, suggesting that modifications along that vector were not tolerated. Interestingly, meta,meta substitution (compound 5a) resulted in binding to most BRDs, albeit weak, with ΔTm values between 1.1 and 1.8 °C. As expected, this binding event was abolished when the compound was methylated at N9 (compound 5b). Binding was not improved with ortho,ortho substitutions of the 6-phenyl-9H-purine scaffold (compounds 6a–c) (Figure 2b and Supporting Information Table S1).
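The thermodynamic bookkeeping behind these numbers follows directly from the relations given in the ITC section (ΔG = ΔH − TΔS = −RT ln Kb, LE = ΔG/N). The sketch below is a minimal illustration rather than the analysis script used in the work: it recomputes ΔG, TΔS, and a ligand efficiency for the BRD9/3a measurement quoted above (Kd = 8.5 μM, ΔH = −9.11 kcal/mol at 15 °C); the negative sign on ΔH and the heavy-atom count of 16 assumed for 2-amino-6-phenyl-9H-purine (compound 3a) are assumptions made for this illustration.

```python
# Sketch: reproduce the ITC-derived quantities for one measurement (illustrative only).
# Values: BRD9 / compound 3a, Kd = 8.5 uM and Delta_H = -9.11 kcal/mol at 15 deg C
# (sign of Delta_H and the heavy-atom count below are assumptions for illustration).
import math

R_KCAL = 1.987e-3  # gas constant, kcal mol^-1 K^-1

def itc_summary(kd_molar, dH_kcal, temp_c, n_heavy):
    """Return (Delta_G, T*Delta_S, ligand efficiency) in kcal/mol.

    Delta_G = -RT ln Kb = RT ln Kd; T*Delta_S = Delta_H - Delta_G;
    LE is quoted here as |Delta_G| per non-hydrogen atom.
    """
    T = temp_c + 273.15
    dG = R_KCAL * T * math.log(kd_molar)
    TdS = dH_kcal - dG
    return dG, TdS, -dG / n_heavy

dG, TdS, le = itc_summary(8.5e-6, -9.11, 15.0, n_heavy=16)
print(f"dG = {dG:.2f} kcal/mol, TdS = {TdS:.2f} kcal/mol, LE = {le:.2f}")
# dG comes out near -6.7 kcal/mol and TdS near -2.4 kcal/mol, consistent with
# the enthalpy-driven, entropically opposed binding described above.
```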
since methyl substitution at n9 could not be tolerated in brd4(1 ) or brd9 binding , we concluded at this stage that the five - membered ring is probably not oriented toward the top of the brd cavity but points toward the bottom of the acetyllysine binding cavity , as predicted in our docking model ( supporting information figure 1a , b ) , with the 6-substituted position toward the front of the pocket in order to accommodate the larger phenyl - substituted functions . we decided to further test combinations in ortho , meta substituted compounds by first maintaining a methoxy functionality at the ortho position while changing the steric bulk at the meta position ( compounds 7a d ) . 2-methoxyphenyl substitution ( compound 7a ) yielded thermal shifts between 1.4 and 2.5 c for bet brds as well as 1.5 c for the brd of crebbp while significantly stabilizing brd9 compared to all previous compounds tested ( 2.9 c ) . we confirmed this binding by itc and measured a dissociation constant of 641 nm against brd9 ( figure 2d , table 2 ) . notably the change in affinity was accompanied by negative entropic contributions ( ts = 4.55 kcal / mol ) . intrigued by this step change in affinity , we tested halide analogues at the meta position while retaining the ortho - methoxy functionality ( compounds 7b d ) and observed improved thermal shifts for all compounds tested against brd9 , while bet affinity seemed to be variable . notably , we observed tm values following the order h < f < cl > br , suggesting that steric bulk and charge at the meta position is important , with compound 7c showing a tm of 3.8 c against brd9 . we validated binding using itc as an orthogonal method and measured dissociation constants of 351 , 297 , and 397 nm against brd9 following the same ranking as the thermal shift assay for compounds 7b , 7c , and 7d , respectively ( figure 2d , table 2 ) . importantly , we observed a gradual increase in entropic contributions with increasing bulk of the 5-halide-2-methoxyphenyl substitution , with compound 7a exhibiting the lowest entropic term ( ts = 4.55 kcal / mol ) and compound 7d the highest ( ts = 1.57 kcal / mol ) . interestingly , brd4(1 ) , which exhibited tm values of 1.1 and 3.2 c for compounds 7c and 7d , was found to bind weakly to these scaffolds by itc , and the dissociation constants that we measured were 2.04 and 4.7 m , respectively , following again the same trend seen in the thermal melt assay ( figure 2c , table 1 ) . affinity for brd9 was lost when we synthesized compound 7e which carried a bromine function at the meta position and an ethoxy substituent at the ortho position , suggesting that the longer and bulkier group probably affects rotation of the 6-aryl ring with respect to the core 9h - purine-2-amine fold . in order to verify the mode of interaction of the 9h - purine core within the brd acetyllysine binding cavity , we crystallized and determined the complexes of compound 7d with the bromodomain of brd9 and the first bromodomain of brd4 . the ligand was found in both cases to occupy the acetyllysine recognition pocket ( figure 3a and supporting information figure 2a ) and was clearly defined in the electron density map ( figure 3b and supporting information figure 2b ) . 
compound 7d directly engaged , via the primary amine function as well as the nitrogen at position 3 , the conserved asparagine in both structures ( asn100 in brd9 ; asn140 in brd4(1 ) ) and was additionally held in place via a number of hydrogen bonds to the protein backbone and the network of conserved water molecules previously described ( figure 3c and supporting information figure 2c ) . notably , compound 7d initiated hydrogen bonds to a water molecule located on top of the brd cavity , which in turn linked the ligand to the top of the za - loop , either to the carbonyl of ile53 ( in the case of brd9 ) or asn93 ( in the case of brd4(1 ) ) . the mode of interaction is similar to that observed before for triazolothienodiazepine complexes such as ( + ) -jq1 , with the five - membered ring of the purine system acting as the acetyllysine mimetic moiety , superimposing well with the methyltriazolo ring of ( + ) -jq1 ( supporting information figure 2d ) while the bromomethoxyphenyl substituent of 7d stacks well between the za - loop leu92 and the za - channel s phe81/pro82 of brd4(1 ) ( supporting information figure 2e ) . consistent with our induced fit computational models of binding of 2a to brd9 , superimposition of the brd9/7d complex to the apo structure of brd9 ( pdb code 3hme ) revealed rotations of the side chains of phe47 and phe44 while the top of the za loop collapsed toward the ligand ( figure 3d , e ) , resulting in an unprecedented induced fit pocket of brd9 . particularly the side chain of phe47 rotated 120 , thus blocking the za - channel of the protein resulting in a steric bulk around compound 7d . on the basis of our thermodynamic measurements showing high entropic contributions to binding ( table 2 ) , we concluded that all compounds in this subseries ( 7a d ) should be inducing a rearrangement of the binding cavity residues of brd9 upon binding to the acetyllysine site . intriguingly as the substituent size increases , the affinity also increases as exemplified by the determined dissociation constants , with the chloro analogue ( 7c ) exhibiting a dissociation constant of 297 nm against brd9 while the bromo analogue ( 7d ) is slightly weaker ( 397 nm against brd9 ; 4.65 m against brd4(1 ) ) . however , the structural rearrangement that we observed was unique to brd9 ; the structure of compound 7d in complex with the first bromodomain of brd4 did not reveal any rearrangements of the acetyllysine binding cavity as the inhibitor packed between trp81 and leu92 of the za - loop ( supporting information figure 2d , e ) . it is tempting to speculate that the observed increase in entropic contributions within the series of compounds 7a d measured by itc , following the increase in the halide substituent size , is associated with the side chain rearrangement observed by docking as well as the crystal structure of compound 7d with the bromodomain of brd9 . however , entropy changes may be due to a number of factors , such as release of water molecules , therefore making further interpretation of the small data set obtained very difficult . ( b ) 2fcfo map of 7d in complex with brd9 contoured at 2. ( c ) 7d occupies the acetyllysine binding cavity of the bromodomain module initiating direct interactions with the conserved asparagine ( n100 ) and packing onto the za - loop hydrophobic backbone ( f44 , f47 , i53 , a54 , p56 , y57 ) . the conserved water network is preserved in the structure , and the ligand initiates hydrogen bonds that further stabilize binding . 
notably the ligand engages a water molecule located on the top of the brd cavity in a network of hydrogen bonds to the backbone of i53 . ( d ) binding of 7d to brd9 results in a distinct rearrangement of the brd fold . residues on both sides of the ligand shift toward it , forming an induced - fit pocket with f47 rotating by 120° and capping the channel found in the apo brd9 structure at the front of the za - loop . ( e ) surface view of the side chain rearrangement described in ( c ) highlighting the induced pocket upon binding of 7d to brd9 . the model and structure factors of the brd9/7d complex shown have been deposited to the pdb with accession code 4xy8 . we next questioned whether the primary amine function at position 2 of the 9h - purine core scaffold is necessary for binding to bromodomains . first we substituted the amine with a chlorine group ( compound 8a , figure 4a ) while retaining the 6-(5-bromo-2-methoxyphenyl ) substitution , resulting in loss of affinity toward all brds in our panel ( figure 4b ) . in the case of brd9 we confirmed this observation by performing an isothermal titration calorimetry measurement which yielded a kd of 7.9 μm ( figure 4c ) . as with compounds from previous series , methyl substitution at position 9 completely abolished bromodomain binding ( compound 8b , figure 4a ) , measured by both thermal melt ( figure 4b ) and itc assays in the case of brd9 ( figure 4c ) ; larger substituents ( compound 8c ) could not be tolerated , suggesting that the core scaffold retained its pose within the bromodomain binding cavity . hydroxy substitution at position 2 while retaining a 6-(5-halide-2-methoxyphenyl ) substituent ( compounds 8d - f ) had variable effects on the 9h - purine affinity toward brds . specifically , fluoro ( 8d ) and bromo ( 8f ) substituted compounds lost affinity across the panel , while the chloro - substituted compound ( 8e ) promiscuously bound to most bromodomains in the tm assay , albeit weaker than its primary amine analogue 7c ( figure 4a , b ) , thus suggesting that the interactions initiated by the hydroxyl group with the conserved asparagine ( asn100 in brd9 ; asn140 in brd4(1 ) ) are not favored over the primary amine . ( a ) compounds designed to probe the acetyllysine mimetic character of the purine scaffold , disrupting interactions ( n9-methyl analogues ) or reaching deeper into the acetyllysine cavity ( 8-methyl analogues ) . compounds highlighted with a colored star were further validated as shown in ( c)/(d ) . ( c ) substitution of the primary amine group to a hydroxyl ( compound 8a ) impairs binding toward brd9 as demonstrated by itc experiments , while cyclization of the aromatic substituent results in enhanced potency ( compound 11 ) . the inset shows the normalized binding enthalpies corrected for the heat of protein dilution as a function of binding site saturation ( symbols as indicated in the figure ) . solid lines represent a nonlinear least - squares fit using a single - site binding model . ( d ) isothermal titration calorimetry validation of compound 11 binding to brd4(1 ) showing raw injection heats for titrations of protein into compound . all itc titrations were carried out in 50 mm hepes , ph 7.5 ( at 25 °c ) , 150 mm nacl , and at 15 °c while stirring at 1000 rpm . the 9h - purine crystal structure complexes that we obtained with the bromodomains of brd9 and brd4(1 ) highlighted the failure of the ligand to insert deep inside the bromodomain cavity , thus not replacing the conserved network of water molecules . 
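The figure legend above refers to a nonlinear least-squares fit with a "single-site binding model". For completeness, the commonly used one-site ITC isotherm (the Wiseman/Origin formulation) is reproduced below; this is the standard textbook expression and is only assumed to correspond to the fit actually used here.

```latex
% One-site ITC isotherm (standard form, assumed): Q is the cumulative heat after an
% injection, [M]_t and [L]_t the total macromolecule and ligand concentrations in the
% cell of volume V_0, n the stoichiometry, K_a = 1/K_d, and \Delta H the molar enthalpy.
Q \;=\; \frac{n[M]_t\,\Delta H\,V_0}{2}\left[\,1+\frac{[L]_t}{n[M]_t}+\frac{1}{nK_a[M]_t}
-\sqrt{\left(1+\frac{[L]_t}{n[M]_t}+\frac{1}{nK_a[M]_t}\right)^{2}-\frac{4[L]_t}{n[M]_t}}\,\right]
% The heat of injection i is \Delta Q(i) = Q(i) - Q(i-1) (plus a small displaced-volume
% correction); n, K_a and \Delta H are obtained by nonlinear least squares against the
% integrated, dilution-corrected injection heats.
```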
in an attempt to reach deeper within the cavity , we synthesized compounds 9a and 9b , introducing a methyl group at position 8 of the 9h - purine core . the low solubility of compound 9a did not allow for any measurements , but compound 9b exhibited very low affinity for all bromodomains in our panel with the exception of brdt(2 ) ( tm of 2.9 c ) , suggesting that substitution at this position of the 9h - purine core in the absence of a 2-amine function can not be tolerated and these compounds could not displace the conserved water molecule network deep inside the bromodomain pocket . our synthetic efforts to insert a fluorine atom at position 8 through a c-8 electrophilic fluorination on the bis(tetrahydropyran-2-yl)-protected derivative of 2a , following a reported metalation fluorination reaction with n - fluorobenzenesulfonimide , were also unsuccessful ; we observed formation of the corresponding 8-phenylsulfonyl product instead of the 8-fluoro derivative , similar to reported work done by roy and co - workers , even under heterogeneous conditions . our structural insight suggested that augmentation of the 6-(5-halide-2-methoxyphenyl ) substituent would take a toll on brd9 , as it would force the structure toward an apo - like conformation , sterically pushing phe47 toward the apo - conformation of brd9 , while it should not affect binding to bet bromodomains which contained an open channel between trp81 and leu92 ( figure 3e and supporting information figure 2e ) . to test this hypothesis , we synthesized compound 10 ( figure 4a ) and tested its ability to bind to human brds . unfortunately its bright yellow color and low solubility did not allow for further validation . we therefore decided to increase the size of the substituent found in 7d by cyclizing the 2-methoxyphenyl ring into a 2,3-dihydrobenzofuran-7-yl while retaining the bromo function at position 5 . we tested the resulting compound 11 in the tm assay against the panel of bromodomains and observed increased temperature shifts among bet brds ( between 1.7 and 5.4 c ) together with a remarkable increase in the case of brd9 ( 6.5 c ) . to verify this observation , we performed isothermal titration calorimetry measurements and obtained a dissociation constant of 278 nm for brd9 ( figure 4c ) while brd4(1 ) binding resulted in a much weaker affinity ( 1.4 m ) ( figure 4d ) . notably the entropic contribution of compound 11 binding to brd9 was comparable to compound 7d for both brd9 and brd4(1 ) , and thus we speculated that a similar rearrangement should be taking place upon binding to the bromodomain of brd9 . in order to test our hypothesis that compound 11 binding would also result in structural rearrangement of the bromodomain binding cavity of brd9 , in a similar way to compound 7d , while not affecting the cavity of brd4(1 ) , we attempted to determine the crystal structures of this compound in complex with these two bromodomains . compound 11 readily crystallized with brd4(1 ) and was found to occupy the acetyllysine binding cavity of the bromodomain ( figure 5a ) and was very well - defined in the density ( figure 5b ) despite its weaker binding affinity of 1.37 m toward brd4(1 ) ( table 1 ) . the ligand was found to directly interact with the conserved asparagine ( asn140 ) as well as with the conserved network of water molecules while packing between the za - channel tryptophan ( trp81 ) and the za - loop leucine ( leu92 ) ( figure 5c ) . 
our efforts however to obtain a brd9/11 complex did not yield diffraction - quality crystals suitable for structure determination , and as such , we employed computational methods to account for its binding to brd9 . rigid docking into the brd9/7d complex structure resulted in a conformation similar to that observed with compound 7d , with the ligand engaging the conserved asparagine via its primary amine function and the 6-aryl - substituted ring packing between the za - loop ile53 and phe44 ( supporting information figure 3a ) . we then performed induced - fit docking employing the algorithms previously described for purine fragments , using the complex of brd9/7d as a starting point , and obtained a pose whereby the 2-amine function inverted and inserted in the brd pocket , without any changes in the surrounding side chains of phe44 , phe47 , ile53 , and tyr106 ( supporting information figure 3b ) . intrigued by this finding , we performed another induced fit docking experiment , starting with the brd9 apo structure and allowing residues to freely move in the presence of the ligand . we observed a similar set of side chain rearrangements within the brd9 acetyllysine cavity , including a rotation of phe47 resulting in capping of the binding groove , accompanied by repositioning of phe44 from helix c and ile53 from the za - loop ( supporting information figure 3c ) . we therefore concluded that the 2-amine-9h - purine scaffolds that we developed can induce a closed pocket within the bromodomain of brd9 resulting in tight binding , without at the same time exhibiting high affinity for brd4(1 ) or other bet family members . indeed , we determined the dissociation constants for binding to bet domains employing itc and found that both compounds 7d and 11 exhibited low micromolar affinities ( 1.4 - 4.6 μm ) correlating well with the higher thermal shifts observed ( supporting information table s1 ) , while retaining higher affinity against the bromodomain of brd9 ( supporting information figure 4 and tables 2 and 3 ) . ( a ) overview of the complex of compound 11 with the first bromodomain of brd4 . the ligand retained the acetyllysine mimetic pose that was observed in the case of 7d . ( b ) compound 11 fo - fc omit map from the brd4(1)/11 complex contoured at 2σ . ( c ) detail of compound 11 binding to brd4(1 ) demonstrating the acetyllysine mimetic binding mode , initiating interactions with the conserved asparagine ( n140 ) , and packing between the za - loop l92 and the za - channel w81 while retaining the network of conserved water interactions . the model and structure factors of the brd4(1)/11 complex shown in panels a , b , and c have been deposited to the pdb with accession code 4xya . ( d ) titration of compounds 7d and 11 into hek293 cells transfected with nanoluc - fused bromodomain of brd9 and halo - tagged histone h3.3 . the ligands disrupt the histone / brd9 interaction with an apparent ic50 of 3.5 μm ( 7d ) and 480 nm ( 11 ) , resulting in loss of signal due to separation of the brd9-histone complex . ( e ) titration of compounds 7d and 11 into hek293 cells transfected with nanoluc - fused full length brd4 ( uniprot code o60885 ) and halo - tagged histone h3.3 . although the ligands gradually disrupt the brd4-fl / h3.3 interaction , they fail to elicit the same effect as in the case of brd9 in the concentration range tested . 
having obtained a small window of selectivity over brd4 , we wanted to verify that the 2-amine-9h - purine scaffolds that we developed are in fact active in a cellular environment and can perturb the interaction of brd9 with acetylated histones . previous studies have established that potent compounds that competitively bind to the acetyllysine binding cavity of bet proteins can displace the entire protein from chromatin in fluorescence recovery after photobleaching ( frap ) assays , while larger brd - containing proteins that contain in addition multiple reader domains are harder to fully displace ; in the case of crebbp this was solved by creating an artificial gfp - fusion construct that contained three brd modules of crebbp and a nuclear localization signal . brd9 is part of the large swi / snf complex , and its bromodomain has been shown to bind to acetylated histone h3 peptides . we speculated that 9h - purines should be able to competitively displace the bromodomain of brd9 from chromatin and constructed a bioluminescence resonance energy transfer ( bret ) system that combined nanoluc luciferase fusions of the brd9 bromodomain ( supporting information figure 5a ) or full length brd4 and halo - tagged histone h3.3 as bret pairs . this bret approach monitors protein - ligand interactions in a cellular system and has recently been used to determine cellular ic50 values for the inhibition of histone - bromodomain interactions . first we established by fluorescence microscopy that halo - tagged histone h3.3 is readily incorporated into chromatin ( supporting information figure 5b ) . we then performed dose - response experiments and demonstrated that the nanoluc - brd9 bromodomain was readily displaced from chromatin by compounds 7d and 11 with cellular ic50 values of 3.5 ± 0.11 μm and 477 ± 194 nm , respectively ( figure 5d ) , in agreement with the in vitro affinities determined by itc ( table 2 ) . in contrast , full - length brd4 was not completely displaced in this assay up to concentrations of 33 μm of both compounds ( figure 5e ) , suggesting that the compounds retained the in vitro selectivity toward brd9 in this cellular system . we further examined the toxicity of compound 11 toward hek293 cells , using cell viability in the presence of the compound in the concentration regime of our bret experiments as a readout , and did not observe any cytotoxicity ( supporting information figure 5c ) , suggesting that this compound can be used in cellular systems to target brd9/kac interactions without affecting brd4/kac interactions or causing any cytotoxic responses . we have described here a structure - guided approach to identify inhibitors for diverse brds starting from small fragment - like 9h - purine scaffolds . through structure - activity relationships we established compounds 7d and 11 and characterized them as nanomolar binders for the brd of brd9 while retaining weak in vitro activity against the first bromodomain of brd4 . these compounds are not cytotoxic at concentrations up to 33 μm in a hek293 system and can competitively displace the brd9 bromodomain from chromatin while failing to displace full length human brd4 at the same concentration range in bioluminescence proximity assays , despite the fact that the in vitro affinity difference for these two proteins is not large . 
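As an illustrative sketch (not the authors' analysis pipeline), the apparent cellular IC50 values quoted above are typically obtained by fitting a four-parameter logistic model to the normalized BRET ratios; the concentrations and readings below are invented placeholders, and `four_pl` is a hypothetical helper name.

```python
# Minimal sketch of a dose-response fit for normalized BRET ratios (placeholder data;
# not the authors' code). Requires numpy and scipy.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    """Four-parameter logistic: BRET signal as a function of compound concentration (M)."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

# hypothetical 3-fold titration of a BRD9 ligand (molar units) and normalized BRET ratios
conc = np.array([33e-6, 11e-6, 3.7e-6, 1.2e-6, 4.1e-7, 1.4e-7, 4.6e-8, 1.5e-8])
bret = np.array([0.11, 0.14, 0.21, 0.37, 0.60, 0.80, 0.92, 0.97])

popt, pcov = curve_fit(four_pl, conc, bret, p0=[0.1, 1.0, 5e-7, 1.0])
ic50, ic50_err = popt[2], np.sqrt(np.diag(pcov))[2]
print(f"apparent IC50 = {ic50 * 1e9:.0f} +/- {ic50_err * 1e9:.0f} nM")
```

With data of this shape the fit returns an IC50 in the high-nanomolar range, of the same order as the 477 nM value reported for compound 11.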
importantly , compound 7d induces structural rearrangement on the acetyllysine binding cavity of brd9 resulting in an unprecedented cavity shape which accommodates this scaffold , explaining the higher affinity toward this protein , while docking studies suggest that compound 11 elicits the same type of structural rearrangement . the 9h - purine scaffold therefore offers a simple template that can be used to generate initial tools that will no doubt prove useful in interrogating the biology of bromodomains beyond the bet subfamily , such as brd9 , which have not attracted attention until now , avoiding cross - reactivity with bet bromodomains .
the 2-amine-9h - purine scaffold was identified as a weak bromodomain template and was developed via iterative structure based design into a potent nanomolar ligand for the bromodomain of human brd9 with small residual micromolar affinity toward the bromodomain of brd4 . binding of the lead compound 11 to the bromodomain of brd9 results in an unprecedented rearrangement of residues forming the acetyllysine recognition site , affecting plasticity of the protein in an induced - fit pocket . the compound does not exhibit any cytotoxic effect in hek293 cells and displaces the brd9 bromodomain from chromatin in bioluminescence proximity assays without affecting the brd4/histone complex . the 2-amine-9h - purine scaffold represents a novel template that can be further modified to yield highly potent and selective tool compounds to interrogate the biological role of brd9 in diverse cellular systems .
SECTION 1. SHORT TITLE. This Act may be cited as the ``Poison Control Center Enhancement and Awareness Act''. SEC. 2. FINDINGS. Congress makes the following findings: (1) Each year more than 2,000,000 poisonings are reported to poison control centers throughout the United States. More than 90 percent of these poisonings happen in the home. Fifty-three percent of poisoning victims are children younger than 6 years of age. (2) Poison control centers are a valuable national resource that provide life-saving and cost-effective public health services. For every dollar spent on poison control centers, $7 in medical costs are saved. The average cost of a poisoning exposure call is $32, while the average cost if other parts of the medical system are involved is $932. Over the last 2 decades, the instability and lack of funding has resulted in a steady decline in the number of poison control centers in the United States. Within just the last year, 2 poison control centers have been forced to close because of funding problems. A third poison control center is scheduled to close in April 1999. Currently, there are 73 such centers. (3) Stabilizing the funding structure and increasing accessibility to poison control centers will increase the number of United States residents who have access to a certified poison control center, and reduce the inappropriate use of emergency medical services and other more costly health care services. SEC. 3. DEFINITION. In this Act, the term ``Secretary'' means the Secretary of Health and Human Services. SEC. 4. ESTABLISHMENT OF A NATIONAL TOLL-FREE NUMBER. (a) In General.--The Secretary shall provide coordination and assistance to regional poison control centers for the establishment of a nationwide toll-free phone number to be used to access such centers. (b) Rule of Construction.--Nothing in this section shall be construed as prohibiting the establishment or continued operation of any privately funded nationwide toll-free phone number used to provide advice and other assistance for poisonings or accidental exposures. (c) Authorization of Appropriations.--There is authorized to be appropriated to carry out this section, $2,000,000 for each of the fiscal years 2000 through 2004. Funds appropriated under this subsection shall not be used to fund any toll-free phone number described in subsection (b). SEC. 5. ESTABLISHMENT OF NATIONWIDE MEDIA CAMPAIGN. (a) In General.--The Secretary shall establish a national media campaign to educate the public and health care providers about poison prevention and the availability of poison control resources in local communities and to conduct advertising campaigns concerning the nationwide toll-free number established under section 4. (b) Contract With Entity.--The Secretary may carry out subsection (a) by entering into contracts with 1 or more nationally recognized media firms for the development and distribution of monthly television, radio, and newspaper public service announcements. (c) Authorization of Appropriations.--There is authorized to be appropriated to carry out this section, $600,000 for each of the fiscal years 2000 through 2004. SEC. 6. ESTABLISHMENT OF A GRANT PROGRAM. (a) Regional Poison Control Centers.--The Secretary shall award grants to certified regional poison control centers for the purposes of achieving the financial stability of such centers, and for preventing and providing treatment recommendations for poisonings. 
(b) Other Improvements.--The Secretary shall also use amounts received under this section to-- (1) develop standard education programs; (2) develop standard patient management protocols for commonly encountered toxic exposures; (3) improve and expand the poison control data collection systems; (4) improve national toxic exposure surveillance; and (5) expand the physician/medical toxicologist supervision of poison control centers. (c) Certification.--Except as provided in subsection (d), the Secretary may make a grant to a center under subsection (a) only if-- (1) the center has been certified by a professional organization in the field of poison control, and the Secretary has approved the organization as having in effect standards for certification that reasonably provide for the protection of the public health with respect to poisoning; or (2) the center has been certified by a State government, and the Secretary has approved the State government as having in effect standards for certification that reasonably provide for the protection of the public health with respect to poisoning. (d) Waiver of Certification Requirements.-- (1) In general.--The Secretary may grant a waiver of the certification requirement of subsection (c) with respect to a noncertified poison control center or a newly established center that applies for a grant under this section if such center can reasonably demonstrate that the center will obtain such a certification within a reasonable period of time as determined appropriate by the Secretary. (2) Renewal.--The Secretary may only renew a waiver under paragraph (1) for a period of 3 years. (e) Supplement Not Supplant.--Amounts made available to a poison control center under this section shall be used to supplement and not supplant other Federal, State, or local funds provided for such center. (f) Maintenance of Effort.--A poison control center, in utilizing the proceeds of a grant under this section, shall maintain the expenditures of the center for activities of the center at a level that is not less than the level of such expenditures maintained by the center for the fiscal year preceding the fiscal year for which the grant is received. (g) Matching Requirement.--The Secretary may impose a matching requirement with respect to amounts provided under a grant under this section if the Secretary determines appropriate. (h) Authorization of Appropriations.--There is authorized to be appropriated to carry out this section, $25,000,000 for each of the fiscal years 2000 through 2004. Speaker of the House of Representatives. Vice President of the United States and President of the Senate.
Poison Control Center Enhancement and Awareness Act - Directs the Secretary of Health and Human Services to provide coordination and assistance to regional poison control centers for the establishment of a nationwide toll-free phone number to be used to access such centers. Authorizes appropriations, prohibiting use of the funds to fund any privately funded nationwide toll-free phone number used to provide advice and other assistance for poisonings or accidental exposures. Directs the Secretary to establish a national media campaign to educate the public about poison prevention and the availability of local poison control resources and to conduct advertising campaigns concerning the nationwide toll-free number. Authorizes appropriations. Directs the Secretary to award grants for certified regional poison control centers to achieve financial stability and to prevent, and provide treatment recommendations for, poisoning. Mandates other grant uses. Sets forth center certification requirements. Authorizes appropriations.
SECTION 1. SHORT TITLE AND PURPOSE. (a) Short Title.--This Act may be cited as the ``Video Game Rating Act of 1994''. (b) Purpose.--The purpose of this Act is to provide parents with information about the nature of video games which are used in homes or public areas, including arcades or family entertainment centers. SEC. 2. DEFINITIONS. For purposes of this Act-- (1) the terms ``video games'' and ``video devices'' mean any interactive computer game, including all software, framework and hardware necessary to operate a game, placed in interstate commerce; and (2) the term ``video game industry'' means all manufacturers of video games and related products. SEC. 3. THE INTERACTIVE ENTERTAINMENT RATING COMMISSION. (a) Establishment.--There is established the Interactive Entertainment Rating Commission (hereafter in this Act referred to as the ``Commission'') which shall be an independent establishment in the executive branch as defined under section 104 of title 5, United States Code. (b) Members of the Commission.--(1)(A) The Commission shall be composed of 5 members. No more than 3 members shall be affiliated with any 1 political party. (B) The members shall be appointed by the President, by and with the advice and consent of the Senate. The President shall designate 1 member as the Chairman of the Commission. (2) All members shall be appointed within 60 days after the date of the enactment of this Act. (c) Terms.--Each member shall serve until the termination of the Commission. (d) Vacancies.--A vacancy on the Commission shall be filled in the same manner as the original appointment. (e) Compensation of Members.--(1) The Chairman shall be paid at a rate equal to the daily equivalent of the minimum annual rate of basic pay payable for level IV of the Executive Schedule under section 5314 of title 5, United States Code, for each day (including traveltime) during which the Chairman is engaged in the performance of duties vested in the Commission. (2) Except for the Chairman who shall be paid as provided under subparagraph (A), each member of the Commission shall be paid at a rate equal to the daily equivalent of the minimum annual rate of basic pay payable for level V of the Executive Schedule under section 5315 of title 5, United States Code, for each day (including traveltime) during which the member is engaged in the performance of duties vested in the Commission. (3) The amendments made by this subsection are repealed effective on the date of termination of the Commission. (f) Staff.--(1) The Chairman of the Commission may, without regard to the civil service laws and regulations, appoint and terminate an executive director and such other additional personnel as may be necessary to enable the Commission to perform its duties. The employment of an executive director shall be subject to confirmation by the Commission. (2) The Chairman of the Commission may fix the compensation of the executive director and other personnel without regard to the provisions of chapter 51 and subchapter III of chapter 53 of title 5, United States Code, relating to classification of positions and General Schedule pay rates, except that the rate of pay for the executive director and other personnel may not exceed the rate payable for level V of the Executive Schedule under section 5316 of such title. (g) Consultants.--The Commission may procure by contract, to the extent funds are available, the temporary or intermittent services of experts or consultants under section 3109 of title 5, United States Code. 
The Commission shall give public notice of any such contract before entering into such contract. (h) Funding.--(1) There are authorized to be appropriated to the Commission such sums as are necessary to enable the Commission to carry out its duties under this Act, such sums to remain available until December 31, 1996. (2) The Commission shall set a reasonable user fee which shall be calculated to be sufficient to reimburse the United States for all sums appropriated under subparagraph (1). (i) Termination.--The Commission shall terminate on the earlier of-- (1) December 31, 1996; or (2) 90 days after the Commission submits a written determination to the President that voluntary standards are established that are adequate to warn purchasers of the violent or sexually explicit content of video games. SEC. 4. AUTHORITY AND FUNCTIONS OF THE COMMISSION. (a) Voluntary Standards.--(1) The Commission shall-- (A) during the 1-year period beginning on the date of the enactment of this Act, and to the greatest extent practicable, coordinate with the video game industry in the development of a voluntary system for providing information concerning the contents of video games to purchasers and users; and (B) 1 year after the date of enactment of this Act-- (i) evaluate whether any voluntary standards proposed by the video game industry are adequate to warn purchasers and users about the violence or sexually explicit content of video games; and (ii) determine whether the voluntary industry response is sufficient to adequately warn parents and users of the violence or sex content of video games. (2) If before the end of the 1-year period beginning on the date of the enactment of this Act, the Commission makes a determination of adequate industry response under paragraph (1)(B)(ii) and a determination that sufficient voluntary standards are established, the Commission shall-- (A) submit a report of such determinations and the reasons therefor to the President and the Congress; and (B) terminate in accordance with section 3(i)(2). (b) Regulatory Authority.--Effective on and after the date occurring 1 year after the date of the enactment of this Act the Commission may promulgate regulations requiring manufacturers and sellers of video games to provide adequate information relating to violence or sexually explicit content of such video games to purchasers and users. SEC. 5. ANTITRUST EXEMPTION. The antitrust laws as defined in subsection (a) of the first section of the Clayton Act (15 U.S.C. 45) and the law of unfair competition under section 5 of the Federal Trade Commission Act (15 U.S.C. 45) shall not apply to any joint discussion, consideration, review, action, or agreement by or among persons in the video game industry for the purpose of, and limited to, developing and disseminating voluntary guidelines designed to provide appropriate information regarding the sex or violence content of video games to purchasers of video games at the point of sale or initial use or other users of such video games. The exemption provided for in this subsection shall not apply to any joint discussion, consideration, review, action, or agreement which results in a boycott of any person.
Video Game Rating Act of 1994 - Establishes the Interactive Entertainment Rating Commission to: (1) coordinate with the video game industry in the development of a voluntary standard for providing information to purchasers and users concerning the contents of video games; (2) evaluate whether any standards proposed are adequate to warn purchasers and users of the violent or sexually explicit content of such games; and (3) report to the President and the Congress regarding the adequacy of the industry's response. Provides Commission funding through December 31, 1996. Directs the Commission to set a reasonable user fee calculated to be sufficient to reimburse the United States for all sums so appropriated. Terminates the Commission on the earlier of such date or 90 days after submission of its report. Provides an antitrust exemption for any actions taken by the video game industry in developing such guidelines.
SECTION 1. SHORT TITLE. This Act may be cited as the ``Community Recovery and Enhancement Act of 2011'' or the ``CRE Act of 2011''. SEC. 2. DEDUCTION FOR CERTAIN PAYMENTS MADE REDUCE DEBT ON COMMERCIAL REAL PROPERTY. (a) In General.--Part VI of subchapter B of chapter 1 of the Internal Revenue Code of 1986 (relating to additional itemized deductions for individuals and corporations) is amended by adding at the end the following new section: ``SEC. 199A. DEDUCTION FOR PAYMENTS MADE TO REDUCE DEBT ON COMMERCIAL REAL PROPERTY. ``(a) In General.--There shall be allowed as a deduction an amount equal to 50 percent of any qualified debt reduction payment made during the taxable year by the taxpayer with respect to qualified indebtedness on eligible commercial real property held by the taxpayer. ``(b) Maximum Deduction.--The deduction allowed by subsection (a) for any taxable year shall not exceed, with respect to each eligible commercial real property, whichever of the following amounts is the least: ``(1) The amount equal to 50 percent of the excess (if any) of-- ``(A) the amount of the qualified indebtedness secured by such property as of the beginning of such taxable year (reduced by amounts required to be paid under the terms of the loan during such taxable year), over ``(B) 50 percent of the fair market value of such property as of the close of the taxable year. ``(2) $10,000,000. ``(3) The adjusted basis of such property as of the close of such taxable year (determined without regard to qualified debt reduction payments made during the taxable year and depreciation for such year). ``(c) Eligible Commercial Real Property.--For purposes of this section, the term `eligible commercial real property' means any commercial real property if-- ``(1) as of the beginning of the taxable year, the amount of the qualified indebtedness secured by such property is at least equal to 85 percent of the fair market value of the property, or ``(2) such property is, or is reasonably expected to be, treated as being in an in-substance foreclosure by the Comptroller of the Currency. ``(d) Qualified Debt Reduction Payment.--For purposes of this section, the term `qualified debt reduction payment' means the amount of cash paid by the taxpayer during the taxable year to reduce the principal amount of qualified indebtedness of the taxpayer but only to the extent such amount exceeds the amounts required to be paid under the terms of the loan during such taxable year. ``(e) Property Held by a Partnership.-- ``(1) In general.--In the case of property held by a partnership, a qualified debt reduction payment by the partnership may be taken into account under this section only if-- ``(A) such payment is attributable to a qualified equity investment made by a partner in such partnership, and ``(B) any deduction under this section which is attributable to such investment is properly allocated to such partner under section 704(b). 
``(2) Qualified equity investment.--For purposes of this section-- ``(A) In general.--The term `qualified equity investment' means any equity investment (as defined in section 45D(b)(6)) in a partnership if-- ``(i) such investment is acquired by the partner at its original issue (directly or through an underwriter) solely in exchange for cash, ``(ii) at least 80 percent of such cash is used by the partnership to reduce the principal amount of qualified indebtedness of the partnership, ``(iii) the portion of such cash not so used is used by the partnership for improvements to commercial real property held by the partnership, and ``(iv) the person or persons otherwise entitled to depreciation with respect to the portion of the basis of the property being reduced under subsection (g)(1) consent to such reduction. ``(B) Redemptions.--A rule similar to the rule of section 1202(c)(3) shall apply for purposes of this paragraph. ``(f) Other Definitions.--For purposes of this section-- ``(1) Qualified indebtedness.--The term `qualified indebtedness' means any indebtedness-- ``(A) which is incurred or assumed by the taxpayer on or before January 1, 2009, and ``(B) which is secured by commercial real property held by the taxpayer at the time the qualified debt reduction equity payment is made by the taxpayer. ``(2) Commercial real property.--The term `commercial real property' means section 1250 property (as defined in section 1250(c)); except that such term shall not include residential rental property (as defined in section 168(e)(2)) unless the building contains at least 3 dwelling units. ``(g) Application of Section 1250.--For purposes of determining the depreciation adjustments under section 1250 with respect to any property-- ``(1) the deduction allowed by this section shall be treated as a deduction for depreciation, and ``(2) the depreciation adjustments in respect of such property shall include all deductions allowed by this section to all taxpayers by reason of reducing the debt secured by such property. ``(h) Special Rules.-- ``(1) Basis reduction.--The basis of any property with respect to which any qualified debt reduction payment is made shall be reduced by the amount of the deduction allowed by this section by reason of such payment. ``(2) Refinancings.--The indebtedness described in subsection (f)(1)(A) shall include indebtedness resulting from the refinancing of indebtedness described in such subsection (or this sentence), but only to the extent it does not exceed the amount of the indebtedness being refinanced. ``(3) Denial of deduction for debt-financed payments.--No deduction shall be allowed by this section for any qualified debt reduction payment-- ``(A) to the extent indebtedness is incurred or continued by the taxpayer to make such payment, and ``(B) in the case of a qualified debt reduction payment made by a partnership on qualified indebtedness on commercial real property held by the partnership, to the extent of indebtedness-- ``(i) which is incurred or continued by any partner to whom such payment is allocable, and ``(ii) which is secured by such partner's interest in the partnership or by such commercial real property. ``(4) Treatment of amounts required to be paid by reason of loan default.--For purposes of subsections (b)(1)(A) and (d), accelerated payments required to be made under the terms of a loan solely by reason of a default on the loan shall not be taken into account. 
``(5) Recapture of deduction if additional debt within 3 years.-- ``(A) In general.--If a taxpayer incurs any additional debt within 3 years after the date that the taxpayer made a qualified debt reduction payment, the ordinary income of the taxpayer making such payment shall be increased by the applicable percentage of the recaptured deduction. ``(B) Recaptured deduction.--For purposed of this paragraph, the recaptured deduction is the excess of-- ``(i) the deduction allowed by subsection (a) on account of a qualified debt reduction payment, over ``(ii) the deduction which would have been so allowed if such payment had been reduced by the additional debt. ``(C) Applicable percentage.--The applicable percentage shall be determined in accordance with the following table: ``If, of the 3 years referred to in The applicable subparagraph (A), the additional percentage is: debt occurs during the: 1st such year...................................... 100 2d such year....................................... 66 2/3 3d such year....................................... 33 1/3. ``(D) Partnerships.-- ``(i) Allocation of income inclusion.--Any increase in the ordinary income of a partnership by reason of this paragraph shall be allocated (under regulations prescribed by the Secretary) among the partners receiving a deduction under this section by reason of making qualified equity investments in the partnership. ``(ii) Debt-financed equity investment by partner.--Rules similar to the rules of the paragraph shall apply in cases where additional debt is incurred by a partner making a qualified equity investment in a partnership. ``(E) Subsequent deprecation.--The deductions under section 168 for periods after a recaptured deduction under this paragraph shall be determined as if the portion of the qualified debt reduction payment allocable to the recaptured deduction had never been made. ``(6) Recapture where partner disposes of interest in partnership.--If any partner to whom a deduction under this section is allocable by reason of making a qualified equity investment in a partnership disposes of any portion of such partner's interest in the partnership within 1 year after the date such investment was made, the ordinary income of such partner shall be increased by the amount which bears the same ratio to the deduction allowed on account of such investment as such portion bears to such partner's interest in the partnership immediately before such disposition. ``(7) Exemption from passive loss rules.--Section 469 shall not apply to the deduction allowed by this section. ``(i) Application of Section.--This section shall apply to qualified debt reduction payments made within the 2-year period beginning on the day after the date of the enactment of this section.''. (b) Earnings and Profits.--Subsection (k) of section 312 of such Code is amended by adding at the end the following new paragraph: ``(6) Treatment of section 199a.--Paragraphs (1) and (3) shall not apply to the deduction allowed by section 199A.''. (c) Clerical Amendment.--The table of sections for part VI of subchapter B of chapter 1 of such Code is amended by adding at the end the following new item: ``Sec. 199A. Deduction for payments made to reduce debt on commercial real property.''. (d) Effective Date.--The amendments made by this section shall apply to taxable years ending after the date of the enactment of this Act.
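A hypothetical worked example of the recapture rule in paragraph (5) (editorial illustration only; the dollar figures are invented and the limits of subsection (b) are assumed not to apply):

```latex
% Year 0: qualified debt reduction payment of \$4{,}000{,}000
%   \Rightarrow deduction allowed = 50\% \times \$4{,}000{,}000 = \$2{,}000{,}000.
% Year 2: additional debt of \$1{,}000{,}000 is incurred.
\text{recaptured deduction} = \$2{,}000{,}000 - 50\%\times(\$4{,}000{,}000-\$1{,}000{,}000) = \$500{,}000
% Applicable percentage for the 2d year = 66\tfrac{2}{3}\%:
\text{ordinary income inclusion} = 66\tfrac{2}{3}\%\times\$500{,}000 \approx \$333{,}333
```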
Community Recovery and Enhancement Act of 2011 or the CRE Act of 2011 - Amends the Internal Revenue Code to allow a tax deduction for payments made to reduce debt on eligible commercial property. Limits the amount of such deduction to the lesser of: (1) 50% of the excess of the amount of qualified debt secured by such property, (2) 50% of the fair market value of such property, (3) $10 million, or (4) the adjusted basis of such property at the close of the taxable year. Defines "eligible commercial property" as any commercial real property if: (1) the amount of the qualified indebtedness secured by such property is at least 85% of the fair market value of the property, or (2) such property is, or is reasonably expected to be, treated as being in an in-substance foreclosure by the Comptroller of the Currency. Denies a tax deduction for debt reduction payments that are debt-financed. Requires a recapture in income of tax deduction amounts allowed by this Act if additional indebtedness is incurred within three years after a qualified debt reduction payment is made.
insulating rare - earth titanate pyrochlores ( a@xmath0ti@xmath0o@xmath1 , where a is tri - positive rare - earth ion ) are known to show complex magnetic behaviors , arising from the geometrical frustration of exchange interaction between the rare - earth spins located on an infinite network of corner - sharing tetrahedrons @xcite . theoretically , for antiferromagnetically coupled classical or heisenberg spins on the pyrochlore lattice the magnetic ground state should be infinitely degenerate @xcite . however , the ubiquitous presence of residual terms , like next near - neighbor interactions , crystal field and dipolar interactions can remove this macroscopic degeneracy either completely or partially leading often to complex spin structures at low temperatures @xcite . the only member of the a@xmath0ti@xmath0o@xmath1 series where the presence of residual terms has apparently no significant influence on the spin - dynamics is tb@xmath0ti@xmath0o@xmath1 . in this pyrochlore , the strength of antiferromagnetic exchange is of the order of 20 k , however despite this , the tb@xmath3 spins show no signs of freezing or long - range ordering down to a temperature of at least 70 mk @xcite . it has been shown , however , that this `` collective paramagnetic '' or the so - called `` spin - liquid '' state of tb@xmath3 moments is instable under high - pressure @xcite . using powder neutron diffraction experiments , mirebeau _ et al . _ @xcite showed that application of iso - static pressure of about 8.6 gpa in tb@xmath0ti@xmath0o@xmath1 induces a long - range order of tb spins coexisting with the spin - liquid . since no indication of pressure induced structural deformation was observed in this study , the spin - crystallization , under pressure , was believed to have resulted from the break - down of a delicate balance among the residual terms . recently , the vibrational properties of some of these pyrochlores have been investigated by several groups @xcite . these studies not only show that phonons in the titanate pyrochlores are highly anomalous , but also indicate the extreme sensitivity of vibrational spectroscopy towards probing subtle structural and electronic features not observed . in the pyrochlore dy@xmath0ti@xmath0o@xmath1 , raman spectroscopy revealed a subtle structural deformation of the pyrochlore lattice upon cooling below t = 100 k @xcite . in the pyrochlore tb@xmath0ti@xmath0o@xmath1 , new crystal - field ( cf ) excitations were identified using raman data at t = 4 k @xcite . in the temperature - dependent studies , signature of highly anomalous phonons ( i.e. , decrease of phonon frequency upon cooling ; also referred to as phonon softening ) has been witnessed in the pyrochlores er@xmath0ti@xmath0o@xmath1 @xcite , gd@xmath0ti@xmath0o@xmath1 @xcite and dy@xmath0ti@xmath0o@xmath1 @xcite . the effect of pressure , at ambient temperature , has also been studied recently for several of these titanate pyrochlores . sm@xmath0ti@xmath0o@xmath1 and gd@xmath0ti@xmath0o@xmath1 pick up anion disorder above 40 gpa and become amorphous above 51 gpa @xcite . gd@xmath0ti@xmath0o@xmath1 exhibits a structural deformation near 9 gpa @xcite . in this paper we present raman and powder x - ray diffraction studies on the pyrochlore tb@xmath0ti@xmath0o@xmath1 . these studies were carried out in the temperature range between room temperature and 27 k ; and pressure varying from ambient pressure to 25 gpa . our study reveals highly anomalous softening of the phonons upon cooling . 
to understand this anomalous behavior , we have estimated the quasiharmonic contribution to the temperature - dependent shift of the frequencies of different raman phonons using the mode grneisen parameters obtained from high - pressure raman data ; and bulk modulus and thermal expansion coefficient obtained from high - pressure and temperature - dependent powder x - ray diffraction data , respectively . these analyses allow us to extract the changes in the phonon frequencies arising solely due to anharmonic interactions . we also bring out the effect of pressure on phonons manifesting a subtle structural deformation of the lattice near 9 gpa which is corroborated by a change in the bulk modulus by @xmath2 62% . this observation may have relevance to the observations of powder neutron scattering study @xcite mentioned above . while this paper was being written , we came across a very recent temperature - dependent study @xcite on this system revealing phonon softening behavior and the coupling of phonons with the crystal field transitions . we shall compare below our results on the temperature dependence of phonons with those of this recent study and quantify the quasiharmonic and anharmonic contributions to the change in phonon frequencies . stoichiometric amounts of tb@xmath0o@xmath4 ( 99.99 @xmath5 ) and tio@xmath0 ( 99.99 @xmath5 ) were mixed thoroughly and heated at 1200 @xmath6c for about 15 h. the resulting mixture was well ground and isostatically pressed into rods of about 6 cm long and 5 mm diameter . these rods were sintered at 1400 @xmath6c in air for about 72 h. this procedure was repeated until the compound tb@xmath0ti@xmath0o@xmath1 was formed , as revealed by powder x - ray diffraction analysis , with no traces of any secondary phase . these rods were then subjected to single crystal growth by the floating - zone method in an infrared image furnace under flowing oxygen . x - ray diffraction measurement was carried out on the powder obtained by crushing part of a single crystalline sample and energy dispersive x - ray analysis in a scanning electron microscope indicated a pure pyrochlore tb@xmath0ti@xmath0o@xmath1 phase . the lue back - reflection technique was used to orient the crystal along the principal crystallographic directions . raman spectroscopic measurements on a ( 111 ) cut thin single - crystalline slice ( 0.5 mm thick and 3 mm in diameter , polished down to a roughness of almost 10 @xmath7 m ) of tb@xmath0ti@xmath0o@xmath1 were performed at low temperatures in back - scattering geometry , using the 514.5 nm line of an @xmath8 ion laser ( spectra - physics ) with @xmath2 20 mw of power falling on the sample . temperature scanning was done using a cti - cryogenics closed cycle refrigerator . temperature was measured and controlled ( with a maximum error of 0.5 k ) using a calibrated pt - sensor and a cryo - con 32b temperature controller . the scattered light was collected by a lens and was analyzed using a computer controlled spex ramalog spectrometer having two holographic gratings ( 1800 groves / mm ) coupled to a peltier - cooled photo multiplier tube connected to a digital photon counter . high - pressure raman experiments were carried out at room temperature up to @xmath2 25 gpa in a mao - bell type diamond anvil cell ( dac ) . 
a single crystalline tb@xmath0ti@xmath0o@xmath1 sample ( size @xmath2 50 @xmath7 m ) was placed with a ruby chip ( size @xmath2 10 @xmath7 m ) in a hole of @xmath2 200 @xmath7 m diameter drilled in a preindented stainless - steel gasket with a mixture of 4:1 methanol and ethanol as the pressure - transmitting medium . pressure was calibrated using the ruby fluorescence technique @xcite . high resolution x - ray diffraction measurements were performed between 10 - 300 k ( with temperature accuracy better than 0.5 k ) using a highly accurate two - axis diffractometer in a bragg - brentano geometry ( focalization circle of 50 @xmath7 m ) using the cu - k@xmath9 line ( @xmath10=1.39223 ) of a 18 kw rotating anode . for high - pressure x - ray experiments , single crystalline tb@xmath0ti@xmath0o@xmath1 samples were crushed into fine powder which was loaded along with a few particles of copper , in a hole of @xmath2 120 @xmath7 m diameter drilled in a preindented ( @xmath2 70 @xmath7 m thick ) tungsten gasket of a mao - bell - type diamond - anvil cell ( dac ) . the pressure - transmitting medium was methanol - ethanol - water ( 16:3:1 ) mixture , which remains hydrostatic until a pressure of @xmath2 15 gpa . pressure was determined from the known equation of state of copper @xcite . high - pressure angle dispersive x - ray diffraction experiments were carried out up to @xmath2 25 gpa on tb@xmath0ti@xmath0o@xmath1 at the 5.2@xmath11 ( xrd1 ) beamline of the elettra synchrotron source ( italy ) with monochromatized x - rays ( @xmath12= 0.69012 ) . the diffraction patterns were recorded using a mar345 imaging plate detector kept at a distance of @xmath2 20 cm from the sample . two - dimensional ( 2d ) imaging plate records were transformed into one - dimensional ( 1d ) diffraction profiles by radial integration of the diffraction rings using the fit2d software @xcite . pyrochlores belong to the space group @xmath13 with an @xmath14 stoichiometry , where @xmath15 occupies the 16d and @xmath16 occupies the 16c wyckoff positions and the oxygen atoms o and o@xmath17 occupy the 48@xmath18 and 8@xmath19 sites , respectively . factor group analysis for this family of structures gives six raman active modes ( @xmath20 ) and seven infrared active modes ( @xmath21 ) . raman spectra of tb@xmath0ti@xmath0o@xmath1 have been recorded between 125 to 925 @xmath22 from room temperature down to 27 k. a strong rayleigh contribution made the signal to noise ratio poor below 125 @xmath22 . [ fig:1 ] shows the raman spectrum at 27 k , fitted with lorentzians and labeled as p1 to p9 . following previous reports @xcite , the modes can be assigned as follows : p3 ( 294 @xmath22 , @xmath23 ) , p4 ( 325 @xmath22 , @xmath24 ) , p5 ( 513 @xmath22 , @xmath25 ) and p6 ( 550 @xmath22 , @xmath23 ) . one @xmath23 mode near 425 @xmath22 ( observed in other pyrochlore titanates @xcite ) could not be observed due to weak signal . the mode p1 ( 170 @xmath22 ) has been assigned to be the fourth @xmath23 mode by refs . however , there has been a controversy on the assignment of the p1 mode @xcite and we , therefore , assign the mode p7 ( 672 @xmath22 ) as the fourth @xmath23 mode . we support this assignment for the following reason : it is well established that the symmetry - allowed six raman active modes ( @xmath20 ) in pyrochlore involve only the vibrations of oxygen atoms . this will imply that isotopic substitution by o@xmath26 in pyrochlore should lower the phonon frequencies by @xmath2 5 % . 
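Two standard relations underlying the experimental description above, given as an editorial reconstruction (the @xmath placeholders hide the originals, so these are the commonly used forms rather than quotations of the paper): the ruby fluorescence pressure scale and the zone-centre mode decomposition of the pyrochlore lattice.

```latex
% Ruby fluorescence pressure calibration (Mao-type scale; assumed form):
P(\mathrm{GPa}) \;=\; \frac{A}{B}\left[\left(1+\frac{\Delta\lambda}{\lambda_0}\right)^{B}-1\right],
\qquad A \simeq 1904\ \mathrm{GPa},\; B \simeq 7.665,\; \lambda_0 \simeq 694.2\ \mathrm{nm}
% Factor-group analysis of the Fd\bar{3}m pyrochlore structure (standard result for the
% six Raman-active and seven infrared-active modes referred to in the text):
\Gamma_{\mathrm{Raman}} = A_{1g} + E_{g} + 4F_{2g}, \qquad \Gamma_{\mathrm{IR}} = 7F_{1u}
```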
this has , indeed , been seen in our recent experiments @xcite on dy@xmath0ti@xmath0o@xmath1 and lu@xmath0ti@xmath0o@xmath1 for the modes p3 to p9 but not for p1 and p2 . another argument against p7 being a combination mode is the pressure dependence of the modes presented later ( fig . [ fig:7 ] ) . possible candidates for the combination are @xmath27 and @xmath28 . the pressure derivative of frequency of the mode p7 ( @xmath29 ) does not agree with the sum of the pressure derivatives of the individual modes . next , the question arises on the origin of the modes p1 and p2 . since these modes are also seen in gd@xmath0ti@xmath0o@xmath1 and in non - magnetic lu@xmath0ti@xmath0o@xmath1 , their crystal field ( cf ) origin can be completely ruled out . we , therefore , attribute these low frequency modes to disorder induced raman active modes . the high frequency modes ( p8 and p9 ) are possibly second - order raman modes @xcite . we have recorded raman spectra of tb@xmath0ti@xmath0o@xmath1 from room temperature down to 27 k and followed the temperature dependence of the modes p1 , p3 , p4 , p5 and p7 . as shown in fig . [ fig:2 ] , the modes p1 , p3 , p5 and p7 soften with decreasing temperature . since the raman bands p2 and p6 are weak near room temperature , their temperature dependence is not shown . it needs to be mentioned that temperature - dependent anomalies of the modes p1 , p5 and p7 have also been reported in other pyrochlore titanates @xcite and attributed to phonon - phonon anharmonic interactions . however , anomalous behavior of the @xmath23(p3 ) mode near 300 @xmath22 has been reported only in the non - magnetic lu@xmath0ti@xmath0o@xmath1 pyrochlore @xcite . we evidence a similar anomaly in p3 in tb@xmath0ti@xmath0o@xmath1 with unusually broad linewidth . recently , maczka _ et al . _ @xcite have also reported this unusually broad linewidths in tb@xmath0ti@xmath0o@xmath1 which has been explained in terms of coupling between phonon and crystal field transition . temperature dependence of a phonon mode ( @xmath30 ) of frequency @xmath31 can be expressed as @xcite , @xmath32 the term @xmath33 corresponds to the phonon frequency at absolute zero . in eqn . 1 above , the first term on the right hand side corresponds to quasiharmonic contribution to the frequency change . the second term corresponds to the intrinsic anharmonic contribution to phonon frequency that comes from the real part of the self - energy of the phonon decaying into two phonons ( cubic anharmonicity ) or three phonons ( quartic anharmonicity ) . the third term @xmath34 is the renormalisation of the phonon energy due to coupling of phonons with charge carriers in the system which is absent in insulating pyrochlore titanates . the last term , @xmath35 , is the change in phonon frequency due to spin - phonon coupling arising from modulation of the spin exchange integral by the lattice vibration . recently , we have shown @xcite that the magnitude of phonon anomalies is comparable in both magnetic and non - magnetic pyrochlore titanates , thus ruling out any contribution from spin - phonon coupling . therefore , the change in phonon frequency is solely due to quasiharmonic and intrinsic anharmonic effects whose temperature variations , as estimated below for the modes p1 , p3 , p5 and p7 , are shown in fig . [ fig:3 ] . the change in phonon frequency due to quasiharmonic effects ( @xmath36 ) comes from the change in the unit cell volume . 
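Equation (1) and the quasiharmonic/Grüneisen relations discussed here and in the next paragraph survive only as placeholders in this extraction; the standard forms below are offered as a hedged reconstruction of what such analyses ordinarily use, not as a quotation of the paper.

```latex
% Assumed reconstruction of eq 1: decomposition of the temperature-dependent frequency of
% the i-th phonon into quasiharmonic, intrinsic anharmonic, electron-phonon and
% spin-phonon contributions.
\omega_i(T) \;=\; \omega_i(0) + \Delta\omega_i^{\mathrm{qh}}(T) + \Delta\omega_i^{\mathrm{anh}}(T)
+ \Delta\omega_i^{\text{el-ph}}(T) + \Delta\omega_i^{\text{sp-ph}}(T)
% Quasiharmonic (volume-driven) shift and mode Gruneisen parameter (assumed eqs 2-3),
% with \alpha_V(T) the volume thermal expansion coefficient and B the bulk modulus:
\Delta\omega_i^{\mathrm{qh}}(T) \;=\; -\,\omega_i(0)\,\gamma_i\int_0^{T}\alpha_V(T')\,dT',
\qquad \gamma_i \;=\; \frac{B}{\omega_i}\left(\frac{\partial\omega_i}{\partial P}\right)_T
```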
this change can be expressed as @xcite , @xmath37 where @xmath33 is the frequency of the @xmath38 phonon mode at 0 k , @xmath39 is the temperature - dependent grneisen parameter of that phonon and @xmath40 is the temperature - dependent coefficient of the volume expansion . since our lowest temperature is 27 k , the quasiharmonic change can be approximated as , @xmath41 assuming the grneisen parameter to be temperature independent . to measure the @xmath42 , we have recorded x - ray diffraction patterns of tb@xmath0ti@xmath0o@xmath1 from room temperature to 10 k. we present the temperature - dependent lattice parameter in fig . [ fig:4 ] . our data agree with the recent data by ruff _ _ @xcite . the solid line in fig . [ fig:4 ] is a fit to our data by the relation @xmath43 $ ] , where @xmath44=10.14 @xmath45 is the lattice constant at 0 k and b=9.45 k and c=648.5 k are fitting parameters @xcite . in a recent study by ruff et al . @xcite , it was shown that the lattice undergoes an anomalous expansion along with broadening of allowed bragg peaks as temperature is reduced below @xmath2 10 k. this was attributed to structural fluctuation from cubic - to - tetragonal lattice that consequently coincides with the development of correlated spin - liquid ground state in tb@xmath0ti@xmath0o@xmath1 . our data are up to 10 k and hence , we could not observe this feature at low temperatures . we have derived the temperature - dependent coefficient of thermal expansion ( @xmath46 ) from the temperature - dependent lattice parameter which is shown in the inset of fig . [ fig:4 ] . the @xmath47 at 300 k for tb@xmath0ti@xmath0o@xmath1 is ( @xmath48 ) , slightly higher than those of dy@xmath0ti@xmath0o@xmath1 ( @xmath49 ) and lu@xmath0ti@xmath0o@xmath1 ( @xmath50 ) , estimated from the temperature - dependent x - ray diffraction results reported in ref . we note that @xmath51 for tb@xmath0ti@xmath0o@xmath1 is about 10 times higher than that of si @xcite and nearly 7 times higher than that of gd@xmath0zr@xmath0o@xmath1 @xcite , implying that the anharmonic interactions in tb@xmath0ti@xmath0o@xmath1 are strong . the mode grneisen parameter for @xmath38 phonon mode is @xmath52 , where @xmath53 is the bulk modulus , @xmath54 is the frequency change with pressure @xmath55 . taking b=154 gpa , obtained from our high pressure x - ray diffraction data discussed later , we find the values of the grneisen parameter for the various modes as listed in table - i . the change in phonon frequency due to quasiharmonic effect , @xmath56 , has been estimated for the modes p1 , p3 , p5 and p7 , and is shown in the insets of fig . [ fig:3 ] . the anharmonic contribution , @xmath57 , for the modes p1 , p3 , p5 and p7 are shown in fig . [ fig:3 ] . we note that the temperature - dependent @xmath58 for these four modes is anomalous . further , upon changing the temperature from 27 k to 300 k , we find that for the mode p1 , the percentage change in frequency due to anharmonic interactions , @xmath59 , is exceptionally high . it is customary to fit the @xmath58 data by the expression @xcite , @xmath60 where the @xmath38 phonon decays into two phonons of equal energy ( @xmath61 ) . the parameter `` @xmath62 '' can be positive ( for normal behavior of phonon ) or negative ( anomalous phonon ) @xcite . we have seen that eqn . 4 does not fit to our data of @xmath58 ( fitting not shown in fig . [ fig:3 ] ) . this may be because , in the expression for @xmath58 ( eqn . 
4 ) , all the decay channels for the phonons are not taken into account . therefore , a full calculation for the anharmonic interactions considering all the possible decay channels is required to understand the @xmath58 data , shown in fig . [ fig:3 ] . considering only the cubic phonon - phonon anharmonic interactions where a phonon decays into two phonons of equal energy , the temperature - dependent broadening of the linewidth can be expressed as @xcite : @xmath63 where @xmath33 is the zero temperature frequency and @xmath64 is the linewidth arising from disorder . [ fig:5 ] shows the temperature dependence of linewidths of the raman modes p3 , p4 and p5 . it can be seen that the linewidth of p3 and p4 modes are almost double of the linewidth of the p5 mode , as reported by maczka _ these authors have attributed this to the strong coupling of the @xmath23(p3 ) and @xmath24(p4 ) phonons with the crystal field transitions of tb@xmath3 which is absent for the @xmath25 ( p5 ) mode due to symmetry consideration . to strengthen this argument , we compare ( fig . [ fig:5 ] ) these results with the linewidths of the corresponding phonons in non - magnetic lu@xmath0ti@xmath0o@xmath1 ( lu@xmath3 : j=0 ) @xcite and , indeed , the linewidths of p3 and p4 modes in tb@xmath0ti@xmath0o@xmath1 are much broader than those in lu@xmath0ti@xmath0o@xmath1 . the change in linewidth of the @xmath23(p3 ) mode in tb@xmath0ti@xmath0o@xmath1 from room temperature down to 27 k is nearly half the change in linewidth of the same mode in lu@xmath0ti@xmath0o@xmath1 and , therefore , the parameter `` @xmath65 '' for @xmath23(p3 ) mode in tb@xmath0ti@xmath0o@xmath1 is 7.2 @xmath22 , which is nearly half of that in lu@xmath0ti@xmath0o@xmath1 ( @xmath65=13.4 @xmath22 ) . however , the linewidth of the @xmath25 mode for both titanates is comparable and , therefore , the fitting parameter `` @xmath65 '' for this mode in tb@xmath0ti@xmath0o@xmath1 and lu@xmath0ti@xmath0o@xmath1 are nearly the same , i.e. , 7.8 @xmath22 and 7.3 @xmath22 , respectively . all these results , therefore , corroborate the suggestion of maczka _ et al . _ @xcite thus emphasizing a strong coupling between the phonon and crystal field modes . [ fig:6 ] shows room temperature raman spectra at ambient and a few high pressures , the maximum pressure being @xmath2 25 gpa . we could not resolve p2 and p9 at room temperature inside the high pressure cell due to the reasons described above . the phonon frequencies increase with increasing pressure , as shown in figs . [ fig:6 ] and [ fig:7 ] . interestingly , we find that upon increasing the pressure , the intensity of the p1 mode diminishes and is no longer resolvable above @xmath2 9 gpa . on decompressing the sample from @xmath2 25 gpa , the mode recovers , as shown in the top panel of fig . [ fig:6 ] . similarly the mode p6 ( @xmath23 ) also vanishes above @xmath2 9 gpa and reappears on decompression . the intensity ratios of the modes p1 to p3 and p6 to p5 , as shown in fig . [ fig:8 ] , gradually decrease with increasing pressure and become zero near 9 gpa . as shown in fig . [ fig:7 ] , the maximum change in phonon frequency is seen in mode p7 ( @xmath23 ) , which shows a dramatic change in the rate of change of frequency with pressure at a pressure of @xmath2 9 gpa . in sharp contrast , the other modes p3 , p4 and p5 do not show any change in slope till the maximum pressure applied . 
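returning briefly to the temperature - dependent fits described earlier in this section ( the anharmonic frequency shift of eqn . 4 and the linewidth broadening expression ) , the standard symmetric two - phonon ( cubic anharmonic ) decay forms consistent with that description are written out below ; this is a hedged reconstruction , since the equations themselves appear only as placeholders , and the prefactors are written generically .

```latex
% standard symmetric two-phonon (cubic anharmonic) decay forms; a reconstruction
% consistent with the description in the text, not the authors' exact equations.
% the thermal factor is the Bose occupation n(\omega)=1/(e^{\hbar\omega/k_BT}-1)
% evaluated at \omega_0/2 (decay into two phonons of equal energy).
\Delta\omega_{\mathrm{anh}}(T) \simeq a\left[\,1+\frac{2}{e^{\hbar\omega_0/2k_BT}-1}\,\right] ,
\qquad
\Gamma(T) \simeq \Gamma_{0}+c\left[\,1+\frac{2}{e^{\hbar\omega_0/2k_BT}-1}\,\right] ,
```

here the prefactor of the first expression plays the role of the parameter `` @xmath62 '' ( positive for normal , negative for anomalous phonons ) and that of the second the role of `` @xmath65 '' ( e.g. , 7.2 @xmath22 for the @xmath23(p3 ) mode in tb@xmath0ti@xmath0o@xmath1 against 13.4 @xmath22 in lu@xmath0ti@xmath0o@xmath1 , as quoted above ) , while @xmath64 is the disorder - induced linewidth ; with these forms noted , the pressure - dependent behaviour continues below .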
the changes seen in the modes p1 , p6 and p7 near 9 gpa are indicative of a structural transition of the tb@xmath0ti@xmath0o@xmath1 lattice . in order to ascertain the structural transition we have performed high pressure x - ray diffraction measurements and the results are discussed below . [ fig:9 ] shows the x - ray diffraction patterns of tb@xmath0ti@xmath0o@xmath1 at a few high pressures . the ( hkl ) values are marked on the corresponding diffraction peaks . as we increase the pressure , we find that the diffraction peaks shift to higher angles but no signature of new peak or peak splitting could be observed . however , the change in lattice parameter with pressure , shown in fig . [ fig:10 ] , shows a change in slope near 9 gpa implying a structural deformation , thus corroborating the transition observed in the raman data . the transition possibly involves just a local rearrangement of the atoms retaining the cubic symmetry of the crystal . fitting the pressure - dependent volume to the third order birch - murnaghan equation of state @xcite , we find that b = 154 gpa and b@xmath17=6.6 when the applied pressure is below 9 gpa . but , when the applied pressure is above this transition pressure , these values change to b = 250 gpa and b@xmath17 = 7.1 thus implying an increment of the bulk modulus by @xmath2 62% after the transition . a similar transition had also been observed @xcite in gd@xmath0ti@xmath0o@xmath1 at @xmath2 9 gpa and was attributed to the tio@xmath66 octahedral rearrangement . it needs to be mentioned here that the pressure transmitting medium ( methanol - ethanol mixture , used in our raman experiments ) remains hydrostatic up to 10 gpa which is close to the transition pressure , thus implying that the possibility of a contribution from non - hydrostaticity of the medium can not be completely ruled out . however , experiments in a non - hydrostatic medium ( water ) has as well revealed the transition at @xmath2 9 gpa in gd@xmath0ti@xmath0o@xmath1 @xcite . we , therefore , believe that the transition near 9 gpa is an intrinsic property of tb@xmath0ti@xmath0o@xmath1 and also that performing this experiment with helium as the pressure transmitting medium , will further strengthen our suggestion of a possible transition at @xmath2 9 gpa . we have performed temperature and pressure - dependent raman and x - ray diffraction studies on pyrochlore tb@xmath0ti@xmath0o@xmath1 and the main results can be summarised as follows : ( 1 ) the phonon frequencies show anomalous temperature dependence , ( 2 ) the linewidths of the @xmath23 and @xmath24 modes near 300 @xmath22 are unusually broad in comparison to those of non - magnetic lu@xmath0ti@xmath0o@xmath1 phonons , thus corroborating the suggestion @xcite of a possible coupling between phonons and crystal field transitions , ( 3 ) intensities of two phonon modes ( p1 and p6 ) decrease to zero as the applied pressure approaches 9 gpa . another raman band p7 near 672 @xmath22 ( @xmath23 ) shows a large change in slope ( @xmath67 ) at @xmath2 9 gpa , thus indicating a possible transition , ( 4 ) x - ray diffraction study as a function of pressure reveals an increase in bulk modulus by @xmath2 62% when the applied pressure is above 9 gpa thus corroborating the transition suggested by raman data . the phonons in tb@xmath0ti@xmath0o@xmath1 show anomalous temperature dependence which has been attributed to the phonon - phonon anharmonic interactions @xcite . 
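as a compact numerical illustration of the quantities used above , the sketch below ( python , illustrative only ) encodes the third - order birch - murnaghan equation of state with the two parameter sets quoted in the text ( b = 154 gpa , b@xmath17 = 6.6 below the transition ; b = 250 gpa , b@xmath17 = 7.1 above it ) and shows how a mode grneisen parameter , gamma = ( b / omega ) * ( domega / dp ) , would be evaluated ; the example phonon frequency and pressure slope are hypothetical placeholders , not values from table - i .

```python
# illustrative sketch: third-order Birch-Murnaghan EOS and a mode Grueneisen parameter.
# the bulk moduli below are the values quoted in the text; the phonon numbers are
# hypothetical placeholders (the paper's values live in Table I, not reproduced here).

def birch_murnaghan_3rd(V, V0, B0, B0p):
    """Pressure P(V) in the same units as B0 for the 3rd-order Birch-Murnaghan EOS."""
    x = (V0 / V) ** (2.0 / 3.0)          # (V0/V)^(2/3)
    return 1.5 * B0 * (x ** 3.5 - x ** 2.5) * (1.0 + 0.75 * (B0p - 4.0) * (x - 1.0))

# parameter sets reported in the text (GPa)
low_pressure_fit = dict(B0=154.0, B0p=6.6)    # below ~9 GPa
high_pressure_fit = dict(B0=250.0, B0p=7.1)   # above ~9 GPa

increase = (high_pressure_fit["B0"] - low_pressure_fit["B0"]) / low_pressure_fit["B0"]
print(f"bulk modulus increase across the transition: {100 * increase:.0f}%")  # ~62%

def mode_grueneisen(omega_cm1, domega_dP_cm1_per_GPa, B_GPa=154.0):
    """Mode Grueneisen parameter gamma_i = (B / omega_i) * (d omega_i / dP)."""
    return (B_GPa / omega_cm1) * domega_dP_cm1_per_GPa

# hypothetical example: a 520 cm^-1 mode hardening at 3 cm^-1/GPa
print(f"example gamma: {mode_grueneisen(520.0, 3.0):.2f}")

# example pressure from the low-pressure EOS at 3% volume compression (V/V0 = 0.97)
print(f"P(V/V0=0.97) = {birch_murnaghan_3rd(0.97, 1.0, **low_pressure_fit):.1f} GPa")
```

the printed ratio reproduces the @xmath2 62% stiffening quoted from the x - ray data , while the grneisen value is only meant to show the arithmetic behind table - i .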
using the required parameters ( @xmath68 , b and @xmath47 ) , derived from our high pressure and temperature - dependent raman and x - ray experiments , we have estimated the contributions of quasiharmonic and anharmonic effects ( fig . [ fig:3 ] ) to the phonon frequencies . we note that the anharmonicity of the mode p1 ( mode near 200 @xmath22 ) is unusually high as compared to other modes . p1 is a phonon mode which does not involve oxygen but includes the vibrations of ti@xmath69 ions @xcite . this can be qualitatively understood by examining how ti@xmath69 and tb@xmath3 ions are coordinated . there are tetrahedra in the unit cell which are occupied by ti@xmath69 ions at the vertices with a vacant 8@xmath70-site inside . the latter will tend to make the vibrational amplitudes of ti@xmath69 ions larger , thus contributing to the high anharmonic nature of the p1 mode . the high anharmonic behavior of the raman modes involving 48@xmath18-oxygen ions arises due to the fact that the o@xmath71 anions are off centered towards the 8@xmath70-vacant site from their ideal position @xmath72 to @xmath73 inside the tetrahedra @xcite whose two vertices are occupied by ti@xmath69 and the other two by tb@xmath3 . here @xmath70 is the lattice parameter and @xmath74 is the o@xmath71 positional parameter . this anharmonicity is reflected in the high root mean squared displacement ( @xmath75 ) of o@xmath71 atoms : @xmath76 @xcite , where @xmath77 is the tb - o bond length . pressure - dependent raman data show that two raman modes , p1 and p6 , cannot be seen above @xmath78 9 gpa and the p7 ( @xmath23 ) raman band shows a significant change in the slope ( @xmath67 ) at @xmath79 . these results suggest a subtle structural deformation which is corroborated by a change in bulk modulus seen in pressure - dependent x - ray experiments . however , the pressure - dependent x - ray data do not reveal any new diffraction peak or splitting of lines . this implies that the structural deformation near 9 gpa , as inferred from the raman study , is a local distortion of the lattice . it may be possible that as pressure increases , due to the vacancies at the @xmath80-sites , the ti@xmath69 ions adjust their local coordinates with a concomitant relocation of other atoms in the lattice . at this point , we would like to recall the results of a neutron scattering experiment on tb@xmath0ti@xmath0o@xmath1 by mirebeau et al . with a simultaneous change in pressure and temperature @xcite . it was seen that at 1.5 k , antiferromagnetic correlations develop in tb@xmath0ti@xmath0o@xmath1 at a pressure of 8.6 gpa . this was attributed to the delicate balance among the exchange coupling , crystal field and dipolar interactions that gets destroyed under high pressure . our high pressure raman and x - ray experiments on tb@xmath0ti@xmath0o@xmath1 suggest a local rearrangement of the atoms near 9 gpa retaining the cubic symmetry which , we believe , may contribute to the antiferromagnetic correlations observed in neutron scattering experiments @xcite . the possibility of a structural transition in tb@xmath0ti@xmath0o@xmath1 at low temperatures has recently been revisited . as discussed in section iii(b ) , ruff et al . @xcite suggested an onset of cubic - to - tetragonal structural fluctuations below 20 k. a simultaneous presence of a cf mode at @xmath2 13 @xmath22 in raman and infrared spectroscopic measurements led lummen et al . @xcite to propose a broken inversion symmetry in tb@xmath0ti@xmath0o@xmath1 at low temperatures .
the authors suggested the presence of a second tb@xmath3 site with different site symmetry at low temperatures . followed by this , curnoe @xcite has proposed that a structural transition can occur at low temperatures with an @xmath81 lattice distortion resulting in a change of the point group symmetry , leaving the cubic lattice unchanged . our raman spectroscopic observations of a transition near 9 gpa may be related to the above discussion and can contribute to the increase in magnetic correlation observed by mirebeau et al . it will be relevant to do high pressure raman experiments at helium temperatures to strengthen our suggestion . to conclude , our raman spectroscopic and x - ray diffraction experiments on single crystals of pyrochlore tb@xmath0ti@xmath0o@xmath1 , as a function of temperature , reveal highly anomalous temperature - dependent phonons attributed to strong phonon - phonon anharmonic interactions . our pressure - dependent raman and x - ray diffraction experiments suggest a local deformation of the pyrochlore lattice near 9 gpa . we believe that our experimental results play an important role in enriching the understanding of pyrochlore titanates , especially the spin - liquid tb@xmath0ti@xmath0o@xmath1 , thus motivating further experimental and theoretical studies on these exotic systems . we thank the indo - french centre for promotion of advanced research ( ifcpar ) , centre franco - indien pour la promotion de la recherche avancée ( cefipra ) for financial support under project no . 3108 - 1 . aks thanks the department of science and technology ( dst ) , india , for partial financial support . j. s. gardner , s. r. dunsiger , b. d. gaulin , m. j. p. gingras , j. e. greedan , r. f. kiefl , m. d. lumsden , w. a. macfarlane , n. p. raju , j. e. sonier , i. swainson , and z. tun , phys . rev . lett . * 82 * , 1012 ( 1999 ) . [ figure captions , truncated by text extraction : raman spectrum of tb@xmath0ti@xmath0o@xmath1 at 27 k , showing the experimental data , the fits to the individual modes and the total fit , with the assignment of the modes p1 to p9 given in the text ( table - i ) ; lattice parameter of tb@xmath0ti@xmath0o@xmath1 versus temperature obtained from the ( 333 ) and ( 004 ) diffraction peaks , with the fit discussed in the text and the derived coefficient of thermal expansion shown in the inset ; linewidths of raman modes of tb@xmath0ti@xmath0o@xmath1 compared with those of the same modes in non - magnetic lu@xmath0ti@xmath0o@xmath1 , with fits as explained in the text ; raman spectra of tb@xmath0ti@xmath0o@xmath1 at ambient and a few high pressures , showing that the modes p1 and p6 disappear above @xmath2 9 gpa and reappear on decompression ; x - ray diffraction patterns of tb@xmath0ti@xmath0o@xmath1 at a few high pressures , with ( h k l ) values marked and with tungsten gasket and cu ( pressure marker ) peaks indicated . ]
we have carried out temperature and pressure - dependent raman and x - ray measurements on single crystals of tb@xmath0ti@xmath0o@xmath1 . we attribute the observed anomalous temperature dependence of phonons to phonon - phonon anharmonic interactions . the quasiharmonic and anharmonic contributions to the temperature - dependent changes in phonon frequencies are estimated quantitatively using mode grneisen parameters derived from pressure - dependent raman experiments and bulk modulus from high pressure x - ray measurements . further , our raman and x - ray data suggest a subtle structural deformation of the pyrochlore lattice at @xmath2 9 gpa . we discuss possible implications of our results on the spin - liquid behaviour of tb@xmath0ti@xmath0o@xmath1 .
SECTION 1. SHORT TITLE; TABLE OF CONTENTS. (a) Short Title.--This Act may be cited as the ``Pool and Spa Safety Act''. (b) Table of Contents.--The table of contents for this Act is as follows: Sec. 1. Short title; table of contents. Sec. 2. Findings. Sec. 3. Federal swimming pool and spa drain cover standard. Sec. 4. State swimming pool safety grant program. Sec. 5. Minimum State law requirements. Sec. 6. Education program. Sec. 7. Definitions. Sec. 8. CPSC report. SEC. 2. FINDINGS. The Congress finds that-- (1) of injury-related deaths, drowning is the second leading cause of death in children aged 1 to 14 in the United States; (2) many children die due to pool and spa drowning and entrapment, such as Virginia Graeme Baker, who at age 7 drowned by entrapment in a residential spa; (3) in 2003, 782 children ages 14 and under died as a result of unintentional drowning; (4) adult supervision at all aquatic venues is a critical safety factor in preventing children from drowning; and (5) research studies show that the installation and proper use of barriers or fencing, as well as additional layers of protection, could substantially reduce the number of childhood residential swimming pool drownings and near drownings. SEC. 3. FEDERAL SWIMMING POOL AND SPA DRAIN COVER STANDARD. (a) Consumer Product Safety Rule.--The provisions of subsection (b) shall be considered to be a consumer product safety rule issued by the Consumer Product Safety Commission under section 9 of the Consumer Product Safety Act (15 U.S.C. 2058). (b) Drain Cover Standard.--Effective 1 year after the date of enactment of this Act, each swimming pool or spa drain cover manufactured, distributed, or entered into commerce in the United States shall conform to the entrapment protection standards of the ASME/ANSI A112.19.8 performance standard, or any successor standard regulating the same. SEC. 4. STATE SWIMMING POOL SAFETY GRANT PROGRAM. (a) In General.--Subject to the availability of appropriations authorized by subsection (e), the Commission shall establish a grant program to provide assistance to eligible States. (b) Eligibility.--To be eligible for a grant under the program, a State shall-- (1) demonstrate to the satisfaction of the Commission that it has a State statute, or that, after the date of enactment of this Act, it has enacted a statute, or amended an existing statute, and provides for the enforcement of, a law that-- (A) except as provided in section 5(a)(1)(A)(i), applies to all swimming pools in the State; and (B) meets the minimum State law requirements of section 5; and (2) submit an application to the Commission at such time, in such form, and containing such additional information as the Commission may require. (c) Amount of Grant.--The Commission shall determine the amount of a grant awarded under this Act, and shall consider-- (1) the population and relative enforcement needs of each qualifying State; and (2) allocation of grant funds in a manner designed to provide the maximum benefit from the program in terms of protecting children from drowning or entrapment, and, in making that allocation, shall give priority to States that have not received a grant under this Act in a preceding fiscal year. 
(d) Use of Grant Funds.--A State receiving a grant under this section shall use-- (1) at least 50 percent of amount made available to hire and train enforcement personnel for implementation and enforcement of standards under the State swimming pool and spa safety law; and (2) the remainder-- (A) to educate pool construction and installation companies and pool service companies about the standards; (B) to educate pool owners, pool operators, and other members of the public about the standards under the swimming pool and spa safety law and about the prevention of drowning or entrapment of children using swimming pools and spas; and (C) to defray administrative costs associated with such training and education programs. (e) Authorization of Appropriations.--There are authorized to be appropriated to the Commission for each of fiscal years 2008 through 2012 $10,000,000 to carry out this section, such sums to remain available until expended. SEC. 5. MINIMUM STATE LAW REQUIREMENTS. (a) In General.-- (1) Safety Standards.--A State meets the minimum State law requirements of this section if-- (A) the State requires by statute-- (i) the enclosure of all residential pools and spas by barriers to entry that will effectively prevent small children from gaining unsupervised and unfettered access to the pool or spa; (ii) that all pools and spas be equipped with devices and systems designed to prevent entrapment by pool or spa drains; (iii) that pools and spas built more than 1 year after the date of enactment of such statute have-- (I) more than 1 drain; (II) 1 or more unblockable drains; or (III) no main drain; and (iv) every swimming pool and spa that has a main drain, other than an unblockable drain, be equipped with a drain cover that meets the consumer product safety standard established by section 3; and (B) the State meets such additional State law requirements for pools and spas as the Commission may establish after public notice and a 30-day public comment period. (2) Use of minimum State law requirements.--The Commission-- (A) shall use the minimum State law requirements under paragraph (1) solely for the purpose of determining the eligibility of a State for a grant under section 4 of this Act; and (B) may not enforce any requirement under paragraph (1) except for the purpose of determining the eligibility of a State for a grant under section 4 of this Act. (3) Requirements to reflect national performance standards and commission guidelines.--In establishing minimum State law requirements under paragraph (1), the Commission shall-- (A) consider current or revised national performance standards on pool and spa barrier protection and entrapment prevention; and (B) ensure that any such requirements are consistent with the guidelines contained in the Commission's publication 362, entitled ``Safety Barrier Guidelines for Home Pools'', the Commission's publication entitled ``Guidelines for Entrapment Hazards: Making Pools and Spas Safer'', and any other pool safety guidelines established by the Commission. (b) Standards.--Nothing in this section prevents the Commission from promulgating standards regulating pool and spa safety or from relying on an applicable national performance standard. (c) Basic Access-Related Safety Devices and Equipment Requirements To Be Considered.--In establishing minimum State law requirements for swimming pools and spas under subsection (a)(1), the Commission shall consider the following requirements: (1) Covers.--A safety pool cover. 
(2) Gates.--A gate with direct access to the swimming pool that is equipped with a self-closing, self-latching device. (3) Doors.--Any door with direct access to the swimming pool that is equipped with an audible alert device or alarm which sounds when the door is opened. (4) Pool alarm.--A device designed to provide rapid detection of an entry into the water of a swimming pool or spa. (d) Entrapment, Entanglement, and Evisceration Prevention Standards To Be Required.-- (1) In general.--In establishing additional minimum State law requirements for swimming pools and spas under subsection (a)(1), the Commission shall require, at a minimum, 1 or more of the following (except for pools constructed without a single main drain): (A) Safety vacuum release system.--A safety vacuum release system which ceases operation of the pump, reverses the circulation flow, or otherwise provides a vacuum release at a suction outlet when a blockage is detected, that has been tested by an independent third party and found to conform to ASME/ANSI standard A112.19.17 or ASTM standard F2387. (B) Suction-limiting vent system.--A suction- limiting vent system with a tamper-resistant atmospheric opening. (C) Gravity drainage system.--A gravity drainage system that utilizes a collector tank. (D) Automatic pump shut-off system.--An automatic pump shut-off system. (E) Drain disablement.--A device or system that disables the drain. (F) Other systems.--Any other system determined by the Commission to be equally effective as, or better than, the systems described in subparagraphs (A) through (E) of this paragraph at preventing or eliminating the risk of injury or death associated with pool drainage systems. (2) Applicable standards.--Any device or system described in subparagraphs (B) through (E) of paragraph (1) shall meet the requirements of any ASME/ANSI or ASTM performance standard if there is such a standard for such a device or system, or any applicable consumer product safety standard. SEC. 6. EDUCATION PROGRAM. (a) In General.--The Commission shall establish and carry out an education program to inform the public of methods to prevent drowning and entrapment in swimming pools and spas. In carrying out the program, the Commission shall develop-- (1) educational materials designed for pool manufacturers, pool service companies, and pool supply retail outlets; (2) educational materials designed for pool owners and operators; and (3) a national media campaign to promote awareness of pool and spa safety. (b) Authorization of Appropriations.--There are authorized to be appropriated to the Commission for each of fiscal years 2008 through 2012 $5,000,000 to carry out the education program authorized by subsection (a). SEC. 7. DEFINITIONS. In this Act: (1) ASME/ANSI standard.--The term ``ASME/ANSI standard'' means a safety standard accredited by the American National Standards Institute and published by the American Society of Mechanical Engineers. (2) ASTM standard.--The term ``ASTM standard'' means a safety standard issued by ASTM International, formerly known as the American Society for Testing and Materials. (3) Barrier.--The term ``barrier'' includes a natural or constructed topographical feature that prevents unpermitted access by children to a swimming pool, and, with respect to a hot tub, a lockable cover. (4) Commission.--The term ``Commission'' means the Consumer Product Safety Commission. 
(5) Main drain.--The term ``main drain'' means a submerged suction outlet typically located at the bottom of a pool or spa to conduct water to a re-circulating pump. (6) Safety vacuum release system.--The term ``safety vacuum release system'' means a vacuum release system capable of providing vacuum release at a suction outlet caused by a high vacuum occurrence due to a suction outlet flow blockage. (7) Unblockable drain.--The term ``unblockable drain'' means a drain of any size and shape that a human body cannot sufficiently block to create a suction entrapment hazard. (8) Swimming pool; spa.--The term ``swimming pool'' or ``spa'' means any outdoor or indoor structure intended for swimming or recreational bathing, including in-ground and above-ground structures, and includes hot tubs, spas, portable spas, and non-portable wading pools. SEC. 8. CPSC REPORT. Within 1 year after the close of each fiscal year for which grants are made under section 4, the Commission shall submit a report to the Congress evaluating the effectiveness of the grant program authorized by that section. Passed the Senate December 6, 2006. Attest: Secretary. 109th CONGRESS 2d Session S. 3718 _______________________________________________________________________ AN ACT To increase the safety of swimming pools and spas by requiring the use of proper anti-entrapment drain covers and pool and spa drainage systems, by establishing a swimming pool safety grant program administered by the Consumer Product Safety Commission to encourage States to improve their pool and spa safety laws and to educate the public about pool and spa safety, and for other purposes.
Pool and Spa Safety Act - (Sec. 3) Requires each swimming pool or spa drain cover manufactured, distributed, or entered into commerce in the United States to conform to specified ASME/ANSI entrapment protection standards. Considers that requirement to be a consumer product safety rule issued by the Consumer Product Safety Commission (CPSC) under certain provisions of the Consumer Product Safety Act. (Sec. 4) Establishes a program of grants to states to: (1) hire and train enforcement personnel; and (2) educate pool construction, installation, and service companies, pool owners and operators, and other members of the public. Conditions grants on a state imposing certain requirements by statute, including: (1) enclosure of residential pools and spas to prevent small children from gaining unsupervised access; and (2) drain entrapment prevention devices and systems on all pools and spas. (Sec. 6) Requires the CPSC to establish and carry out a public education program on methods to prevent drowning and entrapment in pools and spas. Authorizes appropriations.
Exclusive: Source: Beyonce Is Pregnant! Beyonce Knowles better brush up on her lullabies. The 29-year-old singer is pregnant with her first child, the new Us Weekly reports. Despite the happy news, no one was more surprised than the singer herself. "B was shocked. She loves kids, but she wasn't ready to be a mother just yet," says a source of the singer, who married rapper Jay-Z in 2008. "She really wanted to get her album done and tour the world again." Still, another insider says that the singer, who is in her first trimester, realizes that "this is a gift from God and she's so happy." Friends of the couple are already expressing their well-wishes for the parents-to-be. "Jay has been all about family since I met him, and he's always going to be," record executive Kevin Liles, who has known the rapper for years, tells Us. "I wish them the best." Knowles' sister Solange -- and mom to Julez, 6 -- agrees. "She's got the most beautiful heart," she tells Us of her big sis. "She'll be a great mom." For more baby details -- including the celeb pals already lining up to baby-sit; how Knowles plans to juggle a baby and a career; and why Jay-Z persuaded his wife to try for a family sooner -- pick up the new Us Weekly today. ||||| bump watch Coolest Kid in the World May Now Be in Utero Usually, we'd leave a story like this alone until it was more officially confirmed, but it's just too exciting a possibility for us to keep our heads/standards: Us Weekly is reporting that Beyoncé and Jay-Z are pregnant. Obviously, if this is true, this is the best. We feel confident Beyoncé is going to do some amazing things for maternitywear, dance routines performed in maternitywear, and, also, baby names. [Us] ||||| Beyonce's Mom: Pregnancy Rumors Are 'Not True' UPDATE: In an interview airing tomorrow, Beyonce's mother Tina Knowles tells talk show host Ellen Degeneres that the pregnancy rumors are "not true," TMZ is reporting. PREVIOUSLY: Here we go again. Another round of rumors claiming Beyonce is pregnant with her first child are circulating online. Us Weekly 's source claims the pregnancy came as a total surprise to B and her husband, rapper-mogul Jay-Z. "B was shocked. She loves kids, but she wasn't ready to be a mother just yet ... She really wanted to get her album done and tour the world again," the source said. Even record executive Kevin Liles, who has known Jay-Z for years, is speaking out about the couple. "Jay has been all about family since I met him, and he's always going to be," he said. "I wish them the best." But is it true? In March, "rock solid sources" claimed B was expecting. But alas, Beyonce's publicist shot down the "untrue" rumor. The couple, who married in 2008 following a six-year courtship, have been plagued by pregnancy rumors for, well, as long as we can remember.
We're pretty sure it was reported that Beyonce was pregnant on the set of the video for their 2003 duet "'03 Bonnie & Clyde." That's a LONG pregnancy. The last seven or so years, which have been packed with albums and tours and elaborate videos, must have been very uncomfortable for the diva, who is said to be hard at work on her next record with a host of big-name producers.
– Here we go again: Is Beyoncé Knowles expecting a baby with hubby Jay-Z? Us Weekly is convinced she really is pregnant, citing a source who, amusingly, recalls that “B was shocked” to get the news. Even though she “wasn’t ready to be a mother just yet,” says that source, another claims she’s “so happy.” Even so, PopEater notes, Beyoncé has been plagued with pregnancy rumors since 2003—so don’t go buying a baby gift just yet. “Usually, we'd leave a story like this alone until it was more officially confirmed, but it's just too exciting a possibility for us to keep our heads/standards,” writes Willa Paskin in New York. “Obviously, if this is true, this is the best. We feel confident Beyoncé is going to do some amazing things for maternitywear, dance routines performed in maternitywear, and, also, baby names.”
SECTION 1. SHORT TITLE. This Act may be cited as the ``American Land Sovereignty Protection Act of 1996''. SEC. 2. FINDINGS AND PURPOSE. (a) Findings.--Congress finds the following: (1) The power to dispose of and make all needful rules and regulations governing lands belonging to the United States is vested in the Congress under article IV, section 3, of the Constitution. (2) Some Federal land designations made pursuant to international agreements concern land use policies and regulations for lands belonging to the United States which under article IV, section 3, of the Constitution can only be implemented through laws enacted by the Congress. (3) Some international land designations, such as those under the United States Biosphere Reserve Program and the Man and Biosphere Program of the United Nations Scientific, Educational, and Cultural Organization, operate under independent national committees, such as the United States National Man and Biosphere Committee, which have no legislative directives or authorization from the Congress. (4) Actions by the United States in making such designations may affect the use and value of nearby or intermixed non-Federal lands. (5) The sovereignty of the States is a critical component of our Federal system of government and a bulwark against the unwise concentration of power. (6) Private property rights are essential for the protection of freedom. (7) Actions by the United States to designate lands belonging to the United States pursuant to international agreements in some cases conflict with congressional constitutional responsibilities and State sovereign capabilities. (8) Actions by the President in applying certain international agreements to lands owned by the United States diminishes the authority of the Congress to make rules and regulations respecting these lands. (b) Purpose.--The purposes of this Act are the following: (1) To reaffirm the power of the Congress under article IV, section 3, of the Constitution over international agreements which concern disposal, management, and use of lands belonging to the United States. (2) To protect State powers not reserved to the Federal Government under the Constitution from Federal actions designating lands pursuant to international agreements. (3) To ensure that no United States citizen suffers any diminishment or loss of individual rights as a result of Federal actions designating lands pursuant to international agreements for purposes of imposing restrictions on use of those lands. (4) To protect private interests in real property from diminishment as a result of Federal actions designating lands pursuant to international agreements. (5) To provide a process under which the United States may, when desirable, designate lands pursuant to international agreements. SEC. 3. CLARIFICATION OF CONGRESSIONAL ROLE IN WORLD HERITAGE SITE LISTING. Section 401 of the National Historic Preservation Act Amendments of 1980 (16 U.S.C. 
470a-1) is amended-- (1) in subsection (a) in the first sentence, by-- (A) inserting ``(in this section referred to as the `Convention')'' after ``1973''; and (B) inserting ``and subject to subsections (b), (c), (d), (e), and (f)'' before the period at the end; (2) in subsection (b) in the first sentence, by inserting ``, subject to subsection (d),'' after ``shall''; and (3) adding at the end the following new subsections: ``(d) The Secretary of the Interior shall not nominate any lands owned by the United States for inclusion on the World Heritage List pursuant to the Convention unless such nomination is specifically authorized by a law enacted after the date of enactment of the American Land Sovereignty Protection Act of 1996. The Secretary may from time to time submit to the Speaker of the House and the President of the Senate proposals for legislation authorizing such a nomination. ``(e) The Secretary of the Interior shall object to the inclusion of any property in the United States on the list of World Heritage in Danger established under Article 11.4 of the Convention unless-- ``(1) the Secretary has submitted to the Speaker of the House and the President of the Senate a report describing the necessity for including that property on the list; and ``(2) the Secretary is specifically authorized to assent to the inclusion of the property on the list, by a joint resolution of the Congress enacted after the date that report is submitted. ``(f) The Secretary of the Interior shall submit an annual report on each World Heritage Site within the United States to the Chairman and Ranking Minority member of the Committee on Resources of the House of Representatives and the Committee on Energy and Natural Resources of the Senate, that contains the following information for each site: ``(1) An accounting of all money expended to manage the site. ``(2) A summary of Federal full time equivalent hours related to management of the site. ``(3) A list and explanation of all nongovernmental organizations contributing to the management of the site. ``(4) A summary and account of the disposition of complaints received by the Secretary related to management of the site.''. SEC. 4. PROHIBITION AND TERMINATION OF UNITED NATIONS BIOSPHERE RESERVES. Title IV of the National Historic Preservation Act Amendments of 1980 (16 U.S.C. 470a-1 et seq.) is amended by adding at the end the following new section: ``Sec. 403. (a) No Federal official may nominate any lands in the United States for designation as a Biosphere Reserve under the Man and Biosphere Program of the United Nations Educational, Scientific, and Cultural Organization. ``(b) Any designation of an area in the United States as a Biosphere Reserve under the Man and Biosphere Program of the United Nations Educational, Scientific, and Cultural Organization shall not have, and shall not be given, any force or effect, unless the Biosphere Reserve-- ``(1) is specifically authorized by a law enacted after the date of enactment of the American Land Sovereignty Protection Act of 1996 and before December 31, 1999; ``(2) consists solely of lands that on the date of that enactment are owned by the United States; and ``(3) is subject to a management plan that specifically ensures that the use of intermixed or adjacent non-Federal property is not limited or restricted as a result of that designation. 
``(c) The Secretary of State shall submit an annual report on each Biosphere Reserve within the United States to the Chairman and Ranking Minority member of the Committee on Resources of the House of Representatives and the Committee on Energy and Natural Resources of the Senate, that contains the following information for each reserve: ``(1) An accounting of all money expended to manage the reserve. ``(2) A summary of Federal full time equivalent hours related to management of the reserve. ``(3) A list and explanation of all nongovernmental organizations contributing to the management of the reserve. ``(4) A summary and account of the disposition of the complaints received by the Secretary related to management of the reserve.''. SEC. 5. INTERNATIONAL AGREEMENTS IN GENERAL. Title IV of the National Historic Preservation Act Amendments of 1980 (16 U.S.C. 470a-1 et seq.) is further amended by adding at the end the following new section: ``Sec. 404. (a) No Federal official may nominate, classify, or designate any lands owned by the United States and located within the United States for a special or restricted use under any international agreement unless such nomination, classification, or designation is specifically authorized by law. The President may from time to time submit to the Speaker of the House of Representatives and the President of the Senate proposals for legislation authorizing such a nomination, classification, or designation. ``(b) A nomination, classification, or designation of lands owned by a State or local government, under any international agreement shall have no force or effect unless the nomination, classification, or designation is specifically authorized by a law enacted by the State or local government, respectively. ``(c) A nomination, classification, or designation of privately owned lands under any international agreement shall have no force or effect without the written consent of the owner of the lands. ``(d) This section shall not apply to-- ``(1) sites nominated under the Convention on Wetlands of International Importance Especially as Waterfowl Habitat (popularly known as the Ramsar Convention); ``(2) agreements established under section 16(a) of the North American Wetlands Conservation Act (16 U.S.C. 4413); and ``(3) conventions referred to in section 3(h)(3) of the Fish and Wildlife Improvement Act of 1978 (16 U.S.C. 712(2)). ``(e) In this section, the term `international agreement' means any treaty, compact, executive agreement, convention, or bilateral agreement between the United States or any agency of the United States and any foreign entity or agency of any foreign entity, having a primary purpose of conserving, preserving, or protecting the terrestrial or marine environment, flora, or fauna.''. SEC. 6. CLERICAL AMENDMENT. Section 401(b) of the National Historic Preservation Act Amendments of 1980 (16 U.S.C. 470a-1(b)) is amended by striking ``Committee on Natural Resources'' and inserting ``Committee on Resources''.
American Land Sovereignty Protection Act of 1996 - Amends the National Historic Preservation Act Amendments of 1980 to prohibit the Secretary of the Interior from nominating any Federal lands for inclusion on the World Heritage List pursuant to the Convention Concerning the Protection of the World Cultural and Natural Heritage unless such nomination is specifically authorized by law. Authorizes the Secretary to submit proposals for legislation authorizing such a nomination. Requires the Secretary to object to the inclusion of any property in the United States on the list of World Heritage in Danger (established under the Convention) unless the Secretary: (1) has submitted to specified congressional officials a report describing the necessity for such inclusion; and (2) is specifically authorized to assent to the inclusion by a joint resolution of the Congress enacted after the report is submitted. Requires the Secretary to report annually to the Chairman and Ranking Minority Member of specified congressional committees on each World Heritage Site within the United States regarding: (1) an accounting of all money expended to manage the Site; (2) a summary of Federal full time equivalent hours related to its management; (3) a list and explanation of all nongovernmental organizations contributing to such management; and (4) a summary and account of the disposition of complaints received by the Secretary related to it. (Sec. 4) Amends the National Historic Preservation Act Amendments of 1980 to prohibit Federal officials from nominating lands in the United States for designation as a Biosphere Reserve under the Man and Biosphere Program of the United Nations Educational, Scientific, and Cultural Organization. Provides that such designation of an area in the United States shall not have, and shall not be given, any force or effect, unless the Biosphere Reserve: (1) is specifically authorized by a law enacted before December 31, 1999; (2) consists solely of federally-owned lands; and (3) is subject to a management plan that specifically ensures that the use of intermixed or adjacent non-Federal property is not limited or restricted as a result of that designation. Requires the Secretary of State to report annually to the Chairman and Ranking Minority Member of specified congressional committees on each Biosphere Reserve within the United States regarding: (1) an accounting of all money expended to manage the Reserve; (2) a summary of Federal full time equivalent hours related to its management; (3) a list and explanation of all nongovernmental organizations contributing to such management; and (4) a summary and account of the disposition of complaints received by the Secretary related to it. (Sec. 5) Prohibits, under any international agreement, the nomination, classification, or designation of: (1) federally-owned lands located within the United States for a special or restricted use unless authorized by law; (2) State or local government lands unless authorized by State or local law; or (3) privately owned lands without the owner's consent. Provides that such prohibition shall not apply to: (1) sites nominated under the Convention on Wetlands of International Importance Especially as Waterfowl Habitat (popularly known as the Ramsar Convention); (2) agreements established under the North American Wetlands Conservation Act; and (3) conventions referred to in the Fish and Wildlife Improvement Act of 1978.
ovarian cancer has the highest mortality rate of all gynaecologic malignancies in europe and in the united states . this happens primarily because ovarian cancer presents at a relatively advanced stage of the disease . dissemination of ovarian cancer is most common by the intra - peritoneal route so that in the majority of patients disease most frequently remains confined to the peritoneal cavity . nevertheless , ovarian cancer may also metastasize through lymphatic channels ; for this reason a lymph node metastasis classification has been introduced into the figo staging . the prognostic significance of lymph node metastasis(es ) in ovarian cancer is still controversial and researchers have paid most attention to investigating the prognostic impact of para - aortic lymph node metastasis . indeed , very few papers describe extra - abdominal lymph node involvement in this type of tumour [ 5 , 6 ] . positron emission tomography ( pet ) with [ f]fluoro-2-deoxy - d - glucose ( fdg ) has emerged as an extremely useful technique in clinical oncology . a high accuracy of f - fdg pet has been proven for evaluating patients with ovarian tumours , both for the assessment of ovarian neoplasms before treatment and for the evaluation of cancer recurrence [ 7 , 8 ] . recently hybrid pet / computed tomography ( ct ) has been introduced in clinical practice , combining the information derived from the functional imaging of f - fdg pet with the detailed morphological data of ct . preliminary reports indicate a potential usefulness of this technique for the precise identification of cancer spread . in particular , in ovarian cancer f - fdg pet / ct was found to be particularly useful in identifying abdominal recurrence . f - fdg pet / ct allowed us to identify extra - abdominal lymph node involvement in the left supra - clavicular space .
each patient , fasted for at least 6 h , was intravenously injected with 5.3 mbq / kg of f - fdg ; images were recorded 60 - 90 min after tracer administration using a discovery ls - st4 scanner ( general electric medical system , waukesha , wi ) . the ct parameters were as follows : 140 kv , 90 ma , 0.8 s per tube rotation , 30 mm table speed per gantry rotation . multi - slice technology allowed the acquisition of four slices per tube rotation with a thickness of 5 mm . pet studies were performed acquiring 4 min of emission data per bed position and 35 transaxial images were reconstructed for each bed . scaled ct images were used for non - uniform attenuation correction of pet emission scans . a 51-year - old patient was operated on in june 2003 for a pelvic tumour . pre - surgical diagnostic work - up included abdominal - pelvic ct and ultrasound ; a large pelvic mass likely originating from the left ovary was depicted . at laparoscopy a 10 cm mass characterised by a combination of solid and cystic areas with involvement of both annexials was found . the patient was operated on by laparotomy , and total hysterectomy plus bilateral annexial resection and peritoneal washing were performed . histology indicated a poorly differentiated cancer , infiltrating the myometrium ; the peritoneal fluid was positive for cancerous cells . the patient was then scheduled for chemotherapy . before starting chemotherapy , an f - fdg pet / ct scan was performed ; pet / ct revealed multiple sites of increased glycolytic metabolism in lymph nodes : iliac bilaterally , left lumbar , para - caval and also left supra - clavicular / latero - cervical ( figs 1 and 2 ) . the presence of supra - clavicular lymph nodes was confirmed by ultrasound and ct , and the final diagnosis of metastatic ovarian cancer was reached by biopsy . the patient was then treated by chemotherapy ( carboplatin plus taxol plus topotecan ) obtaining a good response ( a reduction of ca125 from 273 to 14 ) . a 65-year - old patient was admitted to our hospital in july 2003 for complete staging and subsequently surgery for a pelvic tumour . ultrasound demonstrated the presence of a pelvic mass with non - homogeneous echogenicity ; abdominal - pelvic ct confirmed the presence of a large mass ( 17 x 12 x 14 cm ) and also showed lymph nodal involvement at para - aortic , para - caval , retro - crural levels .
pet / ct showed an intense radio - tracer uptake in the pelvic mass , and also revealed multiple hypermetabolic lymph nodes in the para - aortic , para - caval , retro - crural and left supra - clavicular regions ( figs 3 and 4 ) . the patient was then operated on by laparotomy , with mass excision and bilateral annexiectomy . ovarian cancer tends physiologically to metastasise via the lymphatic channels , mainly the intra - peritoneal route and understanding of the sites of spread is crucial for disease treatment . lymphatic diffusion of ovarian cancer is commonly retained to involve mainly pelvic lymph nodes and , subsequently , retro - peritoneal lymph nodes . data from the literature concerning extra - abdominal lymph node involvement are very poor . in this respect , it is worth noting that a recent review on this topic did not take into account the prevalence of extra - abdominal positive nodes . conversely , cormio and colleagues reviewed 162 patients with epithelial ovarian carcinoma and reported five cases of extra - abdominal lymphatic spread . interestingly , zang and colleagues described their 10-years of experience of patients with ovarian cancer presenting initially with extra - abdominal metastases and reported a significant number of cases with supra - clavicular and inguinal lymph nodes metastases ( 6 and 5 cases , respectively ) . apart from these two papers , a case report described left internal mammary lymph node involvement in a patient with serious borderline ovarian tumour ; two cases of intra - mammary lymph node metastases were described : in a case of early - staged ovarian cancer and in a case of a borderline papillary ovarian tumour . lastly , we found few cases of axillary lymph nodes calcification in patients with metastatic ovarian carcinoma . these sporadic reports in the literature clearly indicate that extra - abdominal lymph node spread in ovarian cancer is possible although rare . on the basis of our findings , it may be speculated that f - fdg pet / ct is a useful tool and it may be particularly useful to detect unusual extra - abdominal nodal involvement in ovarian cancer . further studies on a higher number of patients are required to elucidate the potential role of f - fdg pet / ct in evaluating lymphatic metastatic spread of ovarian cancer . f - fdg pet / ct scan showing multiple sites of increased fdg uptake in abdominal lymph nodes but also in the left supra - clavicular region . f - fdg pet / ct scan : ( a ) ct image , ( b ) pet image , ( c ) fusion pet / ct image . focal area of highly increased fdg uptake is shown in the left supra - clavicular lymph nodes . f - fdg pet / ct scan revealing increased fdg uptake in the pelvic mass and in multiple abdominal lymph nodes ( para - aortic , para - caval , retro - crural ) , but also in the left supra - clavicular lymph nodes . f - fdg pet / ct scan : coronal images , ( a ) ct image , ( b ) pet image , ( c ) pet / ct fusion image . a focal area of intense fdg uptake is shown in the left supra - clavicular lymph nodes .
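as a small numerical aside to the acquisition protocol described above ( 5.3 mbq / kg injected , imaging 60 - 90 min after administration ) , the sketch below works through the weight - based activity and the physical decay of 18f over the uptake window ; the 70 kg body weight is an assumed example value , not a patient datum from the report , while the 18f half - life of about 109.8 min is a standard physical constant .

```python
# illustrative arithmetic for the protocol quoted above; the patient weight is an
# assumed example, not taken from the case report.
import math

DOSE_MBQ_PER_KG = 5.3        # weight-based activity stated in the text
F18_HALF_LIFE_MIN = 109.77   # physical half-life of 18F (standard constant)

def injected_activity_mbq(weight_kg):
    """Weight-based injected activity in MBq."""
    return DOSE_MBQ_PER_KG * weight_kg

def decayed_activity_mbq(a0_mbq, minutes):
    """Remaining activity after physical decay only (no biological clearance)."""
    return a0_mbq * math.exp(-math.log(2) * minutes / F18_HALF_LIFE_MIN)

a0 = injected_activity_mbq(70.0)            # assumed 70 kg patient
print(f"injected activity: {a0:.0f} MBq")   # 5.3 MBq/kg x 70 kg = 371 MBq
for t in (60, 90):                          # uptake window quoted in the text
    print(f"after {t} min: {decayed_activity_mbq(a0, t):.0f} MBq remaining")
```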
tumoral dissemination of ovarian cancer most commonly occurs through the intra - peritoneal route ; nevertheless , although it is rare , ovarian cancer may also metastasise through the lymphatic channels . lymphatic diffusion of ovarian cancer usually involves pelvic and retro - peritoneal lymph nodes . extra - abdominal lymph nodes are rarely involved and their detection may represent a challenge for the oncologist . we describe here two patients studied for ovarian cancer by [ 18f]fluoro-2-deoxy - d - glucose ( 18f - fdg ) positron emission tomography ( pet)/computed tomography ( ct ) : one case during pre - operative staging , the other for restaging after surgery . in both cases pet examination identified extra - abdominal lymph node tumoral spread in the left supra - clavicular space ; biopsy led to a final diagnosis of recurrent ovarian cancer . previous reports in the literature on tumoral spread of ovarian cancer to the supra - clavicular nodes are rare , however this possible site of metastatic involvement has to be kept in mind by oncologists and our data show that the 18f - fdg pet / ct may be useful to disclose this unusual supra - diaphragmatic lymphatic diffusion of metastatic lymphatic ovarian cancer .
SECTION 1. SHORT TITLE. This Act may be cited as the ``NIST Grants for Energy Efficiency, New Job Opportunities, and Business Solutions Act of 2010'' or the ``NIST GREEN JOBS Act of 2010''. SEC. 2. FINDINGS. Congress finds the following: (1) Over its 20-year existence, the Hollings Manufacturing Extension Partnership Program has proven its value to manufacturers as demonstrated by the resulting impact on jobs and the economies of all 50 States and the Nation as a whole. (2) The Hollings Manufacturing Extension Partnership Program has helped thousands of companies reinvest in themselves through process improvement and business growth initiatives leading to more sales, new markets, and the adoption of technology to deliver new products and services. (3) Manufacturing is an increasingly important part of the construction sector as the industry moves to the use of more components and factory built sub-assemblies. (4) Construction practices must become more efficient and precise if the United States is to construct and renovate its building stock to reduce related carbon emissions to levels that are consistent with combating global warming. (5) Many companies involved in construction are small, without access to innovative manufacturing techniques, and could benefit from the type of training and business analysis activities that the Manufacturing Extension Partnership routinely provides to the Nation's manufacturers and their supply chains. (6) Broadening the competitiveness grant program under section 25(f) of the National Institute of Standards and Technology Act (15 U.S.C. 278k(f)) could help develop and diffuse knowledge necessary to capture a large portion of the estimated $100 billion dollars or more in energy savings if buildings in the United States met the level and quality of energy efficiency now found in buildings in certain other countries. (7) It is therefore in the national interest to expand the capabilities of the Manufacturing Extension Partnership to be supportive of the construction and green energy industries. SEC. 3. NATIONAL INSTITUTE OF STANDARDS AND TECHNOLOGY COMPETITIVE GRANT PROGRAM. (a) In General.--Section 25(f)(3) of the National Institute of Standards and Technology Act (15 U.S.C. 278k(f)(3)) is amended-- (1) by striking ``to develop'' in the first sentence and inserting ``to add capabilities to the MEP program, including the development of''; and (2) by striking the last sentence and inserting ``These themes-- ``(A) shall be related to projects designed to increase the viability both of traditional manufacturing sectors and other sectors, such as construction, that increasingly rely on manufacturing through the use of manufactured components and manufacturing techniques, including supply chain integration and quality management; ``(B) shall be related to projects related to the transfer of technology based on the technological needs of manufacturers and available technologies from institutions of higher education, laboratories, and other technology producing entities; and ``(C) may extend beyond these traditional areas to include projects related to construction industry modernization.''. (b) Selection.--Section 25(f)(5) of the National Institute of Standards and Technology Act (15 U.S.C. 278k(f)(5)) is amended to read as follows: ``(5) Selection.--Awards under this section shall be peer reviewed and competitively awarded. 
The Director shall endeavor to select at least one proposal in each of the 9 statistical divisions of the United States (as designated by the Bureau of the Census). The Director shall select proposals to receive awards that will-- ``(A) create jobs or train newly hired employees; ``(B) promote technology transfer and commercialization of environmentally focused materials, products, and processes; ``(C) increase energy efficiency; and ``(D) improve the competitiveness of industries in the region in which the Center or Centers are located.''. (c) Other Modifications.--Section 25(f) of the National Institute of Standards and Technology Act (15 U.S.C. 278k(f)) is amended-- (1) by adding at the end the following: ``(7) Duration.--Awards under this section shall last no longer than 3 years. ``(8) Eligible participants.--In addition to manufacturing firms eligible to participate in the Centers program, awards under this subsection may be used by the Centers to assist small or medium-sized construction firms. ``(9) Authorization of appropriations.--In addition to any amounts otherwise authorized or appropriated to carry out this section, there are authorized to be appropriated to the Secretary of Commerce $7,000,000 for each of the fiscal years 2011 through 2014 to carry out this subsection.''.
NIST Grants for Energy Efficiency, New Job Opportunities, and Business Solutions Act of 2010 or the NIST GREEN JOBS Act of 2010 - Amends the National Institute of Standards and Technology Act to require the themes under the competitive grant program within the Regional Centers for the Transfer of Manufacturing Technology program to be related to projects: (1) designed to increase the viability both of traditional manufacturing sectors and other sectors, such as construction, that increasingly rely on manufacturing through the use of manufactured components and manufacturing techniques, including supply chain integration and quality management; and (2) related to the transfer of technology based on the technological needs of manufacturers and available technologies from institutions of higher education, laboratories, and other technology producing entities. Authorizes such themes to extend beyond such areas to include projects related to construction industry modernization. Revises the selection criteria for such grants. Requires the Director of the National Institute of Standards and Technology (NIST) to: (1) endeavor to select at least one proposal in each of the nine statistical divisions of the United States (as designated by the Bureau of the Census) for a grant; and (2) award grants to proposals that will create jobs or train newly hired employees, promote technology transfer and commercialization of environmentally focused materials, products, and processes, increase energy efficiency, and improve the competitiveness of industries in regions in which the Centers are located. Limits award duration to three years. Authorizes awards to be used by Centers to assist small or medium-sized construction firms. Authorizes appropriations for FY2011-FY2014.
the fundamental significance of the opening of the new `` window '' of high energy @xmath2 astronomy has already been remarkably successful . this discussion will only consider high energy ( @xmath3 gev ) neutrinos . ] for the observation of the universe is beyond discussion . neutrinos have properties that are profoundly different from those of photons , and observations with this new `` messenger '' will allow us to develop a deeper understanding of known astrophysical objects , and also likely lead to the discovery of new unexpected classes of sources , in the same way as the opening of each new window in photon wavelength has lead to remarkably interesting discoveries . if there are few doubts that in the future neutrino astronomy will mature into an essential field of observational astrophysics , it is less clear how long and difficult the history of this development will be , and in particular what will be the scientific significance of the results of the planned km@xmath0 telescopes . these telescopes are `` discovery '' instruments and only _ a posteriori _ , when their data analysis is completed , we will be able to appreciate their scientific importance . it is however interesting to attempt an estimate , based on our present knowledge , of the expected event rates and number of detectable sources . as a warning , it can be amusing to recall that when ( in june 1962 ) bruno rossi , riccardo giacconi and their colleagues flew the first @xmath4ray telescope @xcite opening the x ray photon window to observations , the most promising @xmath4ray source was the sun , followed by the moon ( that could scatter the solar wind ) . it is now known that the moon surface does indeed emit x rays , but in fact it appears as a dark shadow in the @xmath4ray sky , because it eclipses the emission of the ensemble of active galactic nuclei ( agn ) that can now be detected with a density of @xmath5 per square degree @xcite . so the @xmath4ray sky was very generous to the observers , and embarassed the theorists who had worked on _ a priori _ predictions . the lesson here is that predictions about the unknown are difficult . one should expect the unexpected , and it is often wise to take ( calculated ) scientific risks . we all hope that history will repeat itself , and that also the @xmath2 sky will be generous to the brave scientists who , with great effort , are constructing the new telescopes . on the other hand , we are in the position to make some rather well motivated predictions about the intensity of the astrophysical @xmath2 sources , because high energy @xmath2 production is intimately related to @xmath1ray emission and cosmic ray ( c.r . ) production , and the observations of cosmic rays and high energy ( gev and tev range ) photons do give us very important guidance for the prediction of the neutrino fluxes . this review will concentrate only on high energy astrophysics . there are other important scientific topics about km@xmath0 telescopes that will not be covered here . a subject of comparable importance is `` indirect '' search for cosmological dark matter via the observations of high energy @xmath2 produced at the center of the sun and the earth by the annihilation of dm particles @xcite . the km@xmath0 telescopes can also be used to look for different `` new physics '' effects such as the existence of large extra dimensions @xcite . the instruments also have the potential to perform interesting interdisciplinary studies . 
the observable neutrino flux can be schematically written as the sum of several components : @xmath6 & + & \phi_{\rm galactic } ( e , \omega ) + \phi_{\rm extra~gal } ( e , \omega ) \nonumber \\[0.2 cm ] & + & \sum_{\rm galactic } \phi_j ( e ) ~ \delta [ \omega - \omega_j ] \nonumber \\[0.2 cm ] & + & \sum_{\rm extra~gal } \phi_k ( e ) ~ \delta [ \omega - \omega_k ] \nonumber\end{aligned}\ ] ] the first two components describe atmospheric neutrinos that are generated in cosmic ray showers in the earth s atmosphere . they are an important foreground to the more interesting observations of the astrophysical components . atmospheric @xmath2 s can be split into two components , the first `` standard '' one is due to the decay of charged pions and kaons , while the second one is due to the weak decays of short lived ( hence `` prompt '' ) particles containing heavy quarks , with charmed particles accounting for essentially all of the flux . the prompt contribution is expected to be dominant in the atmospheric neutrino fluxes at high energy . this component has not yet been identified , and its prediction is significantly more difficult than the standard flux , because of our poor knowledge of the dynamics of charmed particles production in hadronic interactions . , @xmath7 and @xmath8 show the angle averaged fluxes of atmospheric neutrinos and anti neutrinos as a function of energy . the associated dashed lines are the fluxes neglecting the `` prompt '' contribution of charmed hadrons . the lines labeled @xmath2gzk describes gzk neutrinos @xcite . the points ( and fitted lines ) describe cr direct and indirect measurements . the horizontal lines labeled amanda and icecube are the current limit and the predicted sensitivity of km@xmath0 @xmath2 telescopes for an isotropic @xmath2 flux . the line labeled anita lite is the limit on the @xmath2 flux obtained with radio methods @xcite . the isotropic @xmath1ray flux measured by egret @xcite is also shown . [ fig : cr_nu ] , width=196 ] figure [ fig : cr_nu ] shows the angle averaged atmospheric @xmath2 fluxes @xcite with an estimate of the `` prompt '' component @xcite that `` overtakes '' the standard one at @xmath9 tev for @xmath10 and @xmath11 tev for @xmath7 . the next two components of the neutrino fluxes are `` diffuse '' fluxes coming from @xmath2 production in interstellar space in our own galaxy , and in intergalactic space . the diffuse galactic emission is due to the interaction of cosmic rays ( confined inside the milky way by the galactic magnetic fields ) with the gas present in interstellar space . the angular distribution of this emission is expected to be concentrated in the galactic plane , very likely with a distribution similar to the one observed for gev photons by egret @xcite . the extragalactic diffuse emission is dominated by the decay of pions created in @xmath12 interactions by ultra high energy protons ( @xmath13 ev ) interacting with the 2.7k cosmic radiation . these `` gzk '' neutrinos are present only at very high energy ( with the flux peaking at @xmath14 ev ) . together with the diffuse fluxes one expects the contribution of an ensemble of point like ( or quasi point like ) sources of galactic and extragalactic origin . neutrinos travel along straight lines and allow the imaging of these sources . 
it is expected that most of the extragalactic sources will not be resolved , and therefore the ensemble of the extragalactic point sources ( with the exception of the closest and brightest sources ) will appear as a diffuse , isotropic flux that can in principle be separated from the atmospheric @xmath2 foreground because of a different energy spectrum , and flavor composition . in the standard mechanism for the production of high energy @xmath2 in astrophysical sources a population of relativistic hadrons ( that is cosmic rays ) interacts with a target ( gas or radiation fields ) creating weakly decaying particles ( mostly @xmath15 , and kaons ) that produce @xmath2 in their decay . the energy spectrum of the produced neutrinos obviously reflects the spectrum of the parent cosmic rays . a well known consequence of the approximate feynman scaling of the hadronic interaction inclusive cross sections is the fact that if the parent c.r . have a power law spectrum of form @xmath16 , and their interaction probability is energy independent diffuse in different ways inside the source and have different space distributions . for a non homogeneous target , this can be reflected in an energy dependent interaction probability . ] the @xmath2 spectrum , to a good approximation , is also a power law of the same slope . the current favored models for 1st order fermi acceleration of charged hadrons near astrophysical shocks predict a generated spectrum with a slope @xmath17 . this expectation is in fact confirmed by the observations of young snr by hess , and leads the expectations that astrophysical @xmath2 sources are also likely to have power law spectra with slope close to 2 . such a slope is also predicted in gamma rays bursts models @xcite where relativistic hadrons interact with a power law photon field . it is important to note that the high energy cutoff of the parent c.r . distribution is reflected in a much more gradual steepening of the @xmath2 spectrum for @xmath18 . the hadronic interactions that are the sources of astrophysical neutrinos also create a large number of @xmath19 and @xmath20 particles that decay in a @xmath21 mode generating high energy @xmath1rays . in general , it is possible that these photons are absorbed inside the source , and the energy associate with them can emerge at lower frequency , however for a transparent source the relation between the photon and neutrino fluxes is remarkably robust . for a power law c.r . spectrum , the @xmath1rays are also created with a power law spectrum of the same slope . the approximately constant @xmath22 ratio is shown in fig . [ fig : ratio ] as a function of the slope @xmath23 . between the neutrino and photon fluxes at @xmath24 tev , for a power law c.r . population of slope @xmath23 and p / n ratio of 10% interacting with a low density gas target . photon absorption is neglected . the calculation uses the sibyll @xcite hadronic interaction model . the thick line sums over all neutrino type . the @xmath2 flavor is calculated at the source before the inclusion of oscillations . [ fig : ratio ] , width=194 ] the @xmath22 ratio is approximately constant in energy when @xmath25 is much smaller than the c.r . energy cutoff , and depends only weakly on the hadronic interaction model . at the source , the relative importance of the different @xmath2 types is : @xmath26 where @xmath27 depends on the slope of the spectrum , the relative importance of protons and heavy nuclei in the c.r . parent population , and the nature of the c.r . 
target ( gas or radiation field ) . the ratio @xmath28 at the source is a well known consequence of the fact that the chain decay of a charged pion : @xmath29 followed by @xmath30 generates two @xmath2 of @xmath31flavor and one of @xmath32type . the presence of a @xmath7 and an @xmath33 in the final state insures that the ratio @xmath34 , while the ratio @xmath35 is controled by the relative importance of @xmath36 over @xmath37 production that is not symmetric for a @xmath38 rich c.r . population . measurements of solar and atmospheric neutrinos have recently established the existence of neutrino oscillations , a quantum mechanical phenomenon that is a consequence of the non identity of the @xmath2 flavor \{@xmath10 , @xmath7 , @xmath8 } and mass \{@xmath39 , @xmath40 , @xmath41 } eigenstates . a @xmath2 created with energy @xmath25 and flavor @xmath23 can be detected after a distance @xmath42 with a different flavor @xmath43 with a probability that depends periodically on the ratio @xmath44 . this probability oscillates according to three frequencies that are proportional to the difference between the @xmath2 squared masses @xmath45 , and different amplitudes related to the mixing matrix @xmath46 that relates the flavor and mass eigenstates . the shortest ( longest ) oscillation length , corresponding to the largest ( smallest ) @xmath47 ) can be written as : @xmath48 these lengths are long with respect to the earth s radius ( @xmath49 cm ) , but are very short with respect to the typical size of astrophysical sources . therefore oscillations are negligible for atmospheric @xmath2 above @xmath50 tev , but can be safely averaged for ( essentially all ) astrophysical neutrinos . after space averaging , the oscillation probability matrix can be written as : @xmath51 where we have used the best fit choice for the mixing parameters ( @xmath52 , @xmath53 , @xmath54 ) . the most important consequence of ( [ eq : probosc - ave ] ) is a robust prediction , valid for essentially all astrophysical sources , for the flavor composition of the observable @xmath2 signal : @xmath55 the tev energy range ( the highest avaliable ) for photons is the most interesting one for neutrino astronomy , because the @xmath2 signal for the future telescopes is expected to be dominated by neutrinos of one ( or a few ) order(s ) of magnitude higher energy . recently the new cherenkov @xmath1ray telescopes have obtained remarkable results , and the catalogue of high energy gamma ray sources has dramatically increased . of particular importance has been the scan of the galactic plane performed by the hess telescope @xcite , because for the first time a crucially important region of the sky has been observed with an approximately uniform sensitivity with tev photons . the three brightest galactic tev sources detected by the hess telescope have integrated fluxes above 1 tev ( in units of @xmath56 ( @xmath57 s)@xmath58 ) of approximately 2.1 ( crab nebula ) , 2.0 ( rx j1713.73946 ) and 1.9 ( vela junior ) the fundamental problem in the interpretation of the @xmath1ray sources is the fact that it is not known if the observed photons have hadronic ( @xmath19 decay ) or leptonic ( inverse compton scattering of relativistic electrons on radiation fields ) origin . if the leptonic mechanism is acting , the hadronic component is poorly ( or not at all ) constrained , and the @xmath2 emission can be much smaller than the @xmath1ray flux . 
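The space-averaged oscillation matrix and the 1:2:0 to 1:1:1 flavour conversion described earlier in this passage can be verified with a short numerical check. The sketch below is a minimal illustration assuming representative mixing angles (theta12 ~ 33 deg, theta23 = 45 deg, theta13 ~ 0, CP phase neglected); the exact best-fit values used in the text are not visible in this extraction, so these numbers are assumptions.

```python
import numpy as np

# Assumed mixing parameters (illustrative "best-fit" style values; the
# exact numbers used in the text are not reproduced here):
theta12 = np.radians(33.0)
theta23 = np.radians(45.0)
theta13 = np.radians(0.0)    # set to zero for simplicity

s12, c12 = np.sin(theta12), np.cos(theta12)
s23, c23 = np.sin(theta23), np.cos(theta23)
s13, c13 = np.sin(theta13), np.cos(theta13)

# PMNS mixing matrix (rows = flavours e, mu, tau; CP phase neglected)
U = np.array([
    [ c12*c13,                s12*c13,               s13     ],
    [-s12*c23 - c12*s23*s13,  c12*c23 - s12*s23*s13, s23*c13 ],
    [ s12*s23 - c12*c23*s13, -c12*s23 - s12*c23*s13, c23*c13 ],
])

# Space-averaged oscillation probability: P_ab = sum_i |U_ai|^2 |U_bi|^2
P = (U**2) @ (U**2).T
print("averaged P(alpha -> beta):\n", np.round(P, 3))

# Pion/muon decay chain at the source: nu_e : nu_mu : nu_tau = 1 : 2 : 0
source = np.array([1.0, 2.0, 0.0])
at_earth = source @ P
print("flavour ratio at Earth:", np.round(at_earth / at_earth.sum() * 3, 3))
```

With theta23 maximal and theta13 small, the averaging washes out the initial excess of muon neutrinos, which is why the 1:1:1 prediction for the observable flavour composition is so robust.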
for the hadronic mechanism , the @xmath2 flux is at least as large as the photon flux , and higher if the @xmath1rays in the source are absorbed ( see sec . [ sec : nuhadgam ] ) . the @xmath1ray tev sources , belong to several different classes . the crab is a pulsar wind nebula , powered by the spin down by the central neutron star . the emission from these objects is commonly attributed to leptonic processes , and in particular the crab is well described by the self synchrotron comptons model ( ssc ) . the next two brightest sources @xcite are young supernova remnants ( snr ) , and there are good reasons to believe that the photon emission from these objects is of hadronic origin . the @xmath1 spectra are power law with a slope @xmath592.2 which is consistent with the expectation of the spectra of hadrons accelerated with 1st order fermi mechanism by the sn blast wave . the extrapolation from the photon to the neutrino flux is then robust , the main uncertainty being the possible presence of a high energy cutoff in the spectrum . other tev sources that are very promising for @xmath2 astronomy are the galactic center @xcite with a measured flux @xmath60 ( same units : @xmath56 ( @xmath57 s)@xmath58 ) and the micro quasar ls5039 @xcite , the weakest observed tev source with a flux @xmath61 . these sources are not particularly bright in tev @xmath1rays , but there are reasons to believe that they could have significant internal absorption for photons , and therefore have strong @xmath2 emission . in general , under the hypothesis of ( i ) hadronic emission and ( ii ) negligible @xmath1 absorption ( that correspond to @xmath62 ) , even we assume that the @xmath1 and @xmath2 spectra extends as a power law up to 100 tev or more , the hess sources are just at ( or below ) the level of sensitivity of the new @xmath2 telescopes as will be discussed in the next section . for extra galactic sources the constraints from tev photon observations are less stringent because very high energy photons are severely absorbed over extra galactic distances due to @xmath63 interactions on the infrared photons ( from redshifted starlight ) that fill intergalactic space . active galactic nuclei ( agn ) are strongly variables emitters of high energy radiation . the egret detector on the cgro satellite has detected over 90 agn of the blazar class , where @xmath64 is the bulk lorentz factor of the jet ) to the line of sight . ] with @xmath65 mev @xcite . the brightest agn sources in the present tev @xmath1ray catalogue , are the nearby ( @xmath66 ) agn s mkn 421 and 501 that are strongly variable on all the time scales considered from few minutes to years . their average flux can be , during some periods of time , several times the crab . the extrapolation to the @xmath2 flux has significant uncertainties because the origin ( hadronic or leptonic ) of the detected @xmath1rays is not established . leading candidates as sources of high energy neutrinos are also gamma ray bursts ( grb ) @xcite . it is possible that individual grb emits neutrino fluences that are sufficiently high to give detectable rates in the neutrino telescopes . the most promising technique for the detection of neutrino point sources is the detection of @xmath2induced muons ( @xmath67 ) . these particles are produced in the charged current interactions of @xmath7 and @xmath68 in the matter below the detector . 
to illustrate this important point we can consider a `` reference '' @xmath2 point source point source as the flux ( summed over all @xmath2 types ) above a minimum energy of @xmath69 tev . the reason for this choice is that it allows an immediate comparison with the the sources measured by tev @xmath1ray telescopes , that are commonly stated as flux above @xmath70 tev . in case of negligible @xmath1 absorption one has @xmath71 . since we consider power law fluxes , it is trivial to restate the normalization in other forms . as discussed later the km@xmath0 telescopes sensitivity peaks at @xmath72 tev . ] with an unbroken power law spectrum of slope @xmath73 , and an absolute normalization ( summing over all @xmath2 types ) @xmath74 ( @xmath57 s)@xmath58 . the reference source flux corresponds to approximately one half the flux of the two brightest snr detected by hess . the event rates from the reference source of @xmath2 interactions with vertex in the detector volume , and for @xmath2induced muons are shown in fig . [ fig : point1 ] , point source . the source has spectrum @xmath75 with slope @xmath76 and normalization @xmath77 ( @xmath57 s)@xmath58 equaly divided among 6 @xmath2 types . the thin lines describe the rate of @xmath2interactions in the detector volume for @xmath32 , @xmath31 and @xmath78 like cc interactions . the thick lines are fluxes of @xmath2induced muons with 1 gev and 1 tev threshold . [ fig : point1 ] , width=196 ] like cc interactions ( thin lines ) and @xmath2induced muons with a 1 gev threshold ( thick lines ) . the source is the same as in fig . [ fig : point1 ] ) . the solid ( dashed ) curve are for no ( maximum ) absorption in the earth . [ fig : point3 ] , width=196 ] the event rates for @xmath32 , @xmath31 and @xmath78 like @xmath2 interactions are : 10.3 , 9.6 and 2.9 events/(km@xmath0 yr ) ( assuming a water filled volume ) , while the @xmath2induced muon flux is @xmath79 ( km@xmath80 yr)@xmath58 . the signal depends on the zenith angle of the source , because of @xmath2 absorption in the earth as shown in fig . [ fig : point3 ] . the rate of @xmath32like events has a small contribution from the `` glashow resonance '' ( the process @xmath81 ) , visible as a peak at @xmath82 gev . neglecting the earth absorption the resonance contributes a small rate @xmath83 ( km@xmath0 yr)@xmath58 to the source signal , however this contribution quickly disappears when the source drops much below the horizon , because of absorption in the earth . the @xmath67 rate is much easier to measure and to disentangle from the atmospheric @xmath2 foreground . a crucial advantage is that the detected muon allows a high precision reconstruction of the @xmath2 direction , because the angle @xmath84 is small . in general the relation between the neutrino and the @xmath2induced muon fluxes is of order : @xmath85 ~ ( { \rm km}^2 \ ; { \rm yr})^{-1}\ ] ] the exact relation ( shown , when we consider a spectrum with no cutoff . the calculation in @xcite underestimates the effect of a cutoff in the proton parent spectrum . ] in fig . [ fig : signal_mu ] ) induced muons , calculated for a @xmath2 point source with a power law spectrum of slope @xmath23 , plotted as a function of the high energy cutoff of the parent proton spectrum . the thick ( thin ) lines are calculated for a threshold @xmath86 gev ( 1 tev ) . the absolute normalization is chosen so that in the absence of cutoff the total @xmath2 flux above 1 tev has a flux @xmath87 ( @xmath57 s)@xmath58 . 
[ fig : signal_mu ] , width=196 ] depends on ( i ) the slope of the @xmath2 spectrum , ( ii ) the presence of a high energy cutoff , ( iii ) the threshold energy used for @xmath31 detection and ( iv ) ( more weakly ) on the zenith angle of the source ( because of absorption effects ) . the peak of the `` response '' curve for the @xmath2induced muons ( see fig . [ fig : point1 ] ) is at @xmath88 tev ( 20 tev for a threshold of 1 tev for the muons ) , with a total width extending approximately two orders of magnitude . in other words the planned @xmath2 telescopes should be understood mostly as telescopes for @xmath89100 tev neutrino sources . this energy range is reasonably well `` connected '' to the observations of the atmospheric cherenkov @xmath1-ray observations that cover the 0.110 tev range in @xmath90 , and the @xmath2 sources observable in the planned km@xmath0 telescopes are likely to be appear as bright objects for tev @xmath1ray instruments . because of the background of atmospheric muons , the @xmath2induced @xmath31 s are only detectable when the source is below the horizon this reduces the sky coverage of a telescope as illustrated in fig . [ fig : signal2 ] , that gives the time averaged signal obtained by two detectors , placed at the south pole and in the mediterranean sea , when the reference source we are discussing is placed at different celestial declinations . for a south pole detector the source remains at a fixed zenith angle @xmath91 , and the declination dependence of the rate is only caused by difference in @xmath2 absorption in the earth for different zenith angles . the other curves includes the effect of the raising and setting of the source below the horizon . some of the most promising galactic sources are only visible for a neutrino telescope in the northern hemisphere since the galactic center is at declination @xmath92 . the interesting source rx j1713.73946 is at @xmath93 , vela junior at @xmath94 . point source of fixed luminosity , plotted as a function of declination for two @xmath2 telescopes placed at the south pole ( thick line ) and in the mediterranean sea ( thin line ) . the flux is considered detectable only when the source is below the horizon . the effects of the ( variable ) absorption in the earth is taken into account . [ fig : signal2 ] , width=196 ] since the prediction of the signal size for from the ( expected ) brightest @xmath2 sources is of only few events per year , it is clearly essential to reduce all sources of background to a very small level . this is a possible but remarkably difficult task . the background problem is illustrated in fig . [ fig : comp_mu ] that shows the energy spectrum of the muon signal from our `` reference '' point source , comparing it with the spectrum of the atmospheric @xmath2 foreground integrated in a small cone of semi angle 0.3@xmath95 . the crucial point is that the energy spectrum from astrophysical sources is harder than the atmospheric @xmath2 one , with a median energy of approximately 1 tev , an order of magnitude higher . it is for this reason that for the detection of astrophysical neutrinos it is planned to use an `` offline '' threshold of @xmath96 tev . the atmospheric background above this threshold is small but still potentially dangerous , it depends on the zenith angle and is maximum ( minimum ) for the horizontal ( vertical ) direction at the level of 4 ( 1 ) @xmath31/(km@xmath80 yr ) . induced muons . 
the thin lines are the atmospheric @xmath2 background integrated in a cone of semi angle @xmath97 ( dashed lines do not consider oscillations ) . the thick lines are for a point source ( @xmath73 , @xmath77 ( @xmath57 s)@xmath58 . the high ( low ) line is for horizontal ( vertical ) muons . [ fig : comp_mu ] , width=196 ] the angular window for the integration of the muon signal is determined by three factors : ( i ) the angular shape of the source , ( ii ) the intrinsic angle @xmath84 , and ( iii ) the angular resolution of the instrument . the source dimension can be important for the galactic sources , in fact the tev @xmath1ray sources have a finite size , in particular the snr rx j1713.73946 has a radius @xmath98 , and vela junior is twice as large . the detailed morphology of these sources measured by the hess telescope indicates that most of the emission is coming from only some parts of the shell ( presumably where the gas density is higher ) , and the detection of @xmath2 emission from these sources could require the careful selection of the angular region of the @xmath1ray signal . the distribution of the @xmath84 angle defines the minimum angular dimension of a perfect point source signal . this is shown in fig . [ fig : ang0 ] that plots the semi angle of the cone that contains 50 , 75 or 90% of such a signal ( for a @xmath99 spectrum ) as a function of the muon energy . the size the muon signal shrinks with increasing energy @xmath100 . this is easily understood noting that the dominant contribution to @xmath84 is the muon neutrino scattering angle at the @xmath2interaction point : @xmath101 ( @xmath102 is the muon energy at the interaction point ) , and expanding for small angle one finds @xmath103 . power law flux of slope @xmath99 . the curves give the semi angle angle of the cone that ( for a fixed value of @xmath104 ) contains 50% , 75% and 90% of the signal . the thick line gives the cone that maximize the @xmath105 . [ fig : ang0 ] , width=196 ] the 50% containement cone angle shrinks to @xmath106 for @xmath107 tev that is probably smaller than the experimental resolution . qualitatively the experimental strategy for the maximization of the point source sensitivity is clear : ( i ) selection of high energy @xmath108 tev muons , and ( ii ) selection of the narrowest angular window compatible with the experimental resolution and the signal angular size . these cuts will reduce the total signal by at least a factor of 2 . the optimization of the cuts for signal extraction is a non trivial problem , that we can not discuss here . in principle one would like to maximize the quantity @xmath105 ( where @xmath109 and @xmath110 are respectively the signal and background event numbers ) . for the angular window ( where @xmath111 ) this problem has a well defined solution that is shown in fig . [ fig : ang0 ] , that corresponds to the choice of a window that contains approximately 50% of the signal . this is the numerically obtained generalization of the well known fact that for a gaussian angular resolution of width @xmath112 , @xmath105 is maximized choosing the angular window @xmath113 that contains a fraction 0.715 of the signal . for the energy cut , the quantity @xmath105 grows monotonically when the threshold energy is increased , and therefore the optimum is determined by the condition that the background approaches zero . 
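The quoted Gaussian-resolution result (the S/sqrt(B)-optimal cone contains about 0.715 of the signal) is easy to verify numerically. The following is a minimal sketch, assuming a two-dimensional Gaussian point-spread function and a background that grows in proportion to the solid angle of the cone.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Signal fraction inside a cone of radius r for a 2D Gaussian PSF of width sigma:
#   f(r) = 1 - exp(-r^2 / (2 sigma^2))
# Background events grow like the solid angle, i.e. ~ r^2, so the figure of merit
#   S / sqrt(B) ~ f(r) / r      (sigma set to 1 without loss of generality)
signal_fraction = lambda r: 1.0 - np.exp(-0.5 * r**2)
neg_merit       = lambda r: -signal_fraction(r) / r

res = minimize_scalar(neg_merit, bounds=(0.1, 5.0), method="bounded")
r_opt = res.x
print(f"optimal cone radius : {r_opt:.3f} sigma")
print(f"signal containment  : {signal_fraction(r_opt):.3f}")
```

The optimum comes out at a radius of roughly 1.59 sigma, with a containment fraction of about 0.715, in agreement with the value quoted above.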
the conclusion of this discussion on the point source sensitivity is that after careful work for background reduction , the minimum @xmath2 flux ( above 1 tev ) detectable as a point source for the km@xmath0 telescopes is of order ( or larger than ) @xmath114 ( @xmath57 s)@xmath58 , that is approximately the @xmath2 flux of the reference source considered above . this is tantalizingly ( and frustratingly ) close to the flux predicted from the extrapolation ( performed under the assumption of a transparent source ) of the brightest sources in the tev @xmath1ray catalogue . the remarkable recent results of hess and magic on galactic sources are certainly very exciting for the @xmath2 telescope builders , because they show the richness of the `` high energy '' sky , but they are also sobering , because they lead to a prediction of the scale of the brightest galactic @xmath2 sources , that is just at the limit of the sensitivity of the planned instruments . as discussed previously , strong @xmath2 sources could lack a bright @xmath1ray counterparts because of internal photon absorption , or because they are at extra galactic distances . the considerations outlined in this section can be immediately applied to a burst like transient sources like grb s . in the presence of a photon `` trigger '' that allows a time coincidence , the background problem disappears and the detectability of one burst is only defined by the signal size . a burst with an @xmath2 energy fluence @xmath115 erg/@xmath57 per decade of energy ( in the 10100 tev range ) will produce an average of one @xmath2induced muon in a km@xmath0 telescope . the flux from the ensemble of all extragalactic sources , with the exception of a few sources will appear as an unresolved isotropic flux , characterized by its energy spectrum @xmath116 . the identification of this unresolved component can rely on three signatures : ( i ) an energy spectrum harder that the atmospheric flux , ( ii ) an isotropic angular distribution , and ( iii ) approximately equal fluxes for all 6 neutrino types . this last point is a consequence of space averaged flavor oscillations . the flux of prompt ( charm decay ) atmospheric neutrinos is also approximately isotropic in the energy range considered , and is characterized by equal fluxes of @xmath10 and @xmath7 , with however a significant smaller flux of @xmath8 ( that are only produced in the decay of @xmath117 ) . the disentangling of the astrophysical and an prompt atmospheric fluxes is therefore not trivial , and depends on a good determination of the neutrino energy spectrum , together with a convincing model for charmed particle production in hadronic interactions . a model independent method for the identification of the astrophysical flux requires the separate measurement of the fluxes of all three neutrino flavors , including the @xmath8 . the flux of prompt atmospheric @xmath7 is also accompanied by an approximately equal flux of @xmath118 ( the differences at the level of 10% are due to the very well understood differences in the spectra produced in weak decays ) . going muon flux is in principle measurable , determining the prompt component experimentally and eliminating a potentially dangerous background . this measurement is therefore important and efforts should be made to perform it . the most transparent way to discuss the @xmath2 extragalactic flux is to consider its energetics . 
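To make explicit why the expected point-source signals are only a handful of events per year, the reference source defined earlier (E^-2 spectrum, total flux 1e-11 cm^-2 s^-1 above 1 TeV split equally among six neutrino species) can be folded with a rough neutrino-to-muon conversion probability. The parameterization P(nu -> mu) ~ 1.3e-6 (E/TeV)^0.8 used below is an assumption of this sketch (an order-of-magnitude approximation, not the detailed calculation behind the figures), and Earth absorption and the detector response are ignored.

```python
import numpy as np
from scipy import integrate

# Reference source: E^-2 spectrum, total flux (all 6 nu types) of
# 1e-11 cm^-2 s^-1 above 1 TeV; nu_mu + nubar_mu carry 2/6 of it.
PHI_TOT  = 1.0e-11                  # cm^-2 s^-1 above 1 TeV
PHI_NUMU = PHI_TOT * 2.0 / 6.0
K        = PHI_NUMU                 # for E^-2: integral above 1 TeV = K (E in TeV)

def phi_numu(E):                    # differential nu_mu + nubar_mu flux, E in TeV
    return K * E**-2

def p_nu_to_mu(E):
    """Rough conversion probability that a nu_mu produces a muon reaching the
    detector (assumption of this sketch; order-of-magnitude only)."""
    return 1.3e-6 * E**0.8

AREA = 1.0e10                       # cm^2  (1 km^2)
YEAR = 3.156e7                      # s

E_MIN, E_MAX = 1.0, 1.0e3           # TeV: "offline" threshold and assumed cutoff
rate, _ = integrate.quad(lambda E: phi_numu(E) * p_nu_to_mu(E), E_MIN, E_MAX)
print(f"nu-induced muons above {E_MIN} TeV: ~{rate * AREA * YEAR:.1f} per km^2 per yr")
```

The result, of order five muons per square kilometre per year above a 1 TeV threshold, is consistent with the "few events per year" scale discussed above and makes clear why aggressive background suppression is essential.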
the observable @xmath2 flux @xmath116 is clearly related to a number density @xmath119 and to an energy density @xmath120 , that fill uniformly the universe . the @xmath2 energy density at the present epoch has been generated by a power source acting during the history of the universe . integrating over all @xmath121 , the relation between the power injection density @xmath122 and the @xmath2 energy density at the present epoch is given by : @xmath123 } ~ = ~ \int_0^{\infty } \ ; dz~ \left | { dt \over dz } \right | ~ { \ldens ( z ) \over ( 1 + z ) } \nonumber \\ & ~ & ~ \nonumber \\ & = & \int_0^{\infty } \ ; dz~ { \ldens ( z ) \over h(z ) \ ; ( 1+z)^2 } ~ = ~ { \ldens_0 \over h_0 } ~ \xi \label{eq : energy - balance}\end{aligned}\ ] ] where @xmath124 is the age of the unverse , @xmath125 and @xmath126 are the @xmath2 power density and the hubble constant at the present epoch , and @xmath127 is the adimensional quantity : @xmath128~ \left [ \frac{\ldens(z)}{\ldens_0}\right ] \;(1+z)^{-2 } ~~. \label{eq : xi - def}\ ] ] that depends on the cosmological parameters , and more crucially on the cosmic history of the neutrino injection . the quantity @xmath129 can be understood as the effective `` average power density '' operating during the hubble time @xmath130 , and depends on the cosmological parameters , and more strongly on the cosmological history of the injection . for a einstein de sitter universe ( @xmath131 , @xmath132 ) with no evolution for the sources one has @xmath133 , for the `` concordance model '' cosmology ( @xmath134 , @xmath135 ) this becomes @xmath136 . for the same concordance model cosmology , if it is assumed that the cosmic time dependence of the neutrino injection is similar to the one fitted to the star formation history @xcite , one obtains @xmath137 , for a time dependence equal to the one fitted to the agn luminosity evolution @xcite one finds @xmath138 . if we consider the power and energy density not bolometric , but only integrated above the threshold energy @xmath139 , the redshift effects depend on the shape of the injection spectrum . for a power law of slope @xmath23 independent from cosmic time , one has still a relation of form : @xmath140 very similar to ( [ eq : energy - balance ] ) , with : @xmath141~ \left [ \frac{\ldens(z)}{\ldens_0}\right ] \;(1+z)^{-\alpha } \label{eq : xi - def1}\ ] ] for @xmath142 one has @xmath143 . the energy density ( [ eq : energy - balance1 ] ) corresponds to the @xmath2 isotropic differential flux : @xmath144 where @xmath145 is the kinematic adimensional factor : @xmath146^{-1 } \label{eq : kalpha}\ ] ] with @xmath147 the high energy cutoff of the spectrum . the limit of @xmath145 when @xmath148 is : @xmath149^{-1}$ ] . one can use equation ( [ eq : flux - extra ] ) to relate the observed diffuse @xmath2 flux to the ( time and space ) averaged power density of the neutrino sources : @xmath150 the case @xmath142 corresponds to an equal power emitted per energy decade , and numerically : @xmath151 ~\frac{\rm gev}{{\rm cm}~{\rm s}~{\rm sr } } \label{eq : k - energy-2}\ ] ] ( @xmath152 is the solar luminosity ) . the current limit on the existence of a diffuse , isotropic @xmath2 flux of slope @xmath17 from the amanda and baikal detectors corresponds to @xmath153 gev/(@xmath57 s sr)@xmath58 , while the sensitivity of the future km@xmath0 telescopes is estimated ( in the same units ) of order @xmath154 . 
this implies that the average power density for the creation of high energy neutrinos is : @xmath155 the future km@xmath0 telescopes will detect the extragalactic flux if it has been generated with an `` average '' power density @xmath156 ( same units ) . to evaluate the astrophysical significance of the current limits on the diffuse @xmath2 flux and of the expected sensitivity of the km@xmath0 telescopes , one can consider the energetics of the most plausible sources of high energy neutrinos , namely the death of massive stars and the activity around super massive black holes at the center of galaxies . the bolometric average power density associated with stellar light has been estimated by hauser and dwek @xcite as : @xmath157 this is also in good agreement with the estimate of fukugita and peebles @xcite for the b band optical luminosity of stars . this power implies that the total mass density gone into the star formation @xcite is : @xmath158 ( that implies @xmath159 ) . most of this power has of course little to do with cosmic rays and high energy @xmath2 production , however c.r . acceleration has been associated with the final stages of the massive stars life . a well established mechanism is acceleration in the spherical blast waves of supernova explosions , a more speculative one is acceleration in relativistic jets that are emitted in all or a subclass of the sn , and that is the phenomenon behind the ( long duration ) gamma ray bursts . the current estimate of the rate of gravitational collapse supernova rate obtained averaging over different surveys @xcite is : @xmath160 this is reasonably consistent with the the estimate of the power emitted by stars , if one assumes that all stars with @xmath161 end their history with a gravitational collapse . fukugita and peebles estimate ( @xmath162 is the sfr at thre present epoch ) : @xmath163 assuming that the average kinetic energy released in a sn explosion is of order @xmath164 this implies an average power density : @xmath165 it is commonly believed that a fraction of order @xmath166% of this kinetic energy is converted into relativistic hadrons . one can see that if on average a fraction of order 1% is converted in neutrinos , this could result in @xmath167 , that could give a a detectable signal from the ensemble of ordinary galaxies . active galactic nuclei have often been suggested as a source of ultra high energy cosmic rays , and at the same time of high energy neutrinos . estimates of the present epoch power density from agn in the [ 2,10 kev ] @xmath4ray band are of order @xmath168 } \simeq 2.0 \times 10^{5 } ~ \lum0\ ] ] with a cosmic evolution @xcite characterized by @xmath169 . ueda @xcite also estimates the relation between this energy band and the bolometric luminosity is : @xmath170}. $ ] and therefore the total power associated with agn is @xmath171 of order @xmath172% of the power associated with star light . it is remarkable that this agn luminosity is well matched to the average mass density in super massive black holes , that is estimated @xcite : @xmath173 when a mass @xmath174 falls into a black hole ( bh ) of mass @xmath175 the bh mass is increased by an amount : @xmath176 while the energy @xmath177 is radiated away in different forms . the total accreted mass can therefore be related to the amount of radiated energy , if one knows the radiation efficiency @xmath178 : @xmath179 and therefore @xmath180 the estimates ( [ eq : lumagn ] ) and ( [ eq : rhobh ] ) are consistent for an efficiency @xmath181 . 
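The bookkeeping in the last two passages, the cosmological weighting factor xi and the conversion between an injected power density and an observable diffuse flux, can be reproduced with a short numerical sketch. The inputs below are illustrative assumptions only: the LambdaCDM parameters, the toy evolution history, and the injected power density of 1e44 erg Mpc^-3 yr^-1 per energy decade are not the values adopted in the text.

```python
import numpy as np
from scipy import integrate

# Assumed cosmology and source-evolution histories (illustrative values only):
OMEGA_M, OMEGA_L = 0.3, 0.7
H0_KM_S_MPC      = 70.0

h = lambda z: np.sqrt(OMEGA_M * (1.0 + z)**3 + OMEGA_L)    # H(z) / H0

def xi(evolution, alpha=2.0, zmax=10.0):
    """xi = int dz [H0/H(z)] [L(z)/L0] (1+z)^(-alpha), truncated at zmax."""
    val, _ = integrate.quad(lambda z: evolution(z) / (h(z) * (1.0 + z)**alpha),
                            0.0, zmax)
    return val

no_evolution = lambda z: 1.0
# toy stand-in for a star-formation-like history: (1+z)^3 up to z ~ 1.9, then flat
sfr_like     = lambda z: (1.0 + min(z, 1.9))**3

for name, ev in [("no evolution", no_evolution), ("SFR-like toy", sfr_like)]:
    print(f"xi ({name:12s}) = {xi(ev):.2f}")

# Convert an assumed present-epoch power density per energy decade into the
# corresponding isotropic E^2 Phi (alpha = 2, i.e. equal power per decade):
L_DEC = 1.0e44                        # erg Mpc^-3 yr^-1 per decade (assumption)
XI    = xi(sfr_like)
MPC_CM, YR_S, ERG_GEV, C = 3.086e24, 3.156e7, 624.15, 2.998e10

H0_s       = H0_KM_S_MPC * 1.0e5 / MPC_CM                # H0 in s^-1
rho_decade = XI * (L_DEC / (MPC_CM**3 * YR_S)) / H0_s    # erg cm^-3 per decade
# isotropic flux: E^2 Phi = (c / 4 pi) * (energy density per unit ln E)
e2phi = C / (4.0 * np.pi) * rho_decade / np.log(10.0) * ERG_GEV
print(f"E^2 Phi ~ {e2phi:.1e} GeV cm^-2 s^-1 sr^-1 (all flavours)")
```

With these toy inputs the diffuse flux lands in the 1e-8 to 1e-7 GeV cm^-2 s^-1 sr^-1 range, which illustrates why power densities of this order are the relevant benchmark for km^3 instruments.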
assuming radiated energy is the kinetic energy accumulated in the fall down to a radius @xmath182 that is @xmath183 times the bh schwarzschild radius @xmath184 : @xmath185 this corresponds to @xmath186 . the ensemble of agn has sufficient power to generate a diffuse @xmath2 flux at the level of the existing limit . a very important guide for the expected flux of extragalactic @xmath2 is the diffuse @xmath1ray flux measured above 100 mev measured by the egret instrument @xcite that can be described ( integral flux above a minimum energy @xmath90 in gev ) as : @xmath187 ( for a critical view of this interpretation see @xcite ) . this corresponds to the energy density in the decade between @xmath90 and @xmath188 ( with @xmath90 in gev ) : @xmath189 and to the average power density per decade : @xmath190 this diffuse @xmath1ray has been attributed @xcite to the emission of unresolved blazars . the sum of the measured fluxes ( for @xmath191 mev ) for all identified egret blazars @xcite amounts to : @xmath192 this has to be compared with the diffuse flux ( [ eq : egret - diffuse ] ) that can be re - expressed as : @xmath193 . other authors @xcite arrive to the interesting conclusion that unresolved blazars can account for at most 25% of the diffuse flux , and that there must be additional sources of gev photons . most models for the blazar emission favor leptonic mechanism , with the photons emitted by the inverse compton scattering of relativistic electrons with radiation fields present in the jet , however it is possible that hadrons account for a significant fraction ( or even most ) of the @xmath1ray flux , and therefore that the @xmath2 flux is of comparable intensity to the @xmath1 flux . the test of the `` egret level '' for the diffuse @xmath2 flux is one of the most important goals for the neutrino telescopes already in operation . waxman and bahcall @xcite have suggested the existence of an upper bound for the diffuse flux of extra galactic neutrinos that is valid if the @xmath2 sources are transparent for cosmic rays , based on the observed flux of ultra high energy cosmic rays . the logic ( and the limits ) of the wb bound are simple to grasp . neutrinos are produced by cosmic ray interactions , and it is very likely ( and certainly economic ) to assume that the c.r . and @xmath2 sources are the same , and that the injection rates for the two type of particles are related . the condition of `` transparency '' for the source means that a c.r . has a probability @xmath194 of interacting in its way out of the source . the transparency condition obviously sets an upper bound on the @xmath2 flux , that can be estimated from a knowledge of the cosmic ray extragalactic spectrum , calculating for each observed c.r . the spectrum of neutrinos produced in the shower generated by the interaction of one particle of the same mass and energy , and integrating over the c.r . spectrum . if the c.r . spectrum is a power law of form @xmath195 , the `` upper bound '' @xmath2 flux obtained saturating the transparency condition ( for a c.r . flux dominated by protons ) is : @xmath196 with @xmath197 the wb upper bound ( shown in fig . [ fig : energy_density ] ) has been the object of several criticisms see for example @xcite . there are two problems with it . the first one is conceptual : the condition of transparency is plausible but is not physically necessary . 
sources can be very transparent , for example in snr s the interaction probability of the hadrons accelerated by the blast wave , is small ( @xmath198 ) ( with a value that depends on the density of the local ism ) , but `` thick sources '' are possible , and have in fact been advocated for a long time , the best example is acceleration in the vicinity of the horizon of a smbh . the search for `` thick '' neutrino sources is after all one of the important motivations for @xmath2 astronomy . the second problem is only quantitative . in order to estimate the bound one has to know the spectrum of extra galactic cosmic rays . this flux is `` hidden '' behind the foreground of galactic cosmic rays , that have a density enhanced by magnetic confinement effects . k cmbr radiation , starlight ( with reprocession by dust ) , the x - ray diffuse flux ( attributable to agn , the egret ( @xmath65 mev ) diffuse flux , and the grb contribution from the batse data . the detected cosmic ray energy density ( points ) is enhanced by magnetic confinement in our galaxy and is not cosmological , except ( possibly ) at the highest energy . the lines labeled wb(cr ) and wb(@xmath2 ) are the extragalactic c.r . and upper bound @xmath2 fluxes estimated in @xcite . the line labeled amanda is the current limit on an isotropic extragalactic @xmath2 flux , and the line labeled icecube is the predicted sensitivity on km@xmath0 telescopes . [ fig : energy_density ] , width=196 ] the separation of the galactic and extragalactic component of the c.r . is a central unsolved problem for cosmic ray science . the extragalactic c.r . is dominant and therefore visible only at the very highest energies , perhaps only above the `` ankle '' ( at @xmath199 ev ) as assumed in @xcite . more recently it has been argued @xcite that extra galactic @xmath38 dominate the c.r . flux for @xmath200 ev . waxman and bahcall have fitted the c.r . flux above the ankle with an a @xmath201 injection spectrum ( correcting for energy degradation in the cmbr ) and extrapolated the flux with the same form to lower energy . the power needed to generate this c.r . density was @xmath202(mpc@xmath203 decade ) . the extrapolation of the c.r . flux ( that is clearly model dependent ) and the corresponding @xmath2 upper bound are shown in fig . [ fig : energy_density ] , where they can be compared to the sensitivity of the @xmath2 telescopes . because of the uncertainties in the fitting of the c.r . extragalactic component and its extrapolation to low energy ( and the possible loophole of the existence of thick souces ) the wb estimate can not really be considered an true upper bound of the @xmath2 flux . however the `` wb @xmath2flux '' is important as reasonable ( and indeed in many senses optimistic ) estimate of the order of magnitude of the true flux . the existence of cosmic rays with energy as large as @xmath204 ev is the best motivation for neutrino astronomy , since these particles must be accelerated to relativistic energies somewhere in the universe , and _ unavoidably _ some c.r . will interact with target near ( or in ) the source producing neutrinos . an interesting question is the relation between the intensity of the total extragalactic @xmath2 contribution , and the potential to identify extra galactic point sources . this clearly depends on the luminosity function and cosmological evolution of the sources . 
assuming for simplicity that all sources have energy spectra of the same shape , and in particular power spectra with slope @xmath23 , then each source can be fully described by its distance and its @xmath2 luminosity @xmath42 ( above a fixed energy threshold @xmath139 ) . the ensemble of all extra galactic sources is then described by the function @xmath205 that gives the number of sources with luminosity in the interval between @xmath42 and @xmath206 contained in a unit of comoving volume at the epoch corresponding to redshift @xmath207 . the power density due to the ensemble of all sources is given by : @xmath208 it is is possible ( and in fact very likely ) that most high energy neutrino sources are not be isotropic . this case is however contained in our discussion if the luminosity @xmath42 is understood as an orientation dependent `` isotropic luminosity '' : @xmath209 . for a random distribution of the viewing angles it is easy to show that @xmath210 the @xmath2 flux ( above the threshold energy @xmath139 ) received from a source described by @xmath42 and @xmath207 is : @xmath211 where @xmath212 is the luminosity distance and @xmath145 is the adimensional factor given in ( [ eq : kalpha ] ) . if @xmath213 is the sensitivity of a neutrino telescope , that is the minimum flux for the detection of a point source , then a source of luminosity @xmath42 can be detected only if is closer than a maximum distance corresponding to redshift @xmath214 . inspecting equation ( [ eq : phipoint ] ) it is simple to see that @xmath215 is a function of the adimensional ratio @xmath216 where : @xmath217 is the order of magnitude of the luminosity of a source that gives the minimum detectable flux when placed at @xmath218 . the explicit solution for @xmath219 depends on the cosmological parameters ( @xmath220 , @xmath221 ) and on the spectral slope @xmath23 . a general closed form analytic solution for @xmath219 does not exist , @xmath222 and @xmath223 the exact solution is @xmath224 . ] , but it can be easily obtained numerically . the solution for the concordance model cosmology is shown in fig . [ fig : horizon ] . calculated for the cosmology @xmath134 , @xmath225 , and sources with power law spectra of slope @xmath223 , assuming the a minimum detectable flux @xmath226 ( @xmath57s)@xmath58 . the points corresponds to the egret blazars @xcite with the luminosity measured in the 0.110 gev range . [ fig : horizon ] , width=196 ] it is also easy and useful to write @xmath215 as a power law expansion in @xmath227 : @xmath228\ , x^2 + \ldots\ ] ] the leading term of this expansion is simply @xmath227 independently from the cosmological parameters and the slope @xmath23 . this clearly reflects the fact that for small redshift @xmath207 one probes only the near universe , where and when redshift effects and cosmological evolution are negligible , and the flux simply scales as the inverse square of the distance . the total flux from all ( resolved and unresolved ) sources can be obtained integrating over @xmath42 and @xmath207 : @xmath229 where @xmath230 is the comoving volume contained between redshift @xmath207 and @xmath231 : @xmath232 substituting the definitions ( [ eq : phipoint ] ) and ( [ eq : vol1 ] ) , integrating in @xmath42 and using ( [ eq : ldens ] ) the total flux corresponds exactly to the result ( [ eq : flux - extra ] ) . the resolved ( unresolved ) flux can be obtained changing the limits of integration in redshift to the interval @xmath233 $ ] ( @xmath234 $ ] ) . 
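The implicit definition of the source horizon z_max can be made concrete with a short numerical example. The sketch below assumes a flat LambdaCDM cosmology and an E^-2 spectrum (so the k-correction factor is flat), and it defines L* as the luminosity giving the minimum detectable flux at a luminosity distance equal to the Hubble distance c/H0; this convention is chosen here for illustration and may differ by an order-unity factor from the exact definition used in the text.

```python
import numpy as np
from scipy import integrate
from scipy.optimize import brentq

# Flat LambdaCDM (assumed parameters):
OMEGA_M, OMEGA_L = 0.3, 0.7
h = lambda z: np.sqrt(OMEGA_M * (1.0 + z)**3 + OMEGA_L)

def d_L_over_dH(z):
    """Luminosity distance in units of the Hubble distance c/H0."""
    comoving, _ = integrate.quad(lambda zz: 1.0 / h(zz), 0.0, z)
    return (1.0 + z) * comoving

def z_max(L_over_Lstar):
    """Redshift out to which a standard candle of luminosity L is detectable,
    for an E^-2 spectrum, with L* defined as the luminosity that yields the
    minimum detectable flux at d_L = c/H0.  The detection condition is then
    d_L(z_max) = (c/H0) * sqrt(L / L*)."""
    target = np.sqrt(L_over_Lstar)
    return brentq(lambda z: d_L_over_dH(z) - target, 1e-6, 20.0)

for x2 in [0.01, 0.04, 0.25, 1.0, 4.0]:
    x = np.sqrt(x2)
    print(f"L/L* = {x2:5.2f}   sqrt(L/L*) = {x:.3f}   z_max = {z_max(x2):.3f}")
```

For L much smaller than L* the numerical z_max tracks sqrt(L/L*), which is the leading term of the expansion quoted above.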
the total number of detectable sources is obtained integrating the source density in the comoving volume contained inside the horizon : @xmath235 a model for @xmath236 to describe the luminosity distribution and cosmic evolution of the sources allows to predict the fraction of the extra galactic associated with the resolved flux , and the corresponding number of sources . here there is no space for a full discussion , but it may be instructive to consider a simple toy model where all sources have identical luminosity ( that is @xmath237 $ ] ) . in this model , is the ( unique ) luminosity @xmath42 of the sources at the present epoch is not too large , ( that is for @xmath238 with @xmath239 given in ( [ eq : lstar ] ) ) , for a _ fixed _ total flux for the ensemble of sources , the number of objects that can be resolved is ( for @xmath240 ) : @xmath241 where @xmath242 is the coefficient of the diffuse @xmath2 flux ( @xmath243 is in units @xmath244 gev/(@xmath57 s sr ) ) , @xmath245 is the power of an individual source per energy decade ( @xmath246 is in units of @xmath247 erg / s ) , and @xmath213 is the minum flux above energy @xmath248 tev for source identification ( @xmath249 is in units @xmath56 ( @xmath250s)@xmath58 ) . it is easy to understand the scaling laws . when the luminosity of the source increases the radius of the source horizon ( for @xmath42 not too large ) grows @xmath251 and the corresponding volume grows as @xmath252 , while ( for a fixed total flux ) the number density of the sources is @xmath253 , therefore the number of detectable sources is : @xmath254 . similarly , the ratio of the resolved to the total @xmath2 flux can be estimated as : @xmath255 the scaling @xmath256 also easily follows from the assumption of an euclidean near universe . the bottom line of this discussion , is that it is very likely that the ensemble of all extra galactic sources will give its largest contribution as an unresolved , isotropic contribution , with only a small fraction of this total flux resolved in the contribution of few individual point sources . the number of the detectable extra galactic point sources will obviously grow linearly with total extra galactic flux , but also depends crucially on the luminosity function and cosmological evolution of the @xmath2 sources . if a reasonable fraction of the individual sources are sufficiently powerful ( @xmath257 erg / s ) , an interesting number of objects can be detected as point sources . emission for blazars is a speculative but very exciting possibility ( see fig . [ fig : horizon ] ) . for lack of space we can not discuss here the neutrinos of `` gzk '' origin and other speculative sources @xcite . this field is of great interest , and is in many ways complementary to the science that can be performed with km@xmath0 telescopes . gzk neutrinos ( see fig.[fig : cr_nu ] ) have energy typically in the 10@xmath25810@xmath259 ev , and the predicted fluxes are so small that km@xmath0 detectors can only be very marginal , and larger detector masses and new detection methods are in order . several interesting ideas are being developed ( for a review see @xcite ) , these include acoustic @xcite , radio @xcite and air shower @xcite detection . the gzk neutrinos are a guaranteed source , and their measurement carry important information on the maximum energy and cosmic history of the ultra high energy cosmic rays . perhaps the most promising detection technique uses radio detectors on balloons . 
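The scaling argument above (at fixed total diffuse flux, the number of resolvable sources grows roughly as the square root of the individual source luminosity) can be illustrated with a toy Monte Carlo in a Euclidean near-universe. All quantities below are arbitrary toy units, and the setup (standard candles, uniform spatial distribution, sharp detection threshold) is a deliberate simplification of the luminosity-function discussion in the text.

```python
import numpy as np

rng = np.random.default_rng(0)

F_TOT = 1000.0     # fixed total flux contributed by the whole population (toy units)
F_MIN = 1.0        # minimum flux for point-source identification (toy units)
R_MAX = 10.0       # radius of the Euclidean toy universe (toy units)

def n_detected(L):
    """Standard candles of luminosity L, uniform in volume, with the number
    density fixed so that the summed flux equals F_TOT; returns how many
    individual sources are brighter than F_MIN."""
    density = F_TOT / (L * R_MAX)                        # from F_tot = n * L * R_max
    n_src   = rng.poisson(density * (4.0 / 3.0) * np.pi * R_MAX**3)
    r       = R_MAX * rng.random(n_src) ** (1.0 / 3.0)   # uniform-in-volume radii
    return int(np.count_nonzero(L / (4.0 * np.pi * r**2) > F_MIN))

def n_predicted(L):
    """Euclidean expectation: N = n * (4 pi / 3) * r_det^3 with
    r_det = sqrt(L / (4 pi F_MIN)); this scales as sqrt(L)."""
    r_det = np.sqrt(L / (4.0 * np.pi * F_MIN))
    return (F_TOT / (L * R_MAX)) * (4.0 / 3.0) * np.pi * r_det**3

for L in [1.0, 10.0, 100.0, 1000.0]:
    mc = np.mean([n_detected(L) for _ in range(20)])
    print(f"L = {L:7.1f}   MC <N_det> = {mc:7.1f}   analytic ~ {n_predicted(L):7.1f}")
```

Each factor of 100 in the source luminosity raises the mean number of detected sources by roughly a factor of 10, as the square-root scaling predicts, while the total (mostly unresolved) flux stays the same.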
an 18.4 days test flight of the anita experiment @xcite ( see fig . [ fig : cr_nu ] ) has obtained the best limits on the diffuse flux of neutrinos at very high energy . as photon astronomy is articulated in different fields according to the range of wavelength observed , for neutrino astronomy one can already see the formation of ( at least ) two different subfields : the `` km@xmath0 neutrino science '' that aims at the study of @xmath2 in the 10@xmath26010@xmath261 ev energy range , and `` ultra high energy neutrino science '' that studies @xmath2 above 10@xmath258 ev , with the detection of gzk neutrinos as the primary goal . the potential of the km@xmath0 telescopes to open the extraordinarily interesting new window of high energy neutrino astronomy is good . the closest thing to a guaranteed @xmath2 source are the young snr observed by hess in tev photons . the expected @xmath2 fluxes from these sources are probably above the sensitivity of a km@xmath0 telescopes in the northern hemisphere . other promising galactic @xmath2 sources are the galactic center and @xmath31quasars . blazars and grb s are also intensely discussed as extragalactic @xmath2 sources . the combined @xmath2 emission of all extragalactic sources should also be detectable as a diffuse flux distinguishable from the atmospheric @xmath2 foreground . because of the important astrophysical uncertainties , the clear observations of astrophysical neutrinos in the km@xmath0 telescopes is however not fully guaranteed . there are currently plans to build two different instruments of comparable performances ( based on the water cherenkov technique in water and ice ) in the northern and southern hemisphere . such instruments allow a complete coverage of the celestial sphere . this is a very important scientific goal , if the detector sensitivity is sufficient to perform interesting observations . if the deployment of one detector anticipates the second one , it is necessary to be prepared to modify the design of the second one , on the basis of the lessons received . this is particularly important if the first observations give no ( or marginal ) evidence for astrophysical neutrinos , indicating the need of an enlarged acceptance . a personal `` guess '' about the most likely outcome for the operation of the km@xmath0 telescopes , is that they will play for neutrino astronomy a role similar to what the first @xmath4ray rocket of rossi and giacconi@xcite played for @xmath4ray astronomy in 1962 . that first glimpse of the x ray sky revealed one single point source , the agn sco - x1 ( that a the moment was in a high state of activity ) , and obtained evidence for an isotropic @xmath4ray light glow of the sky . detectors of higher sensitivity ( a factor 10@xmath262 improvement in 40 years ) soon started to observe a large number of sources belonging to different classes . even in the most optimistic scenario , the planned km@xmath0 telescopes will just `` scratch the surface '' of the rich science that the neutrino messenger will carry . to explore this field it will be obviously necessary to develop higher acceptance detectors , and it is not too soon to think in this direction . * acknowledgments*. i m very grateful to emilio migneco , piera sapienza , paolo piattelli and the colleagues from catania for the organization of a very fruitful workshop . i have benefited from conversations with many colleagues . special thanks to tonino capone and felix aharonian .
this work discusses the prospects for observing fluxes of high energy astrophysical neutrinos with the planned km@xmath0 telescopes. on the basis of the observations of gev and tev @xmath1rays, and of ultra high energy cosmic rays, it is possible to construct well-motivated predictions indicating that the discovery of such fluxes is probable. however, the range of these predictions is broad, and the very important opening of the ``neutrino window'' on the high energy universe is not guaranteed with the current design of the detectors. the problem of enlarging the detector acceptance, using the same (water/ice cherenkov) or alternative (acoustic/radio) techniques, is therefore of central importance.
SECTION 1. SHORT TITLE. This Act may be cited as the ``Women's Business Centers Sustainability Act of 1999''. SEC. 2. PRIVATE NONPROFIT ORGANIZATIONS. Section 29 of the Small Business Act (15 U.S.C. 656) is amended-- (1) in subsection (a)-- (A) by redesignating paragraphs (2) and (3) as paragraphs (3) and (4), respectively; and (B) by inserting after paragraph (1) the following: ``(2) the term `private nonprofit organization' means an entity described in section 501(c) of the Internal Revenue Code of 1986 that is exempt from taxation under section 501(a) of such Code;''; and (2) in subsection (b), by inserting ``nonprofit'' after ``private''. SEC. 3. INCREASED MANAGEMENT OVERSIGHT AND REVIEW OF WOMEN'S BUSINESS CENTERS. Section 29 of the Small Business Act (15 U.S.C. 656) is amended-- (1) by striking subsection (h) and inserting the following: ``(h) Program Examination.-- ``(1) In general.--The Administration shall-- ``(A) develop and implement procedures to annually examine the programs and finances of each women's business center established pursuant to this section, pursuant to which each such center shall provide to the Administration-- ``(i) an itemized cost breakdown of actual expenditures for costs incurred during the preceding year; and ``(ii) documentation regarding the amount of matching assistance from non-Federal sources obtained and expended by the center during the preceding year in order to meet the requirements of subsection (c) and, with respect to any in-kind contributions described in subsection (c)(2) that were used to satisfy the requirements of subsection (c), verification of the existence and valuation of those contributions; and ``(B) analyze the results of each such examination and, based on that analysis, make a determination regarding the viability of the programs and finances of each women's business center. ``(2) Extension of contracts.--In determining whether to extend or renew a contract with a women's business center, the Administration-- ``(A) shall consider the results of the most recent examination of the center under paragraph (1); and ``(B) may withhold such extension or renewal, if the Administration determines that-- ``(i) the center has failed to provide any information required to be provided under clause (i) or (ii) of paragraph (1)(A), or the information provided by the center is inadequate; or ``(ii) the center has failed to provide any information required to be provided by the center for purposes of the report of the Administration under subsection (j), or the information provided by the center is inadequate.''; and (2) by striking subsection (j) and inserting the following: ``(j) Management Report.-- ``(1) In general.--The Administration shall prepare and submit to the Committees on Small Business of the House of Representatives and the Senate a report on the effectiveness of all projects conducted under this section. 
``(2) Contents.--Each report submitted under paragraph (1) shall include information concerning, with respect to each women's business center established pursuant to this section-- ``(A) the number of individuals receiving assistance; ``(B) the number of startup business concerns formed; ``(C) the gross receipts of assisted concerns; ``(D) the employment increases or decreases of assisted concerns; ``(E) to the maximum extent practicable, increases or decreases in profits of assisted concerns; ``(F) documentation detailing the most recent analysis undertaken under subsection (h)(1)(B) and the determinations made by the Administration with respect to that analysis; and ``(G) demographic data regarding the staff of the center.''. SEC. 4. WOMEN'S BUSINESS CENTER SUSTAINABILITY PILOT PROGRAM. (a) In General.--Section 29 of the Small Business Act (15 U.S.C. 656) is amended by adding at the end the following: ``(l) Sustainability Pilot Program.-- ``(1) In general.--There is established a 4-year pilot program under which the Administration is authorized to make grants (referred to in this section as `sustainability grants') on a competitive basis for an additional 5-year project under this section to any private nonprofit organization (or a division thereof)-- ``(A) that has received financial assistance under this section pursuant to a grant, contract, or cooperative agreement; and ``(B) that-- ``(i) is in the final year of a 5-year project; or ``(ii) to the extent that amounts are available for such purpose under subsection (k)(4)(B), has completed a project financed under this section (or any predecessor to this section) and continues to provide assistance to women entrepreneurs. ``(2) Conditions for participation.--In order to receive a sustainability grant, an organization described in paragraph (1) shall submit to the Administration an application, which shall include-- ``(A) a certification that the applicant-- ``(i) is a private nonprofit organization; ``(ii) employs a full-time executive director or program manager to manage the women's business center for which a grant is sought; and ``(iii) as a condition of receiving a sustainability grant, agrees-- ``(I) to an annual examination by the Administration of the center's programs and finances; and ``(II) to the maximum extent practicable, to remedy any problems identified pursuant to that examination; ``(B) information demonstrating that the applicant has the ability and resources to meet the needs of the market to be served by the women's business center site for which a sustainability grant is sought, including the ability to raise financial resources; ``(C) information relating to assistance provided by the women's business center site for which a sustainability grant is sought in the area in which the site is located, including-- ``(i) the number of individuals assisted; ``(ii) the number of hours of counseling, training, and workshops provided; and ``(iii) the number of startup business concerns formed; ``(D) information demonstrating the effective experience of the applicant in-- ``(i) conducting financial, management, and marketing assistance programs, as described in paragraphs (1), (2), and (3) of subsection (b), designed to impart or upgrade the business skills of women business owners or potential owners; ``(ii) providing training and services to a representative number of women who are both socially and economically disadvantaged; ``(iii) using resource partners of the Administration and other entities, such as universities; 
``(iv) complying with the cooperative agreement of the applicant; and ``(v) prudently managing finances and staffing, including the manner in which the performance of the applicant compared to the business plan of the applicant and the manner in which grants made under subsection (b) were used by the applicant; and ``(E) a 5-year plan that demonstrates the ability of the women's business center site for which a sustainability grant is sought-- ``(i) to serve women business owners or potential owners in the future by improving fundraising and training activities; and ``(ii) to provide training and services to a representative number of women who are both socially and economically disadvantaged. ``(3) Review of applications.-- ``(A) In general.--The Administration shall-- ``(i) review each application submitted under paragraph (2) based on the information provided under subparagraphs (D) and (E) of that paragraph, and the criteria set forth in subsection (f); and ``(ii) approve or disapprove applications for sustainability grants simultaneously with applications for grants under subsection (b). ``(B) Data collection.--Consistent with the annual report to Congress under subsection (j), each women's business center site that receives a sustainability grant shall, to the maximum extent practicable, collect the information relating to-- ``(i) the number of individuals assisted; ``(ii) the number of hours of counseling and training provided and workshops conducted; ``(iii) the number of startup business concerns formed; ``(iv) any available gross receipts of assisted concerns; and ``(v) the number of jobs created, maintained, or lost at assisted concerns. ``(C) Record retention.--The Administration shall maintain a copy of each application submitted under this subsection for not less than 10 years. ``(4) Non-federal contribution.-- ``(A) In general.--Notwithstanding any other provision of this section, as a condition of receiving a sustainability grant, an organization described in paragraph (1) shall agree to obtain, after its application has been approved under paragraph (3) and notice of award has been issued, cash and in-kind contributions from non-Federal sources for each year of additional program participation in an amount equal to 1 non-Federal dollar for each Federal dollar. ``(B) In-kind contributions.--Not more than 50 percent of the non-Federal assistance obtained for purposes of subparagraph (A) may be in the form of in- kind contributions that exist only as budget line items, including such contributions of office equipment and office space. ``(5) Timing of requests for proposals.--In carrying out this subsection, the Administration shall issue requests for proposals for women's business centers applying for the pilot program under this subsection simultaneously with requests for proposals for grants under subsection (b).''. (b) Authorization of Appropriations.--Section 29(k) of the Small Business Act (15 U.S.C. 
656(k)) is amended-- (1) by striking paragraph (1) and inserting the following: ``(1) In general.--There is authorized to be appropriated, to remain available until the expiration of the pilot program under subsection (l)-- ``(A) $12,000,000 for fiscal year 2000; ``(B) $12,800,000 for fiscal year 2001; ``(C) $13,700,000 for fiscal year 2002; and ``(D) $14,500,000 for fiscal year 2003.''; (2) in paragraph (2)-- (A) by striking ``Amounts made'' and inserting the following: ``(A) In general.--Except as provided in subparagraph (B), amounts made''; and (B) by adding at the end the following: ``(B) Exception.--Of the total amount made available under this subsection for a fiscal year, the following amounts shall be available for costs incurred in connection with the selection of applicants for assistance under this subsection and with monitoring and oversight of the program authorized under this subsection: ``(i) For fiscal year 2000, 2 percent of such total amount. ``(ii) For fiscal year 2001, 1.9 percent of such total amount. ``(iii) For fiscal year 2002, 1.9 percent of such total amount. ``(iv) For fiscal year 2003, 1.6 percent of such total amount.''; and (3) by adding at the end the following: ``(4) Reservation of funds for sustainability pilot program.-- ``(A) In general.--Of the total amount made available under this subsection for a fiscal year, the following amounts shall be reserved for sustainability grants under subsection (l): ``(i) For fiscal year 2000, 17 percent of such total amount. ``(ii) For fiscal year 2001, 18.8 percent of such total amount. ``(iii) For fiscal year 2002, 30.2 percent of such total amount. ``(iv) For fiscal year 2003, 30.2 percent of such total amount. ``(B) Use of unawarded reserve funds.-- ``(i) Sustainability grants to other centers.--Of amounts reserved under subparagraph (A), the Administration shall use any funds that remain available after making grants in accordance with subsection (l) to make grants under such subsection to women's business center sites that have completed a project financed under this section (or any predecessor to this section) and that continue to provide assistance to women entrepreneurs. ``(ii) Additional grants.--The Administration shall use any funds described in clause (i) that remain available after making grants under such clause to make grants to additional women's business center sites, or to increase the grants to existing women's business center sites, under subsection (b).''. (c) Guidelines.--Not later than 30 days after the date of the enactment of this Act, the Administrator of the Small Business Administration shall issue guidelines to implement the amendments made by this section. SEC. 5. EFFECTIVE DATE. This Act and the amendments made by this Act shall take effect on October 1, 1999. Passed the House of Representatives October 19, 1999. Attest: JEFF TRANDAHL, Clerk.
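The authorization levels and set-aside percentages written into the amended section 29(k) above can be combined with simple arithmetic. The short script below is purely illustrative and is not part of the Act; in particular, treating the remainder after the two set-asides as the balance available for grants under subsection (b) is an inference made only for the sake of the example.

```python
# Illustrative arithmetic only (not part of the Act): dollar amounts implied by
# the authorization levels and set-aside percentages quoted in the bill above.
authorized = {2000: 12_000_000, 2001: 12_800_000, 2002: 13_700_000, 2003: 14_500_000}
oversight_pct = {2000: 0.020, 2001: 0.019, 2002: 0.019, 2003: 0.016}      # selection/oversight
sustainability_pct = {2000: 0.170, 2001: 0.188, 2002: 0.302, 2003: 0.302}  # sustainability grants

for fy, total in authorized.items():
    oversight = total * oversight_pct[fy]
    sustainability = total * sustainability_pct[fy]
    balance = total - oversight - sustainability   # inferred remainder, for illustration only
    print(f"FY{fy}: authorized ${total:>10,.0f}   oversight ${oversight:>9,.0f}   "
          f"sustainability ${sustainability:>10,.0f}   balance ${balance:>10,.0f}")
```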
(Sec. 3) Directs the Small Business Administration (SBA) to: (1) perform an annual examination of the programs and finances of each women's business center; and (2) determine the viability of each center after such examination. Requires a report from the SBA to the congressional small business committees on the effectiveness of all projects conducted under the women's business centers program. (Sec. 4) Establishes a four-year pilot program under which the SBA is authorized to make grants on a competitive basis to organizations that have received assistance for participation in the women's business centers program and that: (1) are in the final year of a five-year project; or (2) have completed a financed project and continue to provide assistance to women entrepreneurs. Outlines grant participation conditions, including certification that the organization is private and nonprofit and submission of a five-year plan that demonstrates the organization's ability to serve women business owners or potential owners and to provide training and services to women who are both socially and economically disadvantaged. Outlines matching fund requirements and application procedures. Authorizes appropriations for FY 2000 through 2003 for the pilot program, earmarking specified amounts for administrative costs related to the selection of grant participants and sustainability grants.
we study scaling limits of _internal diffusion limited aggregation_ (``internal dla''), a growth model introduced in @xcite. in internal dla, one inductively constructs an *occupied set* @xmath8 for each time @xmath9 as follows: begin with @xmath10 and @xmath11, and let @xmath12 be the union of @xmath13 and the first place a random walk from the origin hits @xmath14. the purpose of this paper is to study the growing family of sets @xmath13. following the pioneering work of @xcite, it is by now well known that, for large @xmath1, the set @xmath13 approximates an origin-centered euclidean lattice ball @xmath15 (where @xmath16 is such that @xmath17 has volume @xmath1). the authors recently showed that this is true in a fairly strong sense @xcite: the maximal distance from a point where @xmath18 is non-zero to @xmath19 is a.s. @xmath2 if @xmath3 and @xmath4 if @xmath5. in fact, if @xmath20 is large enough, the probability that this maximal distance exceeds @xmath21 (or @xmath22 when @xmath5) decays faster than any fixed (negative) power of @xmath1. some of these results are obtained by different methods in @xcite. this paper will ask what happens if, instead of considering the maximal distance from @xmath19 at time @xmath1, we consider the ``average error'' at time @xmath1 (allowing inner and outer errors to cancel each other out). it turns out that in a distributional ``average fluctuation'' sense, the set @xmath13 deviates from @xmath17 by only a constant number of lattice spaces when @xmath23 and by an even smaller amount when @xmath5. appropriately normalized, the fluctuations of @xmath13, taken over time and space, define a distribution on @xmath24 that converges in law to a variant of the gaussian free field (gff): a random distribution on @xmath24 that we will call the *augmented gaussian free field*. (it can be constructed by defining the gff in spherical coordinates and replacing variances associated to spherical harmonics of degree @xmath25 by variances associated to spherical harmonics of degree @xmath26; see [ss.augmentedgff].) the ``augmentation'' appears to be related to a damping effect produced by the mean curvature of the sphere (as discussed below). [figure caption (figure not shown): internal dla clusters on a cylinder, with particles started uniformly on @xmath27. though we do not prove this here, we expect the cluster boundaries to be approximately flat cross-sections of the cylinder, and we expect the fluctuations to scale to the _ordinary_ gff on the half cylinder as @xmath28.] to our knowledge, no central limit theorem of this kind has been previously conjectured in either the physics or the mathematics literature. the appearance of the gff and its ``augmented'' variants is a particular surprise. (it implies that internal dla fluctuations, although very small, have long-range correlations and that, up to the curvature-related augmentation, the fluctuations in the direction transverse to the boundary of the cluster are of a similar nature to those in the tangential directions.) nonetheless, the heuristic idea is easy to explain. before we state the central limit theorems precisely ([ss.twostatement] and [ss.generalstatement]), let us explain the intuition behind them. write a point @xmath29 in polar coordinates as @xmath30 for @xmath31 and @xmath32 on the unit sphere. suppose that at each time @xmath1 the boundary of @xmath13 is approximately parameterized by @xmath33 for a function @xmath34 defined on the unit sphere.
write @xmath35 where @xmath36 is the volume of the unit ball in @xmath24 . the @xmath37 term measures the deviation from circularity of the cluster @xmath13 in the direction @xmath32 . how do we expect @xmath38 to evolve in time ? to a first approximation , the angle at which a random walk exits @xmath13 is a uniform point on the unit sphere . if we run many such random walks , we obtain a sort of poisson point process on the sphere , which has a scaling limit given by space - time white noise on the sphere . however , there is a smoothing effect ( familiar to those who have studied the continuum analog of internal dla : the famous hele - shaw model for fluid insertion , see the reference text @xcite ) coming from the fact that places where @xmath38 is small are more likely to be hit by the random walks , hence more likely to grow in time . there is also secondary damping effect coming from the mean curvature of the sphere , which implies that even if ( after a certain time ) particles began to hit all angles with equal probability , the magnitude of @xmath38 would shrink as @xmath1 increased and the existing fluctuations were averaged over larger spheres . the white noise should correspond to adding independent brownian noise terms to the spherical fourier modes of @xmath38 . the rate of smoothing / damping in time should be approximately given by @xmath39 for some linear operator @xmath40 mapping the space of functions on the unit sphere to itself . since the random walks approximate brownian motion ( which is rotationally invariant ) , we would expect @xmath40 to commute with orthogonal rotations , and hence have spherical harmonics as eigenfunctions . with the right normalization and parameterization , it is therefore natural to expect the spherical fourier modes of @xmath38 to evolve as independent brownian motions subject to linear `` restoration forces '' ( a.k.a . ornstein - uhlenbeck processes ) where the magnitude of the restoration force depends on the degree of the corresponding spherical harmonic . it turns out that the restriction of the ( ordinary or augmented ) gff on @xmath24 to a centered volume @xmath1 sphere evolves in time @xmath1 in a similar way . of course , as stated above , the `` spherical fourier modes of @xmath38 '' have not really been defined ( since the boundary of @xmath13 is complicated and generally _ can not _ be parameterized by @xmath41 ) . in the coming sections , we will define related quantities that ( in some sense ) encode these spherical fourier modes and are easy to work with . these quantities are the martingales obtained by summing discrete harmonic polynomials over the cluster @xmath13 . the heuristic just described provides intuitive interpretations of the results given below . theorem [ t.fluctuations ] , for instance , identifies the weak limit as @xmath42 of the internal dla fluctuations from circularity at a fixed time @xmath1 : the limit is the two - dimensional augmented gaussian free field restricted to the unit circle @xmath43 , which can be interpreted in a distributional sense as the random fourier series @xmath44\ ] ] where @xmath45 for @xmath46 and @xmath47 for @xmath48 are independent standard gaussians . the ordinary two - dimensional gff restricted to the unit circle is similar , except that @xmath49 is replaced by @xmath50 . the series unlike its counterpart for the one - dimensional gaussian free field , which is a variant of brownian bridge is a.s . 
divergent , which is why we use the dual formulation explained in [ ss.generalstatement ] . the dual formulation of amounts to a central limit theorem , saying that for each @xmath48 the real and imaginary parts of @xmath51 converge in law as @xmath52 to normal random variables with variance @xmath53 ( and that @xmath54 and @xmath55 are asymptotically uncorrelated for @xmath56 ) . see @xcite for numerical data on the moments @xmath55 in large simulations . before we set about formulating our central limit theorems precisely , we mention a previously overlooked fact . suppose that we run internal dla in continuous time by adding particles at poisson random times instead of at integer times : this process we will denote by @xmath57 ( or often just @xmath58 ) where @xmath59 is the counting function for a poisson point process in the interval @xmath60 $ ] ( so @xmath59 is poisson distributed with mean @xmath1 ) . we then view the entire history of the idla growth process as a ( random ) function on @xmath61 , which takes the value @xmath62 or @xmath63 on the pair @xmath64 accordingly as @xmath65 or @xmath66 . write @xmath67 for the set of functions @xmath68 such that @xmath69 whenever @xmath70 , endowed with the coordinate - wise partial ordering . let @xmath71 be the distribution of @xmath72 , viewed as a probability measure on @xmath67 . [ t.fkg ] _ ( fkg inequality ) _ for any two increasing functions @xmath73 , the random variables @xmath74 and @xmath75 are nonnegatively correlated . one example of an increasing function is the total number @xmath76 of occupied sites in a fixed subset @xmath77 at a fixed time @xmath1 . one example of a decreasing function is the smallest @xmath1 for which all of the points in @xmath78 are occupied . intuitively , theorem [ t.fkg ] means that on the event that one point is absorbed at a late time , it is conditionally more likely for all other points to be absorbed late . the fkg inequality is an important feature of the discrete and continuous gaussian free fields @xcite , so it is interesting ( and reassuring ) that it appears in internal dla at the discrete level . note that sampling a continuous time internal dla cluster at time @xmath1 is equivalent to first sampling a poisson random variable @xmath79 with expectation @xmath1 and then sampling an ordinary internal dla cluster of size @xmath79 . ( by the central limit theorem , @xmath80 has order @xmath81 with high probability . ) although using continuous time amounts to only a modest time reparameterization ( chosen independently of everything else ) it is aesthetically natural . our use of `` white noise '' in the heuristic of the previous section implicitly assumed continuous time . ( otherwise the total integral of @xmath38 would be deterministic , so the noise would have to be conditioned to have mean zero at each time . ) [ cols="^,^ " , ] the restriction to harmonic @xmath82 ( as opposed to a more general test function @xmath83 ) seems to be necessary in large dimensions because otherwise the derivative of the test function along @xmath43 appears to have a non - trivial effect on ( see [ ss.fixedtimeproof ] ) . this is because has a lot of positive mass just outside of the unit sphere and a lot of negative mass just inside the unit sphere . 
it may be possible to formulate a version of theorem [ t.highdconvergence ] ( involving some modification of the `` mean shape '' described by @xmath84 ) that uses test functions that depend only on @xmath32 in a neighborhood of the sphere ( instead of using only harmonic test functions ) , but we will not address this point here . deciding whether theorem [ t.lateness ] as stated extends to higher dimensions requires some number theoretic understanding of the extent to which the discrepancies between @xmath84 and @xmath85 ( as well as the errors that come from replacing a @xmath86 with a smooth test function @xmath83 ) average out when one integrates over a range of times . we will not address these points here either . we may write a general vector in @xmath24 as @xmath30 where @xmath87 and @xmath88 . we write the laplacian in spherical coordinates as @xmath89 a polynomial @xmath90 $ ] is called _ harmonic _ if @xmath91 is the zero polynomial . let @xmath92 denote the space of all homogenous harmonic polynomials in @xmath93 $ ] of degree @xmath25 , and let @xmath94 denote the space of functions on @xmath95 obtained by restriction from @xmath92 . if @xmath96 , then we can write @xmath97 for a function @xmath98 , and setting to zero at @xmath99 yields @xmath100 i.e. , @xmath101 is an eigenfunction of @xmath102 with eigenvalue @xmath103 . note that continues to be zero if we replace @xmath25 with the negative number @xmath104 , since the expression @xmath105 is unchanged by replacing @xmath25 with @xmath106 . thus , @xmath107 is also harmonic on @xmath108 . now , suppose that @xmath101 is normalized so that @xmath109 by scaling , the integral of @xmath110 over @xmath111 is thus given by @xmath112 . the @xmath113 norm on all of @xmath114 is then given by @xmath115 a standard identity states that the dirichlet energy of @xmath101 , as a function on @xmath95 , is given by the @xmath113 inner product @xmath116 . the square of @xmath117 is given by the square of its component along @xmath95 plus the square of its radial component . we thus find that the dirichlet energy of @xmath118 on @xmath114 is given by @xmath119 now suppose that we fix the value of @xmath118 on @xmath111 as above but harmonically extend it outside of @xmath114 by writing @xmath120 for @xmath121 . then the dirichlet energy of @xmath118 outside of @xmath114 can be computed as @xmath122 which simplifies to @xmath123 combining the inside and outside computations in the case @xmath124 , we find that the harmonic extension @xmath125 of the function given by @xmath101 on @xmath95 has dirichlet energy @xmath126 . if we decompose the gff into an orthonormal basis that includes this @xmath125 , we find that the component of @xmath125 is a centered gaussian with variance @xmath127 . if we replace @xmath125 with the harmonic extension of @xmath128 ( defined on @xmath111 ) , then by scaling the corresponding variance becomes @xmath129 . now in the augmented gff the variance is instead given by , which amounts to replacing @xmath127 with @xmath130 . considering the component of @xmath128 in a basis expansion the space of functions on @xmath111 requires us to divide by @xmath131 ( to account for the scaling of @xmath118 ) and by @xmath132 ( to account for the larger integration area ) , so that we again obtain a variance of @xmath133 for the augmented gff , versus @xmath129 for the gff . 
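as a concrete illustration of the comparison just made, the following sketch samples truncated versions of the two random fourier series on the unit circle in two dimensions, assuming (up to a common constant) the weight 1/sqrt(k) for the degree-k mode of the ordinary gff and 1/sqrt(k+1) for the augmented field, as read off from the series and variance computations above. the mode cutoff and sample sizes are arbitrary choices made only for the demonstration (the full series diverges pointwise); for the degree-k mode the ratio of augmented to ordinary variance should then be close to k/(k+1), e.g. 3/4 for k = 3.

```python
import numpy as np

rng = np.random.default_rng(0)
K = 200                                   # mode cutoff (the full series diverges pointwise)
theta = np.linspace(0.0, 2.0 * np.pi, 512, endpoint=False)
modes = np.arange(1, K + 1)
COS = np.cos(np.outer(modes, theta))      # cos(k*theta), one row per mode k
SIN = np.sin(np.outer(modes, theta))

def sample_circle_field(augmented: bool) -> np.ndarray:
    """One truncated sample of the field restricted to the unit circle.

    Degree-k mode weight: 1/sqrt(k+1) (augmented) vs 1/sqrt(k) (ordinary),
    matching the Fourier series and variance comparison in the text.
    """
    w = 1.0 / np.sqrt(modes + 1) if augmented else 1.0 / np.sqrt(modes)
    a = rng.standard_normal(K)
    b = rng.standard_normal(K)
    return (w * a) @ COS + (w * b) @ SIN

def mode_coefficient(field: np.ndarray, k: int) -> float:
    """Cosine Fourier coefficient of degree k (a smooth functional of the field)."""
    return 2.0 * float(np.mean(field * np.cos(k * theta)))

n_samples, k = 2000, 3
aug = [mode_coefficient(sample_circle_field(True), k) for _ in range(n_samples)]
ordinary = [mode_coefficient(sample_circle_field(False), k) for _ in range(n_samples)]
print("variance ratio (augmented / ordinary) for k = 3:",
      np.var(aug) / np.var(ordinary), " expected ~ k/(k+1) = 0.75")
```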
in some ways , the augmented gff is very similar to the ordinary gff : when we restrict attention to an origin - centered annulus , it is possible to construct independent gaussian random distributions @xmath134 , @xmath135 , and @xmath136 such that @xmath134 has the law of a constant multiple of the gff , @xmath137 has the law of the augmented gff , and @xmath138 has the law of the ordinary gff . in light of theorem [ t.fluctuations ] , the following implies that ( up to absolute continuity ) the scaling limit of fixed - time @xmath13 fluctuations can be described by the gff itself . [ p.abscont ] when @xmath3 , the law @xmath139 of the restriction of the gff to the unit circle ( modulo additive constant ) is absolutely continuous w.r.t . the law @xmath140 of the restriction of the augmented gff restricted to the unit circle . the relative entropy of a gaussian of density @xmath141 with respect to a gaussian of density @xmath142 is given by @xmath143 note that @xmath144 , and in particular @xmath145 . thus the relative entropy of a centered gaussian of variance @xmath62 with respect to a centered gaussian of variance @xmath146 is @xmath147 . this implies that the relative entropy of @xmath140 with respect to @xmath139 restricted to the @xmath148th component @xmath149 is @xmath150 . the same holds for the relative entropy of @xmath139 with respect to @xmath140 . because the @xmath149 are independent in both @xmath140 and @xmath139 , the relative entropy of one of @xmath140 and @xmath139 with respect to the other is the sum of the relative entropies of the individual components , and this sum is finite . we recall that increasing functions of a poisson point process are non - negatively correlated @xcite . ( this is easily derived from the more well known statement @xcite that increasing functions of independent bernoulli random variables are non - negatively correlated . ) let @xmath140 be the simple random walk probability measure on the space @xmath151 of walks @xmath152 beginning at the origin . then the randomness for internal dla is given by a rate - one poisson point process on @xmath153 where @xmath139 is lebesgue measure on @xmath154 . a realization of this process is a random collection of points in @xmath155 . it is easy to see ( for example , using the abelian property of internal dla discovered by diaconis and fulton @xcite ) that adding an additional point @xmath156 increases the value of @xmath57 for all times @xmath1 . the @xmath57 are hence increasing functions of the poisson point process , and are non - negatively correlated . since @xmath157 and @xmath158 are increasing functions of the @xmath57 , they are also increasing functions of the point process and are thus non - negatively correlated . let @xmath159 be a polynomial that is harmonic on @xmath24 , that is @xmath160 in this section we give a recipe for constructing a polynomial @xmath161 that closely approximates @xmath82 and is discrete harmonic on @xmath0 , that is , @xmath162 where @xmath163 is the symmetric second difference in direction @xmath164 . the construction described below is nearly the same as the one given by lovsz in @xcite , except that we have tweaked it in order to obtain a smaller error term : if @xmath82 has degree @xmath165 , then @xmath166 has degree at most @xmath167 instead of @xmath168 . discrete harmonic polynomials have been studied classically , primarily in two variables : see for example duffin @xcite , who gives a construction based on discrete contour integration . 
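before the formal construction, it may help to see discrete harmonicity checked numerically. the snippet below evaluates the symmetric second-difference laplacian of the classical harmonic polynomials re(x+iy)^k on a patch of the square lattice: the low-degree ones happen to be discrete harmonic exactly as they stand, while at higher degree a nonzero defect appears, which is the kind of defect the construction below corrects. (the polynomials re(x+iy)^k are standard two-dimensional examples chosen only for illustration; the paper's own family is produced by the map introduced next.)

```python
import numpy as np

def discrete_laplacian(f, x, y):
    """Sum of symmetric second differences in the two coordinate directions."""
    return f(x + 1, y) + f(x - 1, y) + f(x, y + 1) + f(x, y - 1) - 4 * f(x, y)

def re_zk(k):
    """Real part of (x + iy)^k, a classical harmonic polynomial on R^2."""
    return lambda x, y: ((x + 1j * y) ** k).real

# Evaluate the defect on a patch of Z^2; zero means discrete harmonic there.
xs, ys = np.meshgrid(np.arange(-10, 11), np.arange(-10, 11))
for k in range(1, 6):
    defect = np.max(np.abs(discrete_laplacian(re_zk(k), xs, ys)))
    print(f"degree {k}: max |discrete laplacian of Re(x+iy)^{k}| on the patch = {defect:g}")
```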
consider the linear map @xmath169 \to \r[x_1,\ldots , x_d]\ ] ] defined on monomials by @xmath170 where we define @xmath171 [ discretepolynomial ] if @xmath172 $ ] is a polynomial of degree @xmath165 that is harmonic on @xmath24 , then the polynomial @xmath173 is discrete harmonic on @xmath0 , and @xmath166 is a polynomial of degree at most @xmath167 . an easy calculation shows that @xmath174 from which we see that @xmath175 = \xi [ \frac{\partial^2}{\partial x_i^2 } \psi ] .\ ] ] if @xmath82 is harmonic , then the right side vanishes when summed over @xmath176 , which shows that @xmath177 $ ] is discrete harmonic . note that @xmath178 is even for @xmath165 even and odd for @xmath165 odd . in particular , @xmath179 has degree at most @xmath167 , which implies that @xmath180 has degree at most @xmath167 . to obtain a discrete harmonic polynomial @xmath86 on the lattice @xmath181 , we set @xmath182 where @xmath165 is the degree of @xmath82 . for each fixed @xmath82 with @xmath183 the process @xmath184 is a martingale in @xmath1 . each time a new particle is added , we can imagine that it performs brownian motion on the grid ( instead of a simple random walk ) , which turns @xmath185 into a continuous martingale , as in @xcite . by the martingale representation theorem ( see ( * ? ? ? * theorem v.1.6 ) ) , we can write @xmath186 , where @xmath187 is a standard brownian motion and @xmath188 is the quadratic variation of @xmath185 on the interval @xmath60 $ ] . to show that @xmath189 converges in law as @xmath28 to a gaussian with variance @xmath190 , it suffices to show that for fixed @xmath1 the random variable @xmath191 converges in law to @xmath192 . by standard riemann integration and the @xmath13 fluctuation bounds in @xcite ( the weaker bounds of @xcite would also suffice here ) we know that @xmath193 converges in law to @xmath194 as @xmath28 . thus it suffices to show that @xmath195 converges in law to zero . this expression is actually a martingale in @xmath1 . its expected square is the sum of the expectations of the squares of its increments , each of which is @xmath196 . the overall expected square of is thus @xmath197 , which indeed tends to zero . recall and note that if we replace @xmath1 with @xmath59 , this does not change the convergence in law of @xmath191 when @xmath183 . however , when @xmath198 , it introduces an asymptotically independent source or randomness which scales to a gaussian of variance @xmath199 ( simply by the central limit theorem for the poisson point process ) , and hence remains correct in this case . similarly , suppose we are given @xmath200 and functions @xmath201 . the same argument as above , using the martingale in @xmath1 , @xmath202 implies that @xmath203 converges in law to a gaussian with variance @xmath204 the theorem now follows from a standard fact about gaussian random variables on a finite dimensional vector spaces ( proved using characteristic functions ) : namely , a sequence of random variables on a vector space converges in law to a multivariate gaussian if and only if all of the one - dimensional projections converge . the law of @xmath205 is determined by the fact that it is a centered gaussian with covariance given by . recall that @xmath13 for @xmath206 denotes the discrete - time idla cluster with exactly @xmath1 sites , and @xmath207 for @xmath208 denotes the continuous - time cluster whose cardinality is poisson - distributed with mean @xmath1 . 
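the martingale property used above is easy to observe in a small simulation. the sketch below grows discrete-time internal dla clusters on the two-dimensional lattice exactly as in the definition (each particle performs a simple random walk from the origin until it first exits the current cluster), and sums the discrete harmonic polynomial x^2 - y^2, which vanishes at the origin, over the cluster; this particular polynomial is a convenient choice made here, not the paper's normalized family. the empirical mean of the sum stays small compared with its spread, reflecting the fact that it is a mean-zero martingale in the cluster size. the cluster size and number of trials are arbitrary small values chosen so the demo runs quickly; sampling the cluster size from a poisson distribution of mean t instead would give the continuous-time clusters just recalled.

```python
import random

def internal_dla(n_sites, rng):
    """Grow a discrete-time internal DLA cluster of n_sites sites on Z^2.

    Each new particle does a simple random walk from the origin until it first
    exits the current cluster, and is added at that exit site (the inductive
    construction described in the text).
    """
    cluster = {(0, 0)}
    steps = ((1, 0), (-1, 0), (0, 1), (0, -1))
    while len(cluster) < n_sites:
        x, y = 0, 0
        while (x, y) in cluster:
            dx, dy = rng.choice(steps)
            x, y = x + dx, y + dy
        cluster.add((x, y))
    return cluster

def psi(x, y):
    """x^2 - y^2: discrete harmonic on Z^2 and zero at the origin."""
    return x * x - y * y

rng = random.Random(1)
n_sites, trials = 300, 50
sums = [sum(psi(x, y) for x, y in internal_dla(n_sites, rng)) for _ in range(trials)]
mean = sum(sums) / trials
rms = (sum(s * s for s in sums) / trials) ** 0.5
print(f"sum of psi over {n_sites}-site clusters: mean ~ {mean:.1f}, rms ~ {rms:.1f}")
# The mean is small relative to the rms, as expected for a mean-zero martingale.
```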
define @xmath209 and @xmath210 fix @xmath211 , and consider a test function of the form @xmath212 where the @xmath213 are smooth functions supported in an interval @xmath214 . we will assume , furthermore , that @xmath215 is real - valued . that is , the complex numbers @xmath213 satisfy @xmath216 it follows from lemma [ dual ] below ( with @xmath220 , @xmath221 ) , that theorem [ gffdisk0 ] can be interpreted as saying that @xmath222 tends weakly to the gaussian random distribution associated to the hilbert space @xmath223 with norm @xmath224\frac{dr}{r}\ ] ] where @xmath225 and @xmath226 . ( the subscript @xmath227 means nonradial : @xmath223 is the orthogonal complement of radial functions in the sobolev space @xmath228 . ) [ dual ] let @xmath229 and let @xmath82 be a real valued function on @xmath230 . denote @xmath231 where the supremum is over real - valued @xmath118 , compactly supported and subject to the constraint @xmath232 then @xmath233 in the case @xmath234 , replace @xmath118 in @xmath235 with @xmath236 change order of integration and apply the cauchy - schwarz inequality . for the case @xmath237 , multiply by the appropriate factors @xmath238 and @xmath239 to deduce this from the case @xmath234 . theorem [ gffdisk1 ] is a restatement of theorem [ t.lateness ] . ( as in the proof of theorem [ t.highdconvergence ] , the convergence in law of all one - dimensional projections to the appropriate normal random variables implies the corresponding result for the joint distribution of any finite collection of such projections . ) theorem [ gffdisk1 ] says that @xmath245 tends weakly to a gaussian distribution for the hilbert space @xmath228 with the norm @xmath246\frac{dr}{r}.\ ] ] by way of comparison , the usual gaussian free field is the one associated to the dirichlet norm @xmath247\frac{dr}{r}.\ ] ] comparing these two norms , we see that the second term in @xmath248 has an additional @xmath249 , hence our choice of the term `` augmented gaussian free field . '' as derived in [ ss.augmentedgff ] , this @xmath249 results in a smaller variance @xmath250 in each spherical mode of degree @xmath25 of the augmented gff , as compared to @xmath251 for the usual gff . the surface area of the sphere is implicit in the normalization , and is accounted for here in the factors @xmath252 above . let @xmath254 , and for @xmath48 let @xmath255 , where @xmath256\ ] ] is the discrete harmonic polynomial associated to @xmath257 as described in [ s.polynomials ] . the sequence @xmath258 begins @xmath259 for instance , to compute @xmath260 , we expand @xmath261\ ] ] and apply @xmath262 to each monomial , obtaining @xmath263\ ] ] which simplifies to @xmath264 . one readily checks that this defines a discrete harmonic function on @xmath265 . ( in fact , @xmath266 is itself discrete harmonic , but @xmath267 is not for @xmath268 . ) to define @xmath258 for negative @xmath165 , we set @xmath269 . part ( a ) of this lemma was proved by van der corput in the 1920s ( see @xcite , theorem 87 p. 484 ) . part ( b ) follows from the same method , as proved below . part ( c ) follows from part ( b ) and the stronger estimate of lemma [ discretepolynomial ] , @xmath280 for @xmath281 ( and @xmath282 ) . we prove part ( b ) in all dimensions . let @xmath283 be a harmonic polynomial on @xmath24 of homogeneous of degree @xmath165 . normalize so that @xmath284 where @xmath285 is the unit ball . in this discussion @xmath165 will be fixed and the constants are allowed to depend on @xmath165 . 
consider @xmath294 a smooth , radial function on @xmath24 with integral @xmath62 supported in the unit ball . then define @xmath295 , the characteristic function of the unit ball . denote @xmath296 then @xmath297 this is because @xmath298 is nonzero only in the annulus of width @xmath299 around @xmath300 in which ( by the van der corput bound ) there are @xmath301 lattice points . the poisson summation formula implies @xmath302*\hat p_k(\xi)/r^k\ ] ] in the sense of distributions . the fourier transform of a polynomial is a derivative of the delta function , @xmath303 . because @xmath286 and @xmath178 is harmonic , its average with respect to any radial function is zero . this is expressed in the dual variable as the fact that when @xmath304 , @xmath305 ) = 0\ ] ] so our sum equals @xmath306*\hat p_k ( \xi)/r^k\ ] ] next look at @xmath307 @xmath308 all the terms in which fewer derivatives fall on @xmath309 and more fall on @xmath310 give much smaller expressions : the factor @xmath311 corresponding to each such differentiation is replaced by an @xmath312 . the asymptotics of the oscillatory integral above are well known . for any fixed polynomial @xmath313 they are of the same order of magnitude as for @xmath314 , namely @xmath315 this is proved by the method of stationary phase and can also be derived from well known asymptotics of bessel functions . denote @xmath320 applying the formula above for @xmath321 , @xmath322 to estimate the error term @xmath323 , note first that , since the coefficients @xmath213 are supported in a fixed annulus , the integrand above is supported in the range @xmath324 . furthermore , by @xcite , there is an absolute constant @xmath20 such that for all sufficiently large @xmath311 and all @xmath1 in this range , the difference @xmath325 is supported on the set of @xmath326 such that @xmath327 . thus @xmath328 moreover , lemma [ discrete - error ] applies and @xmath329 next , lemma [ vandercorput](a ) says ( since @xmath330 ) @xmath331 thus replacing @xmath82 by @xmath332 gives an additional error of size at most @xmath333 in all , @xmath334 for @xmath335 , consider the process @xmath336 note that @xmath337 as @xmath338 . note also that lemma [ vandercorput](c ) implies @xmath339 because the @xmath258 are discrete harmonic and @xmath340 for all @xmath341 , @xmath342 is a martingale . it remains to show that @xmath343 in law . as outlined below , this will follow from the martingale central limit theorem ( see , e.g. , @xcite or @xcite ) . for sufficiently large @xmath311 , the difference @xmath344 is nonzero only for @xmath345 in the range @xmath346 ; and @xmath347 . we now show that this implies @xmath348 so that the martingale central limit theorem applies . to prove this , observe that @xmath349 where @xmath350 is the @xmath351th point of @xmath13 . then @xmath352 implies @xmath353 , and hence @xmath354 recalling that @xmath355 unless @xmath356 , we have @xmath357 which confirms the claim . because @xmath13 fills the lattice @xmath265 as @xmath358 , we have @xmath359 we prove this in three steps : replace @xmath360 by @xmath267 ( or @xmath361 if @xmath362 ) ; replace the lower limit @xmath363 by @xmath364 ; and replace the sum of @xmath350 over lattice sites with the integral with respect to lebesgue measure in the complex @xmath350-plane . we begin the proof by noting that the error term introduced by replacing @xmath258 with @xmath267 is @xmath365 in the integral this is majorized by @xmath366 since there are @xmath367 such terms , this change contributes order @xmath368 to the sum .
next , we change the lower limit from @xmath363 to @xmath369 . since @xmath370 , the integral inside @xmath371 is changed by @xmath372 thus the change in the whole expression is majorized by the order of the cross term @xmath373 again there are @xmath374 terms in the sum over @xmath350 , so the sum of the errors is @xmath375 . lastly , we replace the value at each site @xmath376 by the integral @xmath377 where @xmath378 is the unit square centered at @xmath376 and @xmath379 . because the square has area @xmath62 , the term in the lattice sum is the same as this integral with @xmath380 replaced by @xmath376 at each occurrence . since @xmath381 , @xmath382 after we divide by @xmath383 , the order of error is @xmath384 . adding all the errors contributes at most order @xmath384 to the sum . we must also take into account the change in the lower limit of the integral , in which @xmath385 is replaced by @xmath386 . since @xmath381 , @xmath387 recall that in the previous step we changed the lower limit by @xmath388 . thus by the same argument , this smaller change gives rise to an error of order @xmath384 in the sum over @xmath376 . the proof is now reduced to evaluating @xmath389 integrating in @xmath32 and changing variables from @xmath7 to @xmath390 , @xmath391 then change variables from @xmath1 to @xmath392 to obtain @xmath393 this ends the proof of theorem [ gffdisk0 ] . the proof of theorem [ gffdisk1 ] follows the same idea . we replace @xmath13 by the poisson time region @xmath394 ( for @xmath395 ) , and we need to find the limit as @xmath217 of @xmath396 the error terms in the estimate showing that this quantity is within @xmath397 of @xmath398 are nearly the same as in the previous proof . we describe briefly the differences . the difference between poisson time and ordinary counting is @xmath399 . it follows that for @xmath400 , @xmath401 as in the previous proof for @xmath363 . further errors are also controlled , since we then have the estimate analogous to the one above for @xmath13 , namely @xmath402 we consider the continuous time martingale @xmath403 instead of using the martingale central limit theorem , we use the martingale representation theorem . this says that the martingale @xmath404 , when reparameterized by its quadratic variation , has the same law as brownian motion . we must show that almost surely the quadratic variation of @xmath185 on @xmath405 is @xmath406 . @xmath407 integrating with respect to @xmath345 gives the quadratic variation @xmath406 after a suitable change of variable as in the previous proof . theorem [ t.fluctuations ] follows almost immediately from the @xmath3 case of theorem [ t.highdconvergence ] and the estimates above . consider @xmath408 where @xmath409 is as in . what happens if we replace @xmath83 with a function @xmath410 that is discrete harmonic on the rescaled mesh @xmath411 within a @xmath412 neighborhood of @xmath413 ? clearly , if @xmath83 is smooth , we will have @xmath414 . since there are at most @xmath415 non - zero terms in , the discrepancy is @xmath416 which tends to zero as long as @xmath417 . the fact that replacing @xmath418 with @xmath409 has a negligible effect follows from the above estimates when @xmath3 . this may also hold when @xmath419 , but we will not prove it here . instead we remark that theorem [ t.fluctuations ] holds in three dimensions provided that we replace with , and that the theorem as stated probably fails in higher dimensions even if we make such a replacement .
the reason is that is positive at points slightly outside of @xmath420 ( or outside of the support of @xmath84 ) and negative at points slightly inside . if we replace a discrete harmonic polynomial @xmath82 with a function that agrees with @xmath82 on @xmath413 but has a different derivative along portions of @xmath43 , this may produce a non - trivial effect ( by the discussion above ) when @xmath421 . r. durrett, _probability: theory and examples_, 2nd ed., 1995. c. m. fortuin, p. w. kasteleyn and j. ginibre, correlation inequalities on some partially ordered sets, _comm. math. phys._ *22*: 89-103, 1971. a. ivić, e. krätzel, m. kühleitner, and w. g. nowak, lattice points in large regions and related arithmetic functions: recent developments in a very classic topic, _elementare und analytische zahlentheorie_, schr. wiss. ges. johann wolfgang goethe univ. frankfurt am main, 20, 89-128, 2006. d. jerison, l. levine and s. sheffield, internal dla: slides and audio. _midrasha on probability and geometry: the mathematics of oded schramm._ http://iasmac31.as.huji.ac.il:8080/groups/midrasha_14/weblog/855d7/images/bfd65.mov, 2009. d. jerison, l. levine and s. sheffield, logarithmic fluctuations for internal dla. d. jerison, l. levine and s. sheffield, internal dla in higher dimensions. http://arxiv.org/abs/1012.3453 [arxiv:1012.3453]. g. lawler, _intersections of random walks_, birkhäuser, 1996. l. levine and y. peres, strong spherical asymptotics for rotor-router aggregation and the divisible sandpile, _potential anal._ *30* (2009), 1-27. http://arxiv.org/abs/0704.0688 [arxiv:0704.0688]. l. lovász, discrete analytic functions: an exposition, _surveys in differential geometry_ *ix*: 241-273, 2004.
in previous works , we showed that the internal dla cluster on @xmath0 with @xmath1 particles is almost surely spherical up to a maximal error of @xmath2 if @xmath3 and @xmath4 if @xmath5 . this paper addresses `` average error '' : in a certain sense , the average deviation of internal dla from its mean shape is of _ constant _ order when @xmath3 and of order @xmath6 ( for a radius @xmath7 cluster ) in general . appropriately normalized , the fluctuations ( taken over time and space ) scale to a variant of the gaussian free field .
catastrophic polyethylene failure is a rare complication of ceramic-on-polyethylene total hip arthroplasty, owing to the favorable tribological characteristics of ceramic. failure of the polyethylene liner can be disastrous, increasing periprosthetic osteolysis, metallosis, and the risk of dislocation. complications associated with ceramic-on-polyethylene articulations have been studied extensively; however, only a few reports have described its catastrophic wear. we report such a case of complete wear of the acetabular liner in a ceramic-on-polyethylene prosthesis in a 57-year-old male. a 57-year-old male with a history of bilateral total hip arthroplasty presented to our institution with bilateral hip pain, worse on the right. range of motion of the right hip was limited by pain at the extremes of motion. radiographs revealed severe osteolysis, heterotopic ossification, complete wear of the acetabular liner, bony impingement of the femoral greater trochanter on the acetabular rim, and superior migration of the femoral head. revision of the acetabular components was performed, which successfully alleviated the patient's symptoms. failure of the ceramic-on-polyethylene liner in our patient is due to the use of a non-crosslinked polyethylene liner, a highly active lifestyle, and poor follow up. arthroplasty surgeons should be aware of this complication, especially in highly active patients with a conventional polyethylene liner and chronic hip pain. failure of the polyethylene liner depends on patient-, implant-, and surgery-related factors. when metal femoral heads are used, there is an increased risk of complete wear through of the polyethylene acetabular liner and metal shell, increasing the risk of wear-induced osteolysis, which is one of the most important factors contributing to failure of total hip arthroplasty. ceramic femoral heads were introduced in an effort to reduce polyethylene wear: ceramic's higher resistance to scratching compared with cobalt-chromium (cocr) and its inert behavior in an aqueous environment contribute to the lower linear and volumetric polyethylene wear rates seen in ceramic-on-polyethylene bearings compared with cocr-on-polyethylene bearings. alumina and zirconia comprise the two main ceramic bearings available, with the former having lower polyethylene wear rates because the phase transformation seen in zirconia causes increased surface roughness. several studies have shown alumina ceramic heads to produce consistent polyethylene wear rates of 0.03 mm/year [6-8], while other studies report variable outcomes, with polyethylene wear of 0.01-0.34 mm/year. wear rates of 0.2 mm/year and higher have been reported, and reports in the literature of catastrophic wear of the ceramic-on-polyethylene articulation have implicated acetabular inclination >45°, use of gamma-sterilized polyethylene, increased activity level, age <50, and backside wear as causes of failure. we report a case of catastrophic polyethylene failure in which the ceramic femoral head completely wore through the polar region of the polyethylene liner and imprinted into the metal acetabular shell, resulting in aseptic loosening, severe osteolysis, and periarticular metallosis. we suggest that this failure is mainly due to the use of a non-crosslinked polyethylene liner, the patient's highly active lifestyle, and poor follow up.
a 57-year-old african american male with a history of bilateral total hip arthroplasty presented to our clinic with progressively worsening bilateral hip pain that was worse on the right. the patient underwent bilateral total hip arthroplasty in 1994 and 1995 with a wright technology interseal ceramic-on-non-crosslinked-polyethylene hip system (wright technologies; arlington, tn) for a diagnosis of avascular necrosis. the acetabular cup measured 54 mm (od)/28 mm (id) and the ceramic head measured 28 mm. the patient underwent replacement of the acetabular components and femoral head of his left total hip arthroplasty in 1999 at an outside facility due to implant failure after an athletic injury. in 2004, he began complaining of pain, which was localized to both hips but worse on the right. despite this pain, he remained engaged in light athletic activities 7 days a week and did not follow up with his physician for annual check-ups. one month prior to his presentation, the patient noticed pain radiating into his right groin. on physical examination there was no gross deformity or swelling present; however, there was diffuse tenderness to palpation over both hips. in the right hip, range of motion was limited to 40° of flexion, 10° of internal rotation, and 25° of abduction. in the left hip, range of motion was limited to 70° of flexion, 15° of internal rotation, and 30° of abduction, with pain elicited at the extremes of motion. infection workup was negative, revealing a slightly elevated esr of 16 mm/hr (normal 0-15) but a normal crp of 0.5 mg/dl (normal 0-0.7); hip joint aspiration revealed a cell count of 212 white blood cells (wbc)/cu mm and 63% polymorphonuclear cells. radiographs of both hips revealed severe osteolysis, heterotopic ossification, wear of the acetabular liner, bony impingement of the femoral greater trochanter on the acetabular rim, and superior migration of the femoral head [fig 1]. acetabular cup abduction angles were 21° and 50° in the right and left hip, respectively. femoral offset was 5.6 cm (right) and 4.0 cm (left). due to the severe wear and pain in both hips, with failure to respond to conservative non-surgical treatment, the patient was indicated for staged bilateral total hip arthroplasty revision. [fig. 1 caption, partial: increased wear is seen of the right acetabular liner with new superolateral migration of the left femoral head; c) unilateral radiograph of the right hip at presentation illustrating severe wear of the acetabular liner.] on the right, intraoperative findings revealed complete wear of the ceramic femoral head through the polar aspect of the polyethylene liner into the acetabular cup [fig 2a], causing severe metallic wear and indentation in the cup with severe metallosis and tissue destruction in the periarticular region [fig 2b]. severe osteolysis and bone loss were present behind the acetabular cup, especially at the medial and superior posterior walls as well as part of the posterior column. cancellous bone was used to repair contained areas of gross osteolysis. a new acetabular trabecular metal cup (zimmer; warsaw, in) was press-fit, in the position of maximum bony contact, with the support of multiple screws. a highly cross-linked polyethylene liner was cemented into the acetabular cup at 45° of inclination and 20° of anteversion. the ceramic bearing was replaced with a versys hip system 36 mm femoral head with a 12/14 taper (zimmer; warsaw, in).
revision of the left hip was performed a few months following the right hip revision; moderate proximal femoral osteolysis was present, with moderate polyethylene liner wear. on both the left and the right hip it was decided to retain the femoral stem after it was inspected intraoperatively and found to be stable and in proper version. [fig. 2 caption: a) retrieved components from the right hip illustrating complete wear through the acetabular liner and metallosis on the ceramic bearing; b) intraoperative illustration of severe periarticular metallosis due to long-term wear of the acetabular cup by the ceramic bearing.] analysis of the severely worn right hip components revealed >50% wear on the articular surfaces of the polyethylene insert and ceramic bearing according to the hss scoring system (tables 1 and 2). the articular surface of the ceramic femoral head showed an area of ceramic-on-metal contact, severe metal transfer, and a severe loss of surface smoothness [fig 2c]. the unworn areas of the ceramic head had an average surface roughness (ra) of 1 nm to 2 nm, while the worn areas had an ra of 2500 nm to 3500 nm. with regard to the cemented titanium acetabular cup, the non-articular surface showed abundant bony ingrowth with good interdigitation and no in vivo damage. the articular surface showed an area of central wear caused by the ceramic-on-metal contact. scores in tables 1 and 2 are defined as follows: 0 = no wear, 1 = less than 10%, 2 = 10%-50%, 3 = greater than 50%. postoperative weight-bearing status was foot-flat weight bearing for a period of 6 weeks, with weight bearing progressively increased over a further 6 weeks to full weight bearing at 3 months. at last follow up (12 months), the patient was ambulating with no assistive devices and without any complaints or pain. postoperative radiographs showed no signs of loosening, misalignment, fracture, or increased osteolysis [fig 3]. polyethylene wear debris, and the resulting inflammatory response leading to osteolysis and loosening, is the primary mode of failure limiting the longevity of total hip arthroplasty. in efforts to decrease polyethylene wear debris, improvements have been made to polyethylene articulating surfaces (decreasing polyethylene oxidation and increasing crosslinking) and femoral head bearing surfaces (ceramic-on-polyethylene, ceramic-on-ceramic, and metal-on-metal). compared with cocr, ceramic bearings are harder and more resistant to scratching, have superior surface characteristics and a more rounded surface profile with fewer sharp edges, and are chemically inert in the aqueous environment of the body. ceramic bearings have been shown to cause less polyethylene wear, osteolysis, and loosening compared with cocr. despite their superior resistance to wear, laboratory studies have shown that a single scratch with an ra of 10-20 nm on an articulating femoral head surface can significantly increase polyethylene failure through third-body wear. our patient had an ra of 1-2 nm on unworn areas, making surface roughness a less likely contributor to failure in this case. in catastrophic polyethylene failure the femoral head completely penetrates the polyethylene liner, resulting in articulation of the head with the metal acetabular cup and causing metallosis, osteolysis, and tissue damage in the periarticular area.
catastrophic failure of total hip arthroplasty is a rare occurrence, with reported rates of 0.29% to 10.9%. survival of primary ceramic-on-polyethylene arthroplasty has been promising at up to 10 years, with survival rates of 95% to 98.1%. however, long-term survival rates from 10 to 20 years have been more variable, ranging from 70% to 89%. reports of catastrophic failure of the ceramic-on-polyethylene arthroplasty, including that of our patient, fall into this latter time range, with failure being attributed to patient-, surgical-, and implant-related factors. the use of non-crosslinked polyethylene was a major risk factor for catastrophic failure in this patient. non-crosslinked polyethylene has been shown to have higher wear rates than crosslinked polyethylene. engh and colleagues performed a prospective randomized controlled trial comparing 10-year outcomes in non-crosslinked and crosslinked polyethylene liners and reported a survivorship of 94.7% in non-crosslinked and 100% in crosslinked polyethylene over 10 years. on average, penetration rates were 0.22 mm/year for non-crosslinked liners and 0.06 mm/year for crosslinked liners; 91% of the non-crosslinked group had a wear rate of at least 0.1 mm/year, while 10% of the crosslinked group had a wear rate of at least 0.1 mm/year. a polyethylene wear rate of more than 0.1 mm/year has been correlated with a risk for osteolysis, with a 43% risk in hips with a wear rate of 0.1 mm/year to 0.2 mm/year, an 80% risk in hips with a wear rate of 0.2 mm/year to 0.3 mm/year, and a 100% risk in hips with a wear rate >0.3 mm/year. given this, the use of a non-crosslinked polyethylene liner in our patient placed him at high risk of failure over the course of the 18-year implantation time. many of the reports of catastrophic polyethylene failure have also identified an acetabular abduction angle of more than 45° as a risk factor for polyethylene wear, which is consistent with reports of a 5% to 8% increase in the linear wear rate when the abduction angle is raised from 45° to 55°. however, our patient had an abduction angle of 21°, which would not explain the severe wear seen in this case. other mechanisms of wear related to abduction angle have been proposed. kligman and colleagues suggested that a difference of >18.3° of acetabular inclination between contralateral sides increased the risk for polyethylene wear. this may explain one mechanism of wear in our patient, who had a 29° difference in abduction angles between contralateral sides. the polar polyethylene wear pattern and degree of volumetric wear observed in our patient were surprising [fig 2a]. it is plausible that the continued participation in athletic activities despite the patient's chronic hip pain contributed to a boring mechanism of wear, which together with a 21° abduction angle would concentrate wear towards the center of the cup. charnley and colleagues were the first to report that prosthetic femoral heads bored into polyethylene liners, creating for themselves a cylindrical path. multiple wear vectors most likely added another component to the degree of volumetric wear in this patient. the multiple wear vectors may have been caused by minor loosening and shifting of the acetabular liner, a change in the gait cycle due to pain or muscle weakness, and progressive changes in biomechanical conditions such as contact stress, sliding distance, and friction coefficient.
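two of the numerical rules of thumb cited above lend themselves to a short sketch: the osteolysis-risk bands as a function of linear polyethylene wear rate, and kligman and colleagues' suggested >18.3° side-to-side difference in acetabular inclination. the bands and the 18.3° limit come from the text; the function names, return strings, and boundary handling are illustrative assumptions.

```python
def osteolysis_risk(wear_rate_mm_per_year: float) -> str:
    """Return the reported osteolysis risk band for a given linear wear rate."""
    if wear_rate_mm_per_year <= 0.1:
        return "low (below the ~0.1 mm/year threshold)"
    if wear_rate_mm_per_year <= 0.2:
        return "43% risk"
    if wear_rate_mm_per_year <= 0.3:
        return "80% risk"
    return "100% risk"

def inclination_mismatch(right_deg: float, left_deg: float, limit_deg: float = 18.3) -> bool:
    """True if the side-to-side cup inclination difference exceeds the suggested limit."""
    return abs(right_deg - left_deg) > limit_deg

print(osteolysis_risk(0.22))             # penetration rate reported for non-crosslinked liners
print(inclination_mismatch(21.0, 50.0))  # this patient's 29 degree difference -> True
```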
while it is rare to have failure of the ceramic-on-polyethylene articulation, arthroplasty surgeons should be aware of this complication, especially in highly active patients who have had a total hip arthroplasty with non-crosslinked polyethylene. this case also stresses the importance of routine follow-up, including radiological studies, in total hip arthroplasty, with more vigilant observation recommended in patients with longer than 10-year implantation times and among patients complaining of chronic hip pain. ceramic-on-polyethylene prosthetic wear is a rare complication with grave consequences such as metallosis, osteolysis, and local tissue damage. the use of non-crosslinked polyethylene increases the risk of wear and should be avoided in patients undergoing total hip arthroplasty. close follow-up of total hip arthroplasty patients with non-crosslinked polyethylene should be instituted to prevent complications associated with polyethylene wear. though rare, ceramic-on-polyethylene wear presents a major challenge for the arthroplasty surgeon. routine follow-up for total hip arthroplasty patients is a basic part of patient care. it is essential to evaluate patients complaining of hip pain for component wear, and early detection may help prevent associated complications.
introduction: catastrophic polyethylene failure is a rare complication of ceramic-on-polyethylene total hip arthroplasty due to the favorable tribological characteristics of ceramic. failure of the polyethylene liner can be disastrous, increasing periprosthetic osteolysis, metallosis, and risk of dislocation. complications associated with ceramic-on-polyethylene articulations have been studied extensively; however, only a few reports have described its catastrophic wear. we report such a case of complete wear of the acetabular liner in a ceramic-on-polyethylene prosthesis in a 57-year-old male. case report: a 57-year-old male with a history of bilateral total hip arthroplasty presented to our institution with bilateral hip pain, worst on the right. range of motion was limited by pain in the right hip at the extremes of motion. radiographs revealed severe osteolysis, heterotopic ossification, complete wear of the acetabular liner, bony impingement of the femoral greater trochanter on the acetabular rim, and superior migration of the femoral head. all findings were confirmed intraoperatively. revision of the acetabular components was performed, which successfully alleviated the patient's symptoms. conclusion: failure of the ceramic-on-polyethylene liner in our patient was due to the use of a non-crosslinked polyethylene liner, a highly active lifestyle, and poor follow-up. arthroplasty surgeons should be aware of this complication, especially in highly active patients with a conventional polyethylene liner and chronic hip pain.
the all-sky monitor on rxte (@xcite) has the capability to locate gamma-ray bursts (grbs) to within a few arcminutes in two dimensions. this can occur if the burst falls within the @xmath0 by @xmath1 parallelogram on the sky which is viewed simultaneously by the two azimuthal shadow cameras of the asm (see @xcite). the solid angle for such detections is actually somewhat larger in some cases because the burst intensity (or that of its immediate afterglow) may remain above the asm threshold as the asm steps to its next celestial position, i.e., to its next "dwell". this will sometimes bring the burst into the field of view of a camera that had not yet detected it. the asm takes data for @xmath2 of the orbital time. it is more probable that the burst will fall within the fov of only one of the three shadow cameras, each with a field of view of @xmath3. in the case of a detection in only one collimator, the error region will be a few arcminutes wide and a few degrees long. the rxte does not carry a dedicated grb detector, and the asm detects only the x-ray portion of grb spectra, 1.5-12 kev. it is therefore difficult to distinguish a rapid x-ray transient from a grb. our original method for securely identifying a new transient was to wait until several sightings confirmed the detection. this would distinguish a genuine transient from background events. in the case of a grb, one may have only one sighting. this requires one to impose a higher threshold for detection. in addition to "position data" that yields intensities and locations of a source based on 90-s integrations, the asm records "time-series data" in 1/8 s time bins for each of the three energy channels of each of the three detectors. if a new source is detected in the position data of a given asm dwell, the presence of rapid variability in the corresponding time-series data improves the likelihood that the source is a grb. the rxte/asm has detected and positioned 14 grbs since feb. each of the 14 has been confirmed as a burst with detections from grb detectors on one or more other satellites. seven of these were detected in two of the asm shadow cameras, thus yielding positions accurate to a few arcminutes in two dimensions. of the 14, eight were located in searches of archived data. one of these (grb 961216) was near the edge of the asm camera field of view and thus led to a large (and uncertain) error region. the other 7 archival detections and positions are reported in smith et al. (1999) together with detections and positions of these events from other satellites. they are grb 960416, 960529, 960727, 961002, 961019, 961029, and 961230. beginning in august 1997, bursts detected by the asm were analyzed and positions were reported in near real time. six events have been so reported at times ranging from 2 to 32 hours after the burst. the error regions reported by us and by others for these six bursts are shown in figure 1. notable among these are (1) grb 970828 (@xcite), which had an easily detected x-ray afterglow but no optical or radio signal (@xcite), (2) grb 980703 (@xcite), which led to a radio/optical transient with a redshift z = 0.9660 (@xcite), and (3) grb 981220 (@xcite), which led to the discovery of a highly variable radio source as a possible counterpart candidate (@xcite, @xcite). the radio source was associated with a faint galaxy at r = 26.4 (@xcite). grb 971214 likewise led to a measured redshift.
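the two-stage screen described above, a new source appearing in the 90-s position data of a dwell backed up by rapid variability in the 1/8-s time-series bins, can be illustrated with a toy variability test against a constant count rate. this is only a schematic of the idea; the real asm pipeline, data formats, and thresholds are not described here in enough detail to reproduce, so the function, the threshold, and the simulated light curve are assumptions.

```python
import numpy as np

def shows_rapid_variability(counts_per_bin, chi2_per_dof_threshold=5.0):
    """Toy screen: flag a dwell's 1/8-s light curve as rapidly variable if a
    constant-rate (Poisson) model is a poor fit. Threshold is illustrative."""
    counts = np.asarray(counts_per_bin, dtype=float)
    mean = counts.mean()
    if mean <= 0:
        return False
    chi2 = np.sum((counts - mean) ** 2 / mean)  # Poisson variance ~ mean
    dof = counts.size - 1
    return chi2 / dof > chi2_per_dof_threshold

# simulated 90-s dwell: steady background with a short burst-like spike
rng = np.random.default_rng(0)
quiet = rng.poisson(3.0, size=720)              # 720 bins of 1/8 s = 90 s
flare = quiet.copy()
flare[300:340] += rng.poisson(25.0, size=40)
print(shows_rapid_variability(quiet), shows_rapid_variability(flare))  # False True
```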
figure 1. asm positions of the six bursts reported in near real time by the asm group, together with refined batse and interplanetary network positions from other groups, as reported or referenced in smith et al. 1999.
the rxte/asm has detected and positioned 14 confirmed grbs (at this writing, jan. 1999), including six whose positions were communicated to the community 2 to 32 hours after the burst. two of these latter bursts led to measurements of optical redshifts, but one, despite an easily detected x-ray afterglow, produced no detectable optical or radio afterglow.
SECTION 1. SHORT TITLE. This Act may be cited as the ``Maximum Economic Growth for America Through the Highway Trust Fund Act'' or the ``MEGA Trust Act''. SEC. 2. ALL ALCOHOL FUELS TAXES TRANSFERRED TO HIGHWAY TRUST FUND. (a) In General.--Section 9503(b)(4) of the Internal Revenue Code of 1986 (relating to certain taxes not transferred to Highway Trust Fund) is amended-- (1) by adding ``or'' at the end of subparagraph (C), (2) by striking the comma at the end of subparagraph (D)(iii) and inserting a period, and (3) by striking subparagraphs (E) and (F). (b) Effective Date.--The amendments made by this section shall apply to taxes received in the Treasury after September 30, 2003. SEC. 3. GENERAL FUND TRANSFER TO HIGHWAY TRUST FUND OF AMOUNT EQUAL TO UNTAXED PORTION OF GASOHOL CONTAINING ETHANOL. (a) In General.--Section 9503(b) of the Internal Revenue Code of 1986 (relating to transfer to Highway Trust Fund of amounts equivalent to certain taxes) is amended-- (1) by redesignating paragraph (5) as paragraph (6), (2) by inserting after paragraph (4) the following new paragraph: ``(5) General revenue transfer equal to untaxed portion of gasohol containing ethanol.--There are hereby appropriated to the Highway Trust Fund with respect to any qualified alcohol mixture described in section 4081(c)(4)(A)(i), amounts equivalent to the excess of the rate of tax which would (but for section 4081(c)) be determined under section 4081(a) over the alcohol mixture rate determined under section 4081(c)(4)(A)(i) and imposed on such mixture, as determined by the Secretary, after consultation with the Secretary of Transportation. Such amounts shall be appropriated and transferred from the general fund in the manner in which taxes determined under section 4081(a) would have been transferred by the Secretary of the Treasury and such amounts shall be treated as taxes received in the Treasury under such section.''. (b) Effective Date.--The amendments made by this section shall apply with respect to the removal or entry of any mixture after September 30, 2003. SEC. 4. INTEREST CREDITED TO HIGHWAY TRUST FUND. (a) In General.--Section 9503 of the Internal Revenue Code of 1986 (relating to Highway Trust Fund) is amended by striking subsection (f). (b) Effective Date.--The amendment made by this section shall apply with respect to obligations held by the Highway Trust Fund after September 30, 2003. SEC. 5. EXTENSION OF HIGHWAY-RELATED TAXES AND TRUST FUND. (a) Extension of Taxes.-- (1) In general.--The following provisions of the Internal Revenue Code of 1986 are each amended by striking ``2005'' each place it appears and inserting ``2011'': (A) Section 4041(a)(1)(C)(iii)(I) (relating to rate of tax on certain buses). (B) Section 4041(a)(2)(B) (relating to rate of tax on special motor fuels). (C) Section 4041(m)(1)(A) (relating to certain alcohol fuels). (D) Section 4051(c) (relating to termination of tax on heavy trucks and trailers). (E) Section 4071(d) (relating to termination of tax on tires). (F) Section 4081(d)(1) (relating to termination of tax on gasoline, diesel fuel, and kerosene). (G) Section 4481(e) (relating to period tax in effect). (H) Section 4482(c)(4) (relating to taxable period). (I) Section 4482(d) (relating to special rule for taxable period in which termination date occurs). 
(2) Floor stocks refunds.--Section 6412(a)(1) of such Code (relating to floor stocks refunds) is amended-- (A) by striking ``2005'' each place it appears and inserting ``2011'', and (B) by striking ``2006'' each place it appears and inserting ``2012''. (b) Extension of Certain Exemptions.--The following provisions of the Internal Revenue Code of 1986 are each amended by striking ``2005'' and inserting ``2011'': (1) Section 4221(a) (relating to certain tax-free sales). (2) Section 4483(g) (relating to termination of exemptions for highway use tax). (c) Extension of Deposits Into, and Certain Transfers From, Trust Fund.-- (1) In general.--Subsection (b), and paragraphs (2) and (3) of subsection (c), of section 9503 of the Internal Revenue Code of 1986 (relating to the Highway Trust Fund) are each amended-- (A) by striking ``2005'' each place it appears and inserting ``2011'', and (B) by striking ``2006'' each place it appears and inserting ``2012''. (2) Motorboat and small-engine fuel tax transfers.-- (A) In general.--Paragraphs (4)(A)(i) and (5)(A) of section 9503(c) of such Code are each amended by striking ``2005'' and inserting ``2011''. (B) Conforming amendments to land and water conservation fund.--Section 201(b) of the Land and Water Conservation Fund Act of 1965 (16 U.S.C. 460l- 11(b)) is amended-- (i) by striking ``2003'' and inserting ``2009'', and (ii) by striking ``2004'' each place it appears and inserting ``2010''. SEC. 6. NATIONAL SURFACE TRANSPORTATION INFRASTRUCTURE FINANCING COMMISSION. (a) Establishment.--There is established a National Surface Transportation Infrastructure Financing Commission (in this section referred to as the ``Commission''). The Commission shall hold its first meeting within 90 days of the appointment of the eighth individual to be named to the Commission. (b) Function.-- (1) In general.--The Commission shall-- (A) make a thorough investigation and study of revenues flowing into the Highway Trust Fund under current law, including the individual components of the overall flow of such revenues; (B) consider whether the amount of such revenues is likely to increase, decline, or remain unchanged, absent changes in the law, particularly by taking into account the impact of possible changes in public vehicular choice, fuel use, or travel alternatives that could be expected to reduce or increase revenues into the Highway Trust Fund; (C) consider alternative approaches to generating revenues for the Highway Trust Fund, and the level of revenues that such alternatives would yield; (D) consider highway and transit needs and whether additional revenues into the Highway Trust Fund, or other Federal revenues dedicated to highway and transit infrastructure, would be required in order to meet such needs; and (E) study such other matters closely related to the subjects described in the preceding subparagraphs as it may deem appropriate. (2) Time frame of investigation and study.--The time frame to be considered by the Commission shall extend through the year 2015, to the extent data is reasonably available. (3) Preparation of report.--Based on such investigation and study, the Commission shall develop a final report, with recommendations and the bases for those recommendations, indicating policies that should be adopted, or not adopted, to achieve various levels of annual revenue for the Highway Trust Fund and to enable the Highway Trust Fund to receive revenues sufficient to meet highway and transit needs. 
Such recommendations shall address, among other matters as the Commission may deem appropriate-- (A) what levels of revenue are required by the Federal Highway Trust Fund in order for it to meet needs to-- (i) maintain, and (ii) improve the condition and performance of the Nation's highway and transit systems; (B) what levels of revenue are required by the Federal Highway Trust Fund in order to ensure that Federal levels of investment in highways and transit do not decline in real terms; and (C) the extent, if any, to which the Highway Trust Fund should be augmented by other mechanisms or funds as a Federal means of financing highway and transit infrastructure investments. (c) Membership.-- (1) Appointment.--The Commission shall be composed of 15 members, appointed as follows: (A) 7 members appointed by the Secretary of Transportation, in consultation with the Secretary of the Treasury. (B) 2 members appointed by the Chairman of the Committee on Ways and Means of the House of Representatives. (C) 2 members appointed by the Ranking Minority Member of the Committee on Ways and Means of the House of Representatives. (D) 2 members appointed by the Chairman of the Committee on Finance of the Senate. (E) 2 members appointed by the Ranking Minority Member of the Committee on Finance of the Senate. (2) Qualifications.--Members appointed pursuant to paragraph (1) shall be appointed from among individuals knowledgeable in the fields of public transportation finance or highway and transit programs, policy, and needs, and may include representatives of interested parties, such as State and local governments or other public transportation authorities or agencies, representatives of the transportation construction industry (including suppliers of technology, machinery and materials), transportation labor (including construction and providers), transportation providers, the financial community, and users of highway and transit systems. (3) Terms.--Members shall be appointed for the life of the Commission. (4) Vacancies.--A vacancy in the Commission shall be filled in the manner in which the original appointment was made. (5) Travel expenses.--Members shall serve without pay but shall receive travel expenses, including per diem in lieu of subsistence, in accordance with sections 5702 and 5703 of title 5, United States Code. (6) Chairman.--The Chairman of the Commission shall be elected by the members. (d) Staff.--The Commission may appoint and fix the pay of such personnel as it considers appropriate. (e) Funding.--Funding for the Commission shall be provided by the Secretary of the Treasury and by the Secretary of Transportation, out of funds available to those agencies for administrative and policy functions. (f) Staff of Federal Agencies.--Upon request of the Commission, the head of any department or agency of the United States may detail any of the personnel of that department or agency to the Commission to assist in carrying out its duties under this section. (g) Obtaining Data.--The Commission may secure directly from any department or agency of the United States, information (other than information required by any law to be kept confidential by such department or agency) necessary for the Commission to carry out its duties under this section. Upon request of the Commission, the head of that department or agency shall furnish such nonconfidential information to the Commission. 
The Commission shall also gather evidence through such means as it may deem appropriate, including through holding hearings and soliciting comments by means of Federal Register notices. (h) Report.--Not later than 2 years after the date of its first meeting, the Commission shall transmit its final report, including recommendations, to the Secretary of Transportation, the Secretary of the Treasury, and the Committee on Ways and Means of the House of Representatives, the Committee on Finance of the Senate, the Committee on Transportation and Infrastructure of the House of Representatives, the Committee on Environment and Public Works of the Senate, and the Committee on Banking, Housing, and Urban Affairs of the Senate. (i) Termination.--The Commission shall terminate on the 180th day following the date of transmittal of the report under subsection (h). All records and papers of the Commission shall thereupon be delivered to the Administrator of General Services for deposit in the National Archives.
Maximum Economic Growth for America Through the Highway Trust Fund Act (or MEGA Trust Act) - Amends the Internal Revenue Code to transfer all excise taxes imposed on alcohol fuels to the Highway Trust Fund (the "Fund"). Authorizes the transfer to the Fund from the general fund of the Treasury of the amount of money equal to the untaxed portion of gasohol containing ethanol, effective with respect to the removal or entry of any mixture after September 30, 2003. Eliminates provision of Code stating that obligations of the Fund shall not be interest bearing, thus allowing the Fund to earn interest, effective with respect to obligations held by the Fund after September 30, 2003. Extends various highway-related taxes, floor stock refunds, certain tax-free sales, exemption from tax for use of highway vehicles by States and local governments and for use of certain transit-type buses, deposits into and certain specified transfers from the Fund, transfers from the Fund for motorboat fuel taxes and small-engine fuel taxes, and refunds of certain specified funds from the land and water conservation fund into the general fund. Establishes a National Surface Transportation Infrastructure Financing Commission (the "Commission"). Permits any department or agency to detail personnel to the Commission, and requires such bodies to furnish nonconfidential materials to the Commission upon request.
direct imaging searches for extrasolar planets have typically placed only upper limits on the frequency of giant planets on orbits between @xmath7 au and @xmath8 au (nielsen et al. 2008; nielsen & close 2010) or @xmath9 au and @xmath10 au (lafrenière et al. 2007), for planets with masses above 4 @xmath11 or 2 @xmath11, respectively. these surveys led to upper limits on the frequency of giant planet companions on such orbits of @xmath0 10% to @xmath0 20%. these upper limits are, however, comparable to estimates of the frequency of detected giant planets on orbits inside @xmath0 3 au of fgkm dwarfs by doppler spectroscopy (cumming et al. 2008). gravitational microlensing detections of ice and gas giant planets orbiting beyond 3 au imply an even higher frequency of planets, about 35% (gould et al. 2010). hence, significant numbers of giant planets on wide orbits might very well exist. recently, persuasive evidence has begun to appear that wide giant planets do indeed exist in significant numbers. the a3v star (2.06 @xmath1) fomalhaut appears to have a planetary companion 119 au away with a mass less than 3 @xmath11, based on the planet's failure to disrupt the cold dust belt in which it is embedded (kalas et al. 2008). the a5v star (1.5 @xmath1) hr 8799 appears to have a system of at least four gas giant planets, orbiting at projected distances of 14, 24, 38, and 68 au, with minimum masses of 7, 7, 7, and 5 @xmath11, respectively, based on their luminosities and an estimated age of the system of 30 myr (marois et al. 2008, 2010). the four hr 8799 exoplanets are also embedded in a dust debris disk (su et al.). a companion with a mass in the range of 10 to 40 @xmath11 has been detected at a projected separation of 29 au from the g9 star (0.97 @xmath1) gj 758 (thalmann et al. 2009), and other good candidates for wide planetary companions have been proposed as well (e.g., oppenheimer et al. 2008; lafrenière, jayawardhana, & van kerkwijk 2008). heinze et al. (2010a, b), however, estimate that no more than 8.1% of the 54 sun-like stars studied in their planet imaging survey could have planets similar to those of hr 8799. doppler surveys have been extended to a range of stellar masses, providing the first estimates of how the planetary census depends on stellar type. a-type stars appear to have a significantly higher frequency of giant planets with orbits inside 3 au compared to solar-type stars (bowler et al.). m dwarfs, on the other hand, appear to have a significantly lower frequency of giant planets inside 2.5 au than fgk dwarfs (johnson et al.). thus there is a clear indication that the frequency of giant planets increases with stellar mass, at least for relatively short period orbits. assuming that protoplanetary disk masses increase with increasing stellar mass, such a correlation is consistent with the core accretion mechanism for giant planet formation, as a higher surface density of solids leads to proportionately larger mass cores that could become gas giant planets (e.g., wetherill 1996; ida & lin 2005). however, microlensing detections (gould et al. 2010) imply a considerably higher frequency (@xmath0 35%) of giant planets around early m dwarf stars (@xmath12) than that found by the doppler surveys, again orbiting at larger distances than those probed by the doppler surveys. evidently even early m dwarfs might also have a significant population of relatively wide gas giant planets.
core accretion is unable , however , to form massive planets beyond @xmath0 35 au , even in the most favorable circumstances ( e.g. , levison & stewart 2001 ; thommes , duncan , & levison 2002 ; chambers 2006 ) , and gravitational scattering outward appears to be unable to lead to stable wide orbits ( dodson - robinson et al . 2009 ; raymond , armitage , & gorelick 2010 ) . disk instability ( boss 1997 ) is then the remaining candidate mechanism for forming wide gas giant planets ( dodson - robinson et al . 2009 ; boley 2009 ) . previous models found that disk instability could readily produce giant planets at distances of 20 au to 30 au ( boss 2003 ) , but not at distances of 100 au to 200 au ( boss 2006a ) . here we present results for intermediate - size disks ( 20 au to 60 au ) for a range of central protostar masses ( 0.1 to 2.0 @xmath1 ) , to learn if the disk instability mechanism for giant planet formation is consistent with the results of the doppler and direct imaging surveys to date . the calculations were performed with a numerical code that solves the three dimensional equations of hydrodynamics and radiative transfer in the diffusion approximation , as well as the poisson equation for the gravitational potential . this same basic code has been used in all of the author s previous studies of disk instability . the code is second - order - accurate in both space and time . a complete description of the entire code , including hydrodynamics and radiative transfer , may be found in boss & myhill ( 1992 ) , with the following exceptions : the central protostar is assumed to move in such a way as to preserve the location of the center of mass of the entire system ( boss 1998 ) , which is accomplished by altering the location of the point mass source of the star s gravitational potential to balance the center of mass of the disk . the pollack et al . ( 1994 ) rosseland mean opacities are used for the dust grains that dominate the opacities in these models . the energy equation of state in use since 1989 is described by boss ( 2007 ) . a flux - limiter for the diffusion approximation radiative transfer was not employed , as it appears to have only a modest effect on midplane temperatures ( boss 2008 ) . recent tests of the radiative transfer scheme are described in boss ( 2009 ) . the equations are solved on a spherical coordinate grid with @xmath13 ( including the central grid cell , which contains the central protostar ) , @xmath14 in @xmath15 , and @xmath16 , with @xmath17 being increased to 512 once fragments begin forming . the radial grid is uniformly spaced with @xmath18 au between 20 and 60 au . the @xmath19 grid is compressed into the midplane to ensure adequate vertical resolution ( @xmath20 at the midplane ) . the @xmath21 grid is uniformly spaced . the number of terms in the spherical harmonic expansion for the gravitational potential of the disk is @xmath22 when @xmath16 , while @xmath23 when @xmath24 . the jeans length criterion ( e.g. , boss et al . 2000 ) and the toomre length criterion ( nelson 2006 ) are both monitored throughout the evolutions to ensure that any clumps that might form are not numerical artifacts . the jeans length criterion consists of requiring that all of the grid spacings in the spherical coordinate grid remain smaller than 1/4 of the jeans length @xmath25 , where @xmath26 is the local sound speed , @xmath27 the gravitational constant , and @xmath28 the density . 
similarly , the toomre length criterion consists of requiring that all of the grid spacings remain smaller than 1/4 of the toomre length @xmath29 , where @xmath30 is the mass surface density . once well - defined fragments form , these criteria may be violated at the maximum densities of the clumps , due to the non - adaptive nature of the spherical coordinate grid , as is expected to be the case for self - gravitating clumps that are trying to contract to higher densities on a fixed grid . however , provided that the jeans and toomre constraints are satisfied at the time that well - defined clumps appear , these clumps are expected to be genuine and not spurious artifacts . the boundary conditions are chosen at both 20 and 60 au to absorb radial velocity perturbations , to simulate the continued existence of the disk inside and outside the active numerical grid . as discussed in detail by boss ( 1998 ) , the use of such non - reflective boundary conditions should err on the side of caution regarding the growth of perturbations , as found by adams , ruden , & shu ( 1989 ) . mass and momentum that enters the innermost shell of cells at 20 au are added to the central protostar , whereas mass or momentum that reaches the outermost shell of cells at 60 au remains on the active hydrodynamical grid . the controversy over whether or not disk instability can lead to protoplanet formation inside about 20 au continues unabated ( see the recent reviews by durisen et al . 2007 and mayer , boss , & nelson 2010 ) . attempts to find a single reason for different numerical outcomes for disk instability models have been largely unsuccessful to date ( e.g. , boss 2007 , 2008 ) , implying that the reason can not be traced to a single code difference , but rather to the totality of differences , such as spatial resolution , gravitational potential accuracy , artificial viscosity , stellar irradiation effects , radiative transfer , numerical heating , equations of state , initial density and temperature profiles , disk surface boundary conditions , and time step size , to name a few . comparison calculations on nearly identical disk models have led boss ( 2007 ) and cai et al . ( 2010 ) to reach different conclusions . while boss ( 2007 ) concluded that fragmentation was possible inside 20 au , cai et al . ( 2010 ) found no evidence for fragmentation in their models , which included numerous improvements over their previous work , such as a better treatment of radiative transfer in optically thin regions of the disk and elimination of the spurious numerical heating in the inner disk regions where boss ( 2007 ) found fragments to form . cai et al . ( 2010 ) suggested that the main difference might be artificially fast cooling in the boss models as a result of the thermal bath boundary conditions used in boss models , which could not be duplicated with the cai et al . ( 2010 ) code because of numerical stability problems . analytical test cases have been advanced as one means for testing radiative transfer in the numerical codes ( e.g. , boley et al . boss ( 2009 ) derived two new analytical radiative transfer solutions and showed that boss code does an excellent job of handling the radiative boundary conditions of a disk immersed in a thermal bath ; the boss code relaxes to the analytical solutions for both a spherically symmetric cloud and an axisymmetric disk . 
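as a concrete illustration of the jeans and toomre resolution monitoring described earlier in this section, the following python sketch checks local grid spacings against one quarter of the two critical lengths. the jeans length uses the standard form lambda_j = (pi c_s^2 / (g rho))^(1/2); for the toomre length i assume the most-unstable wavelength lambda_t ~ 2 c_s^2 / (g sigma), which is one common convention. the exact definition adopted from nelson (2006) is not spelled out here, so that form, the constants, and the example numbers are assumptions.

```python
import numpy as np

G = 6.674e-8  # gravitational constant, cgs

def jeans_length(c_s, rho):
    """Jeans length (cm) for sound speed c_s (cm/s) and mass density rho (g/cm^3)."""
    return np.sqrt(np.pi * c_s**2 / (G * rho))

def toomre_length(c_s, sigma):
    """Assumed most-unstable Toomre wavelength (cm) for surface density sigma (g/cm^2)."""
    return 2.0 * c_s**2 / (G * sigma)

def resolution_ok(dr, r_dphi, dz, c_s, rho, sigma):
    """Check that every local grid spacing stays below 1/4 of both critical lengths,
    the condition used to trust clumps formed on the fixed spherical grid."""
    limit = 0.25 * min(jeans_length(c_s, rho), toomre_length(c_s, sigma))
    return max(dr, r_dphi, dz) < limit

# illustrative numbers only: a cell at ~40 au in a cold (~40 K) molecular-hydrogen disk
au = 1.496e13
c_s = np.sqrt(1.38e-16 * 40.0 / (2.3 * 1.67e-24))  # isothermal sound speed, mu = 2.3
print(resolution_ok(dr=0.16 * au, r_dphi=0.5 * au, dz=0.2 * au,
                    c_s=c_s, rho=1e-13, sigma=10.0))
```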
recently , boss ( 2010 ) published models showing that disk instability is considerably less robust inside 20 au in disks with half the mass of previous models ( e.g. , boss 2007 ) , but still possible . inutsuka , machida , & matsumoto ( 2010 ) found in their magnetohydrodynamic collapse calculations that the massive disks that formed were subject to gravitational instability and fragment formation , even inside 20 au . arguments against inner disk fragmentation are often based on simple cooling time estimates ( e.g. , cai et al . however , meru & bate ( 2010 , 2011 ) have emphasized that many previous numerical calculations with fixed cooling times are likely to have reached incorrect results , in part as a result of insufficient spatial resolution . meru & bate ( 2010 , 2011 ) presented numerous disk instability models that underwent fragmentation inside 20 au for a variety of initial conditions . while the debate over inner disk fragmentation is likely to continue , the present models should be considerably less controversial , given their restriction to fragmentation at distances greater than 20 au . table 1 lists the initial conditions chosen for the five disk models presented here . models 2.0 , 1.5 , 1.0 , 0.5 , and 0.1 depict disks around protostars with masses of @xmath31 , 1.5 , 1.0 , 0.5 , and 0.1 @xmath1 , representing future a3 , a5 , g2 , early m , and late m dwarfs , respectively , depending on their subsequent accretion of mass . the disk envelopes are taken to have temperatures ( @xmath32 ) between 50 k and 30 k , in all cases hotter than the disks themselves , which begin their evolutions uniformly isothermal at the initial temperatures ( @xmath33 ) shown in table 1 . the critical density for differentiating between the disk and the disk envelope is taken to be @xmath34 g @xmath35 for models 2.0 , 1.5 , 1.0 , and 0.5 , and @xmath36 g @xmath35 for model 0.1 , which effectively determines the onset of the envelope thermal bath . variations in these parameters have been tested by boss ( 2007 ) and found to have relatively minor effects . envelope temperatures of 30 to 50 k appear to be reasonable bounds for low - mass protostars during quiescent periods ( chick & cassen 1997 ) . observations of the dm tau outer disk , on scales of 50 to 60 au , imply midplane temperatures of 13 to 20 k ( dartois , dutrey , & guilloteau 2003 ) . hence the envelope and disk initial temperatures chosen in table 1 appear to be reasonable choices for real disks . initially the disks have the density distribution ( boss 1993 ) of an adiabatic , self - gravitating , thick disk in near - keplerian rotation about a stellar mass @xmath37 @xmath38,\ ] ] where @xmath39 and @xmath40 are cylindrical coordinates , @xmath41 is the midplane density , and @xmath42 is the surface density . the adiabatic constant is @xmath43 ( cgs units ) and @xmath44 for the initial model ; thereafter , the disk evolves in a nonisothermal manner governed by the energy equation and radiative transfer ( boss & myhill 1992 ) . the first adiabatic exponent ( @xmath45 ) derived from the energy equation of state for these models varies from 5/3 for temperatures below 100 k to @xmath0 1.4 for higher temperatures ( see figure 1 in boss 2007 ) . the radial variation of the initial midplane density is a power law that ensures near - keplerian rotation throughout the disk @xmath46 where @xmath47 g @xmath35 and @xmath48 au . this disk structure is the continuation to 60 au of the same disk used in the @xmath49 models of , e.g. 
, boss ( 2001 , 2003 , 2005 , 2006a , 2010 ) . while each disk is initially close to centrifugal balance in the radial direction , the use of the boss ( 1993 ) analytical density distribution , with varied initial disk temperatures , means that the disks initially contract vertically until a quasi - equilibrium state is reached ( boss 1998 ) . table 1 lists the resulting disk masses @xmath50 ( from 20 au to 60 au ) , the disk mass to stellar mass ratios @xmath51 , the initial disk temperatures @xmath33 , and the initial minimum and maximum values of the toomre ( 1964 ) @xmath2 gravitational stability criterion , increasing monotonically outward from unstable @xmath52 values at 20 au to marginally stable @xmath53 at 60 au . these values of @xmath2 were chosen to be low enough in the inner disk regions to err on the side of clump formation ; higher initial @xmath2 values are expected to stifle disk fragmentation . these models thus represent a first exploration of parameter space for large - scale disks to establish feasibility . further work should investigate higher @xmath2 initial conditions , as disks are expected to evolve starting from marginally gravitationally unstable ( @xmath54 ) initial conditions ( e.g. , boley 2009 ) . such disks typically also fragment , but only after a period of dynamical evolution toward @xmath55 in limited regions , such as dense rings ( e.g. , boss 2002 ) . large , massive disks have been detected in regions of low - mass star formation , such as the 300-au - scale , @xmath56 disk around the class o protostar serpens firs 1 ( enoch et al . observations of 11 low- and intermediate - mass pre - main - sequence stars imply that their circumstellar disks formed with masses in the range from 0.05 @xmath1 to 0.4 @xmath1 ( isella , carpenter , & sargent 2009 ) . these and other observations support the choice of the disk masses and sizes assumed in the present models . all of the models dynamically evolve in much the same way . beginning from nearly axisymmetric configurations ( with initial @xmath57 density perturbations of amplitude 1% ) , the disks develop increasingly stronger spiral arm structures . eventually these trailing spiral arms become distinct enough , through self - gravitational growth and mutual collisions , that reasonably well - defined clumps appear and maintain their identities for some fraction of an orbital period . however , because the fixed - grid nature of these calculations prevents the clumps from contracting to much higher densities , the clumps are doomed to eventual destruction by a combination of thermal pressure , tidal forces from the protostar , and keplerian shear . however , new clumps continue to form and orbit the protostar , suggesting that clump formation is inevitable . previous work ( boss 2005 ) has shown that as the numerical spatial resolution is increased , the survival of clumps formed by disk instability is enhanced . while an adaptive - mesh - refinement code would be desirable for demonstrating that clumps can contract and survive , the present models , combined with the previous work by boss ( 2005 ) , are sufficient for a first exploration of this region of disk instability parameter space . figures 1 through 10 show the midplane density and temperature contours for all five models at a time of @xmath58 , where @xmath59 is the keplerian orbital period at the distance of the inner grid boundary of 20 au for a protostar with the given mass . for models 0.1 , 0.5 , 1.0 . 
1.5 , and 2.0 , respectively , @xmath59 is equal to 283 yr , 126 yr , 89.4 yr , 73.0 yr , and 63.2 yr . it is clear that clumps have formed by this time in all five models . however , in order to become a giant planet , clumps must survive long enough to contract toward planetary densities . the spherically symmetric protoplanet models of helled & bodenheimer ( 2011 ) suggest contraction time scales ranging from @xmath4 yr to @xmath60 yr , depending on the metallicity , for protoplanets with masses from 3 to 7 @xmath11 , so these clumps must survive for many orbital periods in order to become planets . table 2 lists the estimated properties for those clumps that appear to be self - gravitating at the earlier time of @xmath61 , while table 3 lists the estimates for the fragments at the time of @xmath58 depicted in figures 1 through 10 . at both of these times , clump formation was relatively well - defined , so that the clump masses and other properties could be estimated . clumps typically first become apparent at @xmath62 . candidate clumps are identified by eye from the equatorial density contour plots , and then interrogated with a program that allows the user to select cells adjoining the cell with the maximum density in order to achieve a candidate clump with an approximately spherical appearance , in spite of the obvious banana - shape of many clumps . the fragment masses @xmath63 are then estimated in units of the jupiter mass @xmath11 and compared to the jeans mass @xmath64 ( e.g. , spitzer 1968 ) necessary for gravitational stability at the mean density and temperature of the fragment . fragments with masses less than the jeans mass are not expected to be stable ( boss 1997 ) . the times presented in figures 1 through 10 range from 387 yr to 1843 yr , i.e. , timescales of order 1000 yr . compared to core accretion , where the time scales involved are measured typically in millions of yr ( e.g. , ida & lin 2005 ) and low mass stars are in danger of not being able to form gas giant planets at all ( laughlin , bodenheimer , & adams 2004 ) , the disks around even low mass stars are able to form clumps on time scales short enough to permit gas giant protoplanet formation to occur in the shortest - lived protoplanetary disks . figures 1 through 10 demonstrate that while clump formation occurs for all of the disks , the clumps that form becoming increasingly numerous as the mass of the protostar ( and of the corresponding disk ) increases , even though all disks begin their evolution with essentially the same range of toomre ( 1964 ) @xmath2 values . clearly more massive disks are able to produce more numerous protoplanets , all other things being equal . the temperature contour plots show that significant compressional heating occurs in these initially isothermal disks as a result of spiral arm formation , with the most significant heating occuring near the edges of the arms and clumps , as more disk gas seeks to infall onto the spiral structures ; the local temperature maxima do not necessarily fall at the local density maxima . a similar effect was found by boley & durisen ( 2008 ) . this suggests that clump formation in these relatively cold outer disks occurs in an opportunistic manner , pulling cold disk gas together wherever possible . 
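the clump stability test quoted above compares each fragment's estimated mass to the jeans mass at its mean density and temperature. below is a minimal python sketch using a standard spitzer-style expression; the prefactor, the assumed mean molecular weight of 2.3, and the example density and temperature are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

G, K_B, M_H = 6.674e-8, 1.381e-16, 1.673e-24  # cgs constants
M_JUP = 1.898e30                              # Jupiter mass in g

def jeans_mass(T, rho, mu=2.3):
    """One standard form of the Jeans mass (g):
    (5 k T / (G mu m_H))^(3/2) * (3 / (4 pi rho))^(1/2)."""
    return (5.0 * K_B * T / (G * mu * M_H))**1.5 * np.sqrt(3.0 / (4.0 * np.pi * rho))

def gravitationally_bound(clump_mass_mjup, T, rho):
    """A clump is treated as a protoplanet candidate only if its mass exceeds the Jeans mass."""
    return clump_mass_mjup * M_JUP >= jeans_mass(T, rho)

# e.g. a 3.8 M_Jup clump at an assumed mean T ~ 60 K and mean density ~ 1e-10 g/cm^3
print(jeans_mass(60.0, 1e-10) / M_JUP, gravitationally_bound(3.8, 60.0, 1e-10))
```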
even the lowest mass disk in model 0.1 is optically thick , with a vertical optical depth of @xmath65 , while vertical optical depths of @xmath4 characterize the more massive disks , so some combination of vertical radiation transport , dynamical motions , and/or convection ( e.g. , boss 2004 , boley & durisen 2006 , mayer et al . 2007 ) is necessary for cooling the disk midplane and allowing the clumps to continue their contraction toward planetary densities . tables 2 and 3 also list the estimated orbital semimajor axes @xmath66 and eccentricities @xmath67 for the fragments at the same times as the other fragment properties are estimated . needless to say , these values should be taken solely as initial values , as interactions with the massive disk ( e.g. , boss 2005 ) and the other fragments will result in substantial further orbital evolution . the fragment orbital parameters @xmath68 and @xmath69 were calculated using each fragment s radial distance @xmath70 , average radial velocity @xmath71 , and average azimuthal velocity @xmath72 ( both derived from the total momentum of the clump ) , along with the model s stellar mass @xmath37 , and inserting these values into these equations for a body on a keplerian orbit ( danby 1988 ) : @xmath73 @xmath74 where @xmath27 is the gravitational constant . figures 11 and 12 depict the midplane density and temperature profiles for two of the clumps that form in model 1.0 , as shown in figures 5 and 6 . the fragment in figure 11 is less well - defined than that in figure 12 , yet still has an estimated mass of 3.8 @xmath11 , well above its relevant jeans mass of 1.8 @xmath11 . the fragment in figure 12 has an estimated mass of 2.5 @xmath11 , also well above its relevant jeans mass of 2.2 @xmath11 . these figures show that the higher density fragment in figure 12 has resulted in a higher temperature interior , while the lower density fragment in figure 11 has not yet reached similar internal temperatures , though in both cases the maximum fragment temperatures occur close to their edges . similar plots characterize all of the fragments found in these models . figures 13 , 14 , 15 , and 16 plot the resulting estimates of the initial protoplanet masses , semimajor axes , and eccentricities as a function of protostar or protoplanetary disk mass , for the fragments in tables 2 and 3 where the fragment mass is equal to or greater than the jeans mass . given the uncertain future evolution of the fragments as they attempt to survive and become true protoplanets , these values should be taken only as reasonable estimates based on the present set of models , subject to the inherent assumptions about the initial disk properties . boley et al . ( 2010 ) , for example , found that clumps on highly eccentric orbits could be tidally disrupted at periastron . in addition , surviving fragments are likely to gain substantially more disk gas mass during their orbital evolution ( `` type iv non - migration '' ) in a marginally gravitationally unstable disk ( boss 2005 ) . nevertheless , figure 13 makes it clear that disk instability is capable of leading to gas giant protoplanet formation around protostars with masses in the range from 0.1 to 2.0 @xmath1 , with more protoplanets forming as the mass of the protostar and its disk increases : perhaps only a single protoplanet for a 0.1 @xmath1 protostar , but as many as six for a 2.0 @xmath1 protostar . 
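the orbital elements quoted in tables 2 and 3 are obtained by mapping each fragment's radius, radial velocity, and azimuthal velocity onto an equivalent two-body keplerian orbit about the central protostar. the equations themselves are hidden behind placeholders in this copy, so the python sketch below is a reconstruction from the standard energy and angular-momentum relations (in the spirit of danby 1988) rather than a transcription; the example numbers are illustrative.

```python
import numpy as np

G = 6.674e-8                      # cgs
M_SUN, AU = 1.989e33, 1.496e13

def keplerian_elements(r_au, v_r, v_phi, m_star_msun):
    """Semimajor axis (au) and eccentricity for a body at radius r with radial and
    azimuthal velocities v_r, v_phi (cm/s) orbiting a point mass m_star."""
    r, gm = r_au * AU, G * m_star_msun * M_SUN
    v2 = v_r**2 + v_phi**2
    energy = 0.5 * v2 - gm / r    # specific orbital energy (negative for a bound orbit)
    h = r * v_phi                 # specific angular momentum for a planar orbit
    a = -gm / (2.0 * energy)
    e = np.sqrt(max(0.0, 1.0 + 2.0 * energy * h**2 / gm**2))
    return a / AU, e

# illustrative: a fragment at 40 au around a 1 M_sun protostar, near-circular Keplerian speed
v_k = np.sqrt(G * M_SUN / (40 * AU))
print(keplerian_elements(40.0, 0.05 * v_k, 0.98 * v_k, 1.0))
```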
the masses of the protoplanets appear to increase with the stellar and disk mass ( figures 13 and 14 ) ; the typical initial protoplanet mass increased from @xmath5 to @xmath75 over the range of models 0.1 to 2.0 . given the large disk masses , it is likely that the final protoplanet masses will similarly increase with time , as those protoplanets accrete mass from a massive reservoir of gas and dust . this growth will be limited though by the angular momentum of the disk gas that the protoplanet is trying to accrete ( boley et al . nevertheless , if one wishes to use these models to explain the formation of giant planets with minimum masses similar to those estimated for hr 8799 , i.e. , 5 to 7 @xmath11 , then the outer disk gas must be removed prior to growth of the protoplanets to unacceptably large masses . photoevaporation of the outer disk by fuv and euv fluxes from nearby massive ( ob ) stars is a likely means for achieving this timely disk gas removal , on a time scale of @xmath60 yr ( e.g. , balog et al . 2008 ; mann & williams 2009 , 2010 ) . if the a5v star hr 8799 formed in a region of high mass star formation , as in the case for the majority of stars , the outer disk gas should disappear within @xmath60 yr . if giant protoplanets can not accrete mass from the disk at a rate higher than @xmath76 yr@xmath77 , as argued by nelson & benz ( 2003 ) , then the maximum amount of disk gas that could be accreted in @xmath78 yr or less would be @xmath79 . a mass gain no greater than this appears to be roughly consistent with the range of masses estimated for the four planets in hr 8799 ( marois et al . 2008 , 2010 ) , which could be as high as 13 @xmath11 . figures 15 and 16 show that these protoplanets begin their existence with orbital semimajor axes in the range of @xmath0 30 au to @xmath0 70 au and orbital eccentricities from @xmath0 0 to @xmath0 0.35 . only upper limits of @xmath80 exist for the orbital eccentricities of the hr 8799 system ( figure 16 ) , comfortably above the model estimates . the initial orbital eccentricities appear to vary slightly with stellar mass , with eccentricities dropping as the stellar mass increases , though this hint is largely due to the higher eccentricities found in model 0.1 . the semimajor axes show a similar slight trend of decreasing with stellar mass , though both of these effects may be more of a result of small number statistics than of any robust physical mechanism at work . just as there is a danger that protoplanets formed in a massive disk could grow to become brown dwarfs , unless prevented from doing so by removal of the outer disk gas through photoevaporation , there is a danger that the protoplanets might suffer inward orbital migration due to interactions with the disk gas prior to its removal . however , models of the interactions of protoplanets with marginally gravitationally unstable disks ( boss 2005 ) have shown that the protoplanets experience an orbital evolution that is closer to a random walk ( type iv non - migration ) than to the classic monotonically inward ( or outward ) evolution due to type ii migration , where the planet clears a gap in the disk and then must move in the same direction as the surrounding disk gas . hence , the outer disk protoplanets formed in these models need not be expected to suffer major inward or outward migration prior to photoevaporation of the outer disk , though clearly this possibility is deserving of further study . 
the close - packing in semimajor axis of the fragments in model 2.0 ( figures 9 and 15 ) makes it clear that these protoplanets will interact gravitationally with each other ( as well as with the much more massive disk ) , resulting in mutual close encounters and scattering of protoplanets to orbits with larger and smaller semimajor axes than their initial values ( figure 15 ) . the evolution during this subsequent phase is best described with a fixed - grid code by using the virtual protoplanet technique , where the fragments are replaced by point mass objects that orbit and interact with the disk and each other ( boss 2005 ) . models that continue the present models with the virtual protoplanet technique are now underway and will be presented in a future paper . most disk instability models have focused on forming giant planets similar to those in our solar system , and hence have studied disks with outer radii of 20 au ( e.g. , boss 2001 ; mayer et al . boss ( 2003 ) found that disk instability could lead to the formation of self - gravitating clumps with initial orbital semimajor axes of @xmath7 au in disks with outer radii of 30 au . on the other hand , boss ( 2006a ) found no strong tendency for clump formation in disks extending from 100 au to 200 au . in both cases these models assumed 1 @xmath1 central protostars . model 1.0 in the present work shows that when the disk is assumed to extend from 20 au to 60 au , clumps are again expected to be able to form , with initial semimajor axes of @xmath0 30 au to @xmath0 45 au ( figure 15 ) . taken together , these models imply that for a 1 @xmath1 protostar at least , disk instability might be able to form gaseous protoplanets with initial semimajor axes anywhere inside @xmath0 50 au . when multiple protoplanets form , as is likely to be the case for stars more massive than m dwarfs , subsequent gravitational interactions are likely to result in at least a few protoplanets being kicked out to orbits with semimajor axes greater than 50 au . other authors have also considered the evolution of gravitationally unstable disks with outer radii much greater than 20 au . stamatellos & whitworth ( 2009a , b ) used a smoothed particle hydrodynamics ( sph ) code with radiative transfer in the diffusion approximation to model the evolution of disk instabilities in disks with the same mass as the central protostar : @xmath81 . the disks extended from 40 au to 400 au , with initial toomre @xmath2 values of 0.9 throughout , making them initially highly gravitationally unstable . as expected , these disks rapidly fragmented into multiple clumps , which often grew to brown dwarf masses ( i.e. , greater than 13 @xmath11 ) or higher , with final orbital radii as large as 800 au . their results are in general agreement with the present results , though the major differences in the initial disk assumptions preclude a detailed comparison . boley et al . ( 2010 ) used an sph code to demonstrate multiple fragment formation at distances from @xmath0 50 au to @xmath0 100 au from a 0.3 @xmath1 star in a disk with a radius of 510 au and a mass of 0.19 @xmath1 . given the large disk mass to stellar mass ratio of 0.63 , the formation of several clumps with initial masses of 3.3 @xmath11 and 1.7 @xmath11 is basically consistent with the present results for model 0.5 . 
the result that clump formation depends on protostellar mass , with models 0.1 and 0.5 forming fewer clumps than models 1.0 , 1.5 , and 2.0 , is consistent with the results presented by boss ( 2006b ) , who studied the evolution of disks with outer radii of 20 au around protostars with masses of 0.1 @xmath1 and 0.5 @xmath1 . boss ( 2006b ) found that clumps could form for both protostar masses , but that while several clumps formed for the 0.5 @xmath1 protostar , typically only a single clump formed for the 0.1 @xmath1 protostar , similar to the results in models 0.1 and 0.5 for much larger radii disks . thus , while not zero , the chances for giant planet formation by disk instability appear to decrease with stellar mass in the range of 0.5 @xmath1 to 0.1 @xmath1 . a simple explanation for this outcome may be that given the assumption of disk masses that scale with protostellar masses , the number of jupiter - mass protoplanets that could form by disk instability increases with the number of jupiter - masses of disk gas available for their formation : e.g. , the disk mass for model 2.0 is taken to be 7.5 times that of model 0.1 . nero & bjorkman ( 2009 ) used analytical models to study fragmentation in suitably massive protoplanetary disks , finding that their estimated cooling times were over an order of magnitude shorter than those estimates previously by rafikov ( 2005 ) , a result consistent with that of boss ( 2005 ) . nero & bjorkman ( 2009 ) found that the outermost planet around hr 8799 was likely to have formed by a disk instability , but that the two closer - in planets were not , a conclusion at odds with the results of the present numerical calculations . the different outcomes appear to be a result of different assumptions about the initial disk density and temperature profiles , dust grain opacities , and use of a cooling time argument rather than detailed radiative transfer and hydrodynamics . recently the use of cooling times to depict the thermodynamics of protoplanetary disks has been called in question by the three dimensional hydrodynamical models of meru & bate ( 2010 , 2011 ) , who found that previous calculations relied on an overly simplistic cooling time argument , and that when sufficiently high spatial resolution was employed , even disks previously thought to be stable underwent fragmentation into clumps . finally , it is interesting to note an observational prediction . helled & bodenheimer ( 2010 ) have modeled the capture of solids by gas giant protoplanets formed at distances similar to those of the four planet candidates in hr 8799 ( marois et al . 2008 , 2010 ) . they found that because such massive protoplanets contract rapidly on time scales of only @xmath82 yr , few planetesimals can be captured by gas drag in their outer envelopes , leading to a prediction that the bulk compositions of these four objects should be similar to that of their host stars if they formed by disk instability . hr 8799 has a low metallicity ( [ m / h ] = -0.47 ; gray & kaye 1999 ) , so the hr 8799 objects are expected to be similarly metal - poor , unless the protoplanets are able to form in a dust - rich region of the disk ( boley & durisen 2010 ) . the present set of models has shown that disk instability is capable of the rapid formation of giant planets on relatively wide orbits around protostars with masses in the range from 0.1 @xmath1 to 2.0 @xmath1 . 
while the number of protoplanets formed by disk instability appears to increase with the mass of the star ( and hence of the assumed protoplanetary disk ) , even late m dwarf stars might be able to form gas giants on wide orbits , provided that suitably gravitationally unstable disks exist in orbit around them . these results suggest that direct imaging searches for gas giant planets on wide orbits around low mass stars are likely to continue to bear fruit ; the protoplanet candidates detected to date do not appear to be rare oddballs unexplainable by theoretical models of planetary system formation . hr 8799 s four planets in particular appear to be broadly consistent with formation by disk instability , though clearly further study of the formation of this key planetary system is warranted . i thank the two referees and the scientific editor , eric feigelson , for valuable improvements to both the manuscript itself , and , more importantly , to the choice of initial conditions for the models , sandy keiser for computer systems support , and john chambers for advice on orbit determinations . this research was supported in part by nasa planetary geology and geophysics grant nnx07ap46 g , and is contributed in part to nasa astrobiology institute grant nna09da81a . the calculations were performed on the flash cluster at dtm . 2.0 & 2.0 & 0.21 & 0.11 & 40 . & 50 . & 1.13 & 1.71 + 1.5 & 1.5 & 0.17 & 0.11 & 35 . & 40 . & 1.12 & 1.67 + 1.0 & 1.0 & 0.13 & 0.13 & 30 . & 30 . & 1.13 & 1.68 + 0.5 & 0.5 & 0.083 & 0.17 & 22 . & 30 . & 1.12 & 1.61 + 0.1 & 0.1 & 0.028 & 0.28 & 11 . & 30 . & 1.11 & 1.47 + 2.0 & 0.21 & 4.6 & 33.8 & 0.107 + 2.0 & 0.21 & 2.7 & 36.7 & 0.104 + 2.0 & 0.21 & 3.5 & 37.6 & 0.106 + 2.0 & 0.21 & 3.5 & 33.5 & 0.122 + 2.0 & 0.21 & 4.0 & 36.4 & 0.135 + 2.0 & 0.21 & 3.4 & 35.6 & 0.149 + 1.5 & 0.17 & 5.1 & 35.2 & 0.191 + 1.5 & 0.17 & 4.2 & 31.2 & 0.151 + 1.5 & 0.17 & 1.6 & 32.1 & 0.077 + 1.5 & 0.17 & 1.8 & 31.8 & 0.070 + 1.0 & 0.13 & 2.8 & 35.1 & 0.166 + 1.0 & 0.13 & 1.8 & 32.0 & 0.119 + 1.0 & 0.13 & 2.9 & 40.5 & 0.117 + 0.5 & .083 & 1.4 & 43.2 & 0.174 + 0.1 & .028 & .74 & 48.7 & 0.350 + 2.0 & 0.21 & 1.4 & 43 . & 0.019 + 2.0 & 0.21 & 3.1 & 39 . & 0.051 + 2.0 & 0.21 & 2.9 & 43 . & 0.043 + 1.5 & 0.17 & 4.2 & 39 . & 0.19 + 1.5 & 0.17 & 3.0 & 48 . & 0.22 + 1.5 & 0.17 & 2.4 & 41 . & 0.047 + 1.5 & 0.17 & 4.9 & 67 . & 0.22 + 1.0 & 0.13 & 3.0 & 39 . & 0.12 + 1.0 & 0.13 & 2.0 & 37 . & 0.031 + 1.0 & 0.13 & 3.8 & 44 . & 0.11 + 1.0 & 0.13 & 2.5 & 45 . & 0.14 + 0.5 & .083 & 2.1 & 44 . & 0.092 + 0.5 & .083 & 1.9 & 51 . & 0.24 + 0.1 & .028 & .91 & 45 . & 0.33 + 0.1 & .028 & .80 & 42 . & 0.33 +
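The models track a semimajor axis and an eccentricity for each clump. As a generic illustration of how instantaneous orbital elements can be estimated for a clump treated as a point mass around the central protostar (ignoring the disk's own gravity, so this is not the orbit-determination procedure used in the paper), a standard two-body conversion from position and velocity looks like the following; the example state vector is invented.

```python
# Generic two-body orbit-element estimate for a clump of negligible mass around a
# star of mass m_star, from its instantaneous position and velocity. This ignores
# the (non-negligible) disk mass, so it is only a rough illustration of how
# semimajor axes and eccentricities like those tabulated above could be derived.
import numpy as np

G = 6.674e-8          # cgs
M_SUN = 1.989e33
AU = 1.496e13

def orbital_elements(r_vec, v_vec, m_star):
    """Return (a [au], e) for a test particle orbiting m_star [g]."""
    mu = G * m_star
    r = np.linalg.norm(r_vec)
    v2 = np.dot(v_vec, v_vec)
    energy = 0.5 * v2 - mu / r                        # specific orbital energy
    a = -mu / (2.0 * energy)                          # from the vis-viva relation
    h_vec = np.cross(r_vec, v_vec)                    # specific angular momentum
    e_vec = np.cross(v_vec, h_vec) / mu - r_vec / r   # eccentricity vector
    return a / AU, float(np.linalg.norm(e_vec))

# Example: a clump at 35 au on a mildly non-circular orbit around a 1 M_sun star.
r_vec = np.array([35.0 * AU, 0.0, 0.0])
v_circ = np.sqrt(G * M_SUN / (35.0 * AU))
v_vec = np.array([0.0, 1.05 * v_circ, 0.0])           # 5% faster than circular (made up)
print(orbital_elements(r_vec, v_vec, M_SUN))
```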
doppler surveys have shown that more massive stars have significantly higher frequencies of giant planets inside @xmath0 3 au than lower mass stars , consistent with giant planet formation by core accretion . direct imaging searches have begun to discover significant numbers of giant planet candidates around stars with masses of @xmath0 1 @xmath1 to @xmath0 2 @xmath1 at orbital distances of @xmath0 20 au to @xmath0 120 au . given the inability of core accretion to form giant planets at such large distances , gravitational instabilities of the gas disk leading to clump formation have been suggested as the more likely formation mechanism . here we present five new models of the evolution of disks with inner radii of 20 au and outer radii of 60 au , for central protostars with masses of 0.1 , 0.5 , 1.0 , 1.5 , and 2.0 @xmath1 , in order to assess the likelihood of planet formation on wide orbits around stars with varied masses . the disk masses range from 0.028 @xmath1 to 0.21 @xmath1 , with initial toomre @xmath2 stability values ranging from 1.1 in the inner disks to @xmath3 in the outer disks . these five models show that disk instability is capable of forming clumps on time scales of @xmath4 yr that , if they survive for longer times , could form giant planets initially on orbits with semimajor axes of @xmath0 30 au to @xmath0 70 au and eccentricities of @xmath0 0 to @xmath0 0.35 , with initial masses of @xmath5 to @xmath6 , around solar - type stars , with more protoplanets forming as the mass of the protostar ( and protoplanetary disk ) is increased . in particular , disk instability appears to be a likely formation mechanism for the hr 8799 gas giant planetary system .
If the order comes, the B-52s will return to a ready-to-fly posture not seen since the Cold War. BARKSDALE AIR FORCE BASE, La. — The U.S. Air Force is preparing to put nuclear-armed bombers back on 24-hour ready alert, a status not seen since the Cold War ended in 1991. That means the long-dormant concrete pads at the ends of this base’s 11,000-foot runway — dubbed the “Christmas tree” for their angular markings — could once again find several B-52s parked on them, laden with nuclear weapons and set to take off at a moment’s notice. “This is yet one more step in ensuring that we’re prepared,” Gen. David Goldfein, Air Force chief of staff, said in an interview during his six-day tour of Barksdale and other U.S. Air Force bases that support the nuclear mission. “I look at it more as not planning for any specific event, but more for the reality of the global situation we find ourselves in and how we ensure we’re prepared going forward.” Goldfein and other senior defense officials stressed that the alert order had not been given, but that preparations were under way in anticipation that it might come. That decision would be made by Gen. John Hyten, the commander of U.S. Strategic Command, or Gen. Lori Robinson, the head of U.S. Northern Command. STRATCOM is in charge of the military’s nuclear forces and NORTHCOM is in charge of defending North America. Putting the B-52s back on alert is just one of many decisions facing the Air Force as the U.S. military responds to a changing geopolitical environment that includes North Korea’s rapidly advancing nuclear arsenal, President Trump’s confrontational approach to Pyongyang, and Russia’s increasingly potent and active armed forces. Goldfein, who is the Air Force’s top officer and a member of the Joint Chiefs of Staff, is asking his force to think about new ways that nuclear weapons could be used for deterrence, or even combat. “The world is a dangerous place and we’ve got folks that are talking openly about use of nuclear weapons,” he said. “It’s no longer a bipolar world where it’s just us and the Soviet Union. We’ve got other players out there who have nuclear capability. It’s never been more important to make sure that we get this mission right.” During his trip across the country last week, Goldfein encouraged airmen to think beyond Cold War uses for ICBMs, bombers and nuclear cruise missiles. “I’ve challenged…Air Force Global Strike Command to help lead the dialog, help with this discussion about ‘What does conventional conflict look like with a nuclear element?’ and ‘Do we respond as a global force if that were to occur?’ and ‘What are the options?’” he said. “How do we think about it — how do we think about deterrence in that environment?” Asked if placing B-52s back on alert — as they were for decades — would help with deterrence, Goldfein said it’s hard to say. “Really it depends on who, what kind of behavior are we talking about, and whether they’re paying attention to our readiness status,” he said. Already, various improvements have been made to prepare Barksdale — home to the 2d Bomb Wing and Air Force Global Strike Command, which oversees the service’s nuclear forces — to return B-52s to an alert posture. Near the alert pads, an old concrete building — where B-52 crews during the Cold War would sleep, ready to run to their aircraft and take off at a moment’s notice — is being renovated. 
Inside, beds are being installed for more than 100 crew members, more than enough room for the crews that would man bombers positioned on the nine alert pads outside. There’s a recreation room, with a pool table, TVs and a shuffleboard table. Large paintings of the patches for each squadron at Barksdale adorn the walls of a large stairway. One painting — a symbol of the Cold War — depicts a silhouette of a B-52 with the words “Peace The Old Fashioned Way,” written underneath. At the bottom of the stairwell, there is a Strategic Air Command logo, yet another reminder of the Cold War days when American B-52s sat at the ready on the runway outside. Those long-empty B-52 parking spaces will soon get visits by two nuclear command planes, the E-4B Nightwatch and E-6B Mercury, both which will occasionally sit alert there. During a nuclear war, the planes would become the flying command posts of the defense secretary and STRATCOM commander, respectively. If a strike order is given by the president, the planes would be used to transmit launch codes to bombers, ICBMs and submarines. At least one of the four nuclear-hardened E-4Bs — formally called the National Airborne Operations Center, but commonly known as the Doomsday Plane — is always on 24-hour alert. Barksdale and other bases with nuclear bombers are preparing to build storage facilities for a new nuclear cruise missile that is under development. During his trip, Goldfein received updates on the preliminary work for a proposed replacement for the 400-plus Minuteman III intercontinental ballistic missiles, and the new long-range cruise missile. “Our job is options,” Goldfein said. “We provide best military advice and options for the commander in chief and the secretary of defense. Should the STRATCOM commander require or the NORTHCOM commander require us to [be on] a higher state of readiness to defend the homeland, then we have to have a place to put those forces.” ||||| The U.S. is ruminating about putting nuclear bombers back on a 24-hour alert. Defense One reports the move is being considered by top Pentagon officials over national security concerns. “This is yet one more step in ensuring that we’re prepared,” Gen. David Goldfein, Air Force chief of staff, said in an interview during his six-day tour of Barksdale and other U.S. Air Force bases that support the nuclear mission. “I look at it more as not planning for any specific event, but more for the reality of the global situation we find ourselves in and how we ensure we’re prepared going forward.” Goldfein and other senior defense officials stressed that the alert order had not been given, but that preparations were under way in anticipation that it might come. That decision would be made by Gen. John Hyten, the commander of U.S. Strategic Command, or Gen. Lori Robinson, the head of U.S. Northern Command. STRATCOM is in charge of the military’s nuclear forces and NORTHCOM is in charge of defending North America. It’s important to point out the Pentagon is only considering the option. It doesn’t mean this will happen, and it’s completely possible this is something they consider on a regular basis. After all, Great Britain reportedly has a plan in place to attack North Korea, something other countries probably have as well. That’s part of being in the military, making sure there’s a plan for almost everything. It just depends on whether something leaks out or not. But it’s pretty interesting the Air Force is going on the record and openly talking about the option. 
It’s not “unnamed sources,” but the Air Force chief of staff saying, “Hey…we’re thinking about it.” Goldfein did admit the strategy may or may not encourage so-called rogue regimes to chill out and back down, noting it depended on “who, what kind of behavior are we talking about, and whether they’re paying attention to our readiness status.” The easy guess is North Korea, but other nations could include Iran, Russia, and China. The big question for me is why? It makes sense to be prepared, but there are ICBMs and cruise missiles which are available to military forces. Perhaps the Pentagon is considering using B-52s to do some sort of attempted quiet strike against an enemy, like North Korea, and believes the bomber is a better option than the missiles. The military has yet to put the B-52 out to pasture, so this could just be going back to the well because it works. It also could be the Pentagon is confident a B-52 wouldn’t be detected by North Korea’s lone satellite and that China or Russia wouldn’t let North Korea know what was going on. What doesn’t make sense is why Goldfein would come right out and say, “Yeah, this is an option.” Is he trying to send a message to China and Russia, or just a message to the entire world that all options are being considered? It also goes against comments President Donald Trump made during the 2016 campaign about the bombers and their usefulness (the entire “second-generation B-52” statement). It could be a political move designed to send a message to North Korea, make Kim Jong-un realize the U.S. is taking his threats seriously, and hopefully get him to stop raging against America. Or it’s Trump just trying to show how “big” his military is. Of course, it could also completely backfire and cause Kim to issue even more threats against the U.S., and attempt to draw the nation, and possibly the world, into war. It’s a curious strategy, but one which is only being considered. At the moment.
– The security site Defense One reports that the Air Force is considering putting nuclear-armed bomber planes back on 24-hour notice, something that hasn't been in effect since the Cold War. The site emphasizes that no such order has yet been given, but it quotes Air Force chief of staff Gen. David Goldfein as saying the move is under consideration. "I look at it more as not planning for any specific event, but more for the reality of the global situation we find ourselves in and how we ensure we're prepared going forward," he says. Specifically, the order would result in B-52s armed with nuclear weapons being parked at Barksdale Air Force Base in Louisiana, with crews in nearby hangars ready to go at a moment's notice. A post at Hot Air adds a bit of caution about reading too much into the report. Just because it's being considered "doesn’t mean this will happen, and it's completely possible this is something they consider on a regular basis," writes Taylor Millard. After all, planning for all contingencies is what all militaries do, Millard notes. Still, at least one tangible sign of the potential move is in motion: The building where B-52 crews slept during the Cold War is being renovated, notes Defense One.
SECTION 1. SHORT TITLE. This Act may be cited as ``Social Security Earnings Test Repeal Act of 2003''. SEC. 2. REPEAL OF PROVISIONS RELATING TO DEDUCTIONS ON ACCOUNT OF WORK. (a) In General.--Subsections (b), (c)(1), (d), (f), (h), (j), and (k) of section 203 of the Social Security Act (42 U.S.C. 403) are repealed. (b) Conforming Amendments.--Section 203 of such Act (as amended by subsection (a)) is further amended-- (1) in subsection (c), by redesignating such subsection as subsection (b), and-- (A) by striking ``Noncovered Work Outside the United States or'' in the heading; (B) by redesignating paragraphs (2), (3), and (4) as paragraphs (1), (2), and (3), respectively; (C) by striking ``For purposes of paragraphs (2), (3), and (4)'' and inserting ``For purposes of paragraphs (1), (2), and (3)''; and (D) by striking the last sentence; (2) in subsection (e), by redesignating such subsection as subsection (c), and by striking ``subsections (c) and (d)'' and inserting ``subsection (b)''; (3) in subsection (g), by redesignating such subsection as subsection (d), and by striking ``subsection (c)'' each place it appears and inserting ``subsection (b)''; and (4) in subsection (l), by redesignating such subsection as subsection (e), and by striking ``subsection (g) or (h)(1)(A)'' and inserting ``subsection (d)''. SEC. 3. ADDITIONAL CONFORMING AMENDMENTS. (a) Provisions Relating to Benefits Terminated Upon Deportation.-- Section 202(n)(1) of the Social Security Act (42 U.S.C. 402(n)(1)) is amended by striking ``Section 203 (b), (c), and (d)'' and inserting ``Section 203(b)''. (b) Provisions Relating to Exemptions From Reductions Based on Early Retirement.-- (1) Section 202(q)(5)(B) of such Act (42 U.S.C. 402(q)(5)(B)) is amended by striking ``section 203(c)(2)'' and inserting ``section 203(b)(1)''. (2) Section 202(q)(7)(A) of such Act (42 U.S.C. 402(q)(7)(A)) is amended by striking ``deductions under section 203(b), 203(c)(1), 203(d)(1), or 222(b)'' and inserting ``deductions on account of work under section 203 or deductions under section 222(b)''. (c) Provisions Relating to Exemptions From Reductions Based on Disregard of Certain Entitlements to Child's Insurance Benefits.-- (1) Section 202(s)(1) of such Act (42 U.S.C. 402(s)(1)) is amended by striking ``paragraphs (2), (3), and (4) of section 203(c)'' and inserting ``paragraphs (1), (2), and (3) of section 203(b)''. (2) Section 202(s)(3) of such Act (42 U.S.C. 402(s)(3)) is amended by striking ``The last sentence of subsection (c) of section 203, subsection (f)(1)(C) of section 203, and subsections'' and inserting ``Subsections''. (d) Provisions Relating to Suspension of Aliens' Benefits.--Section 202(t)(7) of such Act (42 U.S.C. 402(t)(7)) is amended by striking ``Subsections (b), (c), and (d)'' and inserting ``Subsection (b)''. (e) Provisions Relating to Reductions in Benefits Based on Maximum Benefits.--Section 203(a)(3)(B)(iii) of such Act (42 U.S.C. 403(a)(3)(B)(iii)) is amended by striking ``and subsections (b), (c), and (d)'' and inserting ``and subsection (b)''. (f) Provisions Relating to Penalties for Misrepresentations Concerning Earnings for Periods Subject to Deductions on Account of Work.--Section 208(a)(1)(C) of such Act (42 U.S.C. 408(a)(1)(C)) is amended by striking ``under section 203(f) of this title for purposes of deductions from benefits'' and inserting ``under section 203 for purposes of deductions from benefits on account of work''. 
(g) Provisions Taking Into Account Earnings in Determining Benefit Computation Years.--Clause (I) in the next to last sentence of section 215(b)(2)(A) of such Act (42 U.S.C. 415(b)(2)(A)) is amended by striking ``no earnings as described in section 203(f)(5) in such year'' and inserting ``no wages, and no net earnings from self-employment (in excess of net loss from self-employment), in such year''. (h) Provisions Relating to Rounding of Benefits.--Section 215(g) of such Act (42 U.S.C. 415(g)) is amended by striking ``and any deduction under section 203(b)''. (i) Provisions Relating to Earnings Taken Into Account in Determining Substantial Gainful Activity of Blind Individuals.--The second sentence of section 223(d)(4)(A) of such Act (42 U.S.C. 423(d)(4)(A)) is amended by striking ``if section 102 of the Senior Citizens' Right to Work Act of 1996 had not been enacted'' and inserting the following: ``if the amendments to section 203 made by section 102 of the Senior Citizens' Right to Work Act of 1996 and by the Social Security Earnings Test Repeal Act of 2003 had not been enacted''. (j) Provisions Defining Income for Purposes of SSI.--Section 1612(a) of such Act (42 U.S.C. 1382a(a)) is amended-- (1) by striking ``as determined under section 203(f)(5)(C)'' in paragraph (1)(A) and inserting ``as defined in the last two sentences of this subsection''; and (2) by adding at the end (after and below paragraph (2)(G)) the following new sentences: ``For purposes of paragraph (1)(A), the term `wages' means wages as defined in section 209, but computed without regard to the limitations as to amounts of remuneration specified in paragraphs (1), (6)(B), (6)(C), (7)(B), and (8) of section 209(a). In making the computation under the preceding sentence, (A) services which do not constitute employment as defined in section 210, performed within the United States by an individual as an employee or performed outside the United States in the active military or naval services of the United States, shall be deemed to be employment as so defined if the remuneration for such services is not includible in computing the individual's net earnings or net loss from self-employment for purposes of title II, and (B) the term `wages' shall be deemed not to include (i) the amount of any payment made to, or on behalf of, an employee or any of his or her dependents (including any amount paid by an employer for insurance or annuities, or into a fund, to provide for any such payment) on account of retirement, or (ii) any payment or series of payments by an employer to an employee or any of his or her dependents upon or after the termination of the employee's employment relationship because of retirement after attaining an age specified in a plan referred to in section 209(a)(11)(B) or in a pension plan of the employer.''. (k) Repeal of Deductions on Account of Work Under the Railroad Retirement Program.-- (1) In general.--Section 2 of the Railroad Retirement Act of 1974 (45 U.S.C. 231a) is amended-- (A) by striking subsections (f); and (B) by striking subsection (g)(2) and by redesignating subsection (g)(1) as subsection (g). (2) Conforming amendments.-- (A) Section 3(f)(1) of such Act (45 U.S.C. 231b(f)(1)) is amended in the first sentence by striking ``before any reductions under the provisions of section 2(f) of this Act,''. (B) Section 4(g)(2) of such Act (45 U.S.C. 
231c(g)(2)) is amended-- (i) in clause (i), by striking ``shall, before any deductions under section 2(g) of this Act,'' and inserting ``shall''; and (ii) in clause (ii), by striking ``any deductions under section 2(g) of this Act and before''. SEC. 4. EFFECTIVE DATE. The amendments and repeals made by this Act shall apply with respect to taxable years ending on or after the date of the enactment of this Act.
Social Security Earnings Test Repeal Act of 2003 - Amends title II (Old Age, Survivors and Disability Insurance) (OASDI) of the Social Security Act to remove the limitation on the amount of outside income which a beneficiary may earn (earnings test) without incurring a reduction in benefits.
it is widely recognized that extreme climatic conditions constitute a major public health hazard . epidemiological surveys and reports on heat wave have shown that the elderly population is particularly at a high risk of developing complications and heat - related mortality.[15 ] heat - related illness may range from trivial heat injury to life - threatening emergencies . as there is gradual global warming , the threat of intermittent heat wave on human life is increasing day by day . at the same time , we have to accept the fact that despite preventive measures by the national and international organizations to stop progression of unfavorable climatic change , the hot climatic trend may be delayed but can not be stopped . it is expected that these heat waves may increase in frequency , severity and duration to a discernible extent . this hot climate threat is going to be a concern of all health care specialties . while auditing our hospital mortality record of the last couple of years in the surgical population , we found high peaks of mortality in july and august [ figure 1 ] , which correspond to the summer months , with very high humidity levels in this region , which inspired us to look into the heat - related surgical morbidity prospectively . the record of hospital surgical mortality rate . note the peaking in the months of august although a lot of epidemiological studies have been carried out in hot climatic conditions for various aspects , very little has been studied in the surgical patients . secondly , most of these surveys are either from western regions or from well - developed countries with good living conditions . thirdly , temperature variations , other meterological factors and patient characteristics are different in india . it is well known that cardiac output increases to compensate for increased blood flow to skin . as most of the earlier work is done in the developed and western population , their results can not be extrapolated to developing countries like india , where many are exposed to constant hot and humid weather because of poor living conditions and facilities . as our hospital record witnessed peaks of surgical deaths in the summer season , we planned this prospective cohort study to determine the impact of hot climatic conditions in elderly surgical patients over 1 year . after approval from the institutional ethics committee and written informed consent from patients , an observational prospective cohort study was undertaken to study the impact of hot climate on elderly ( age > 60 years ) surgical patients over a period of 1 year . patients were included in the study irrespective of their characteristic , american society of anesthesiologists ( asa ) physical status , cardiac status and nature of surgery once the ambient temperature crossed 20c . we considered peak ambient temperature at the time of admission as our reference point . to minimize the bias due to medical problems and adaptation of body in the hospital air conditioned environment , the exclusion criterion were designed as follows : patients suffering from hyperthyroidism , hypothyroidism , malignant hyperthermia and taking psychotropic drugs , beta blockers and drugs interfering with temperature balance.patients living most of the time indoors in air conditioned houses or who stayed in an air conditioned hospital for more than 48 h prior to surgery . 
ninety - eight elderly patients requiring general anesthesia for surgery were enrolled in this study . subjects were grouped on the basis of peak ambient temperature with a cut - off value of 30c . we took 20 - 30c as a control as , in this range , the human body is comfortable . peak ambient temperature , relative humidity and evaporation index were noted daily from the meteorological department , punjab agricultural university , ludhiana , punjab , india . heat index was derived from the above - noted values with the formula given below . humidex or heat index are the commonly used indices to study the effect of temperature and relative humidity . heat index ( hi ) or apparent temperature ( at ) = -42.379 + 2.04901523 tf + 10.14333127 rh - 0.22475541 tf rh - 6.83783 x 10^-3 tf^2 - 5.481717 x 10^-2 rh^2 + 1.22874 x 10^-3 tf^2 rh + 8.5282 x 10^-4 tf rh^2 - 1.99 x 10^-6 tf^2 rh^2 , where tf = temperature in fahrenheit and rh = relative humidity ( % ) . all patients included in the study were assessed pre - operatively . the pre - operative evaluation included complete history , general physical examination , clinical signs of heat dysfunction , patient 's urine output and daily approximate fluid intake . patient 's risk stratification was carried out on the basis of asa physical status , detsky scoring and shoemaker risk criteria to test the impact of bias due to different patient clinical profiles . detsky scoring and shoemaker risk criteria were used to assess the cardiovascular and surgical risk , respectively . nature , type of surgery ( emergency / elective ) , pre - operative vitals and reports of routine investigations were also recorded . intra- and post - operative data related to surgical risk factors , duration of anesthesia , vitals record and complications were noted . vitals were recorded every half an hour intra - operatively and then every hour for the first 2 h , every 2-hourly for the next 24 h and then daily during the post - operative hospital stay . perioperative complications such as hypotension , tachycardia / bradycardia , dysarrhythmias , myocardial infarction , respiratory distress , oliguria , anuria , acute renal failure , liver dysfunction or multiple organ dysfunction syndrome , etc . , were recorded . hypotension was defined when the systolic blood pressure was < 90 mmhg , tachycardia when heart rate was > 100/min , bradycardia when heart rate was < 60/min , and myocardial ischemia as evident from chest pain and st depression on ekg / monitor . respiratory distress was defined as a respiratory rate > 35/min or < 6/min , use of accessory muscles , pao2 < 60 mmhg on room air and paco2 > 40 mmhg or < 35 mmhg in abg ( if performed ) . oliguria was defined as urine output < 400 ml/24 h and anuria as urine output < 100 ml/24 h. the pacu stay , icu stay and hospital stay were noted . all patients showing signs of post - operative ischemia or unexplained hypotension were subjected to troponin - t investigation to rule out mi . post - operatively , patients were also observed for signs of septicemia ( as evident from fever , increased wbc count or culture report ) .
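For concreteness, the regression quoted above can be written as a short function. This is a generic implementation of that published heat-index formula (temperature in degrees Fahrenheit, relative humidity in percent), not the study's analysis code, and it omits the adjustment terms that are usually applied near the edges of the formula's validity range.

```python
# A direct implementation of the heat index (apparent temperature) regression
# quoted above. Inputs: temperature in degrees Fahrenheit, relative humidity in
# percent. Illustrative helper only, not the study's analysis code.
def heat_index_f(tf: float, rh: float) -> float:
    return (-42.379
            + 2.04901523 * tf
            + 10.14333127 * rh
            - 0.22475541 * tf * rh
            - 6.83783e-3 * tf * tf
            - 5.481717e-2 * rh * rh
            + 1.22874e-3 * tf * tf * rh
            + 8.5282e-4 * tf * rh * rh
            - 1.99e-6 * tf * tf * rh * rh)

def c_to_f(tc: float) -> float:
    return tc * 9.0 / 5.0 + 32.0

# Example: a 38 C day at 80% relative humidity, roughly the group ii conditions
# described later, gives a heat index of about 160 F.
print(round(heat_index_f(c_to_f(38.0), 80.0), 1))
```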
outcome of patients was evaluated and compared in the form of incidence of complications and hospital stay . all these observations were noted in the proforma and analyzed using student 's t-test and z - test for statistical significance . stepwise multivariate regression analysis was used to determine the impact of risk variables on morbidity . it was difficult to plan a prospective study in view of the mixed patient profile because of different socioeconomic status , type of surgery and other surgical risk factors . being a pioneer study , a convenient sample of all patients coming to the hospital for surgery under general anesthesia and who fulfill the criteria for enrollment was taken over a period of 1 year . patents were grouped into two groups with a cut - off value of 30c keeping in mind comfort ( control group ) and non - comfort heat zone ( study group ) . relative humidity was higher in group i as compared with group ii , with a low evaporation index in group i as compared with group ii , counteracting some effects of higher temperature . the heat index was high in both the groups , but the difference was statistically significant . age , physical status and risk profiles and type of surgeries of patients were comparable [ tables 1 and 2 , figure 2 ] . average age among groups i and ii were 66.81 5.74 and 68.08 6.34 , respectively . group i had more asa i patients ( 37% ) as compared with group ii ( 9.8% ) and group ii had more asa ii patients ( 45% ) than group i ( 26% ) . however , patient distribution among groups as per the risk stratification for asa iii , asa iv , detsky score and shoemaker 's rrisk criteria were comparable . all heat variables : ambient temperature , evaporation index and heat index , had a high statistically significant difference between the two groups [ table 3 and figure 3 ] . temperature and heat index had a highly significant difference between the two groups ( p = 0.001 ) . relative humidity was high among both groups i and ii ( 96.48 2.76 and 82.03 14.40 , p = 0.01 ) . evaporation index was higher in group ii as compared with group i ( p = 0.001 ) , counteracting the effect of high temperature and relative humidity . as relative humidity remained in a higher range in both groups , heat index also remained high in both the groups ; group i ( 103.29 18.58 ) and group ii ( 166.79 18.25 ) . there were more complications in group ii as compared with group i [ table 4 ] . further , there was more complication in the high - risk patients in group ii . complications had a similar positive corelationship with all heat variables except evaporation index , which had a t - value of 0.031 as compared with temperature ( 1.133 ) , humidity ( 1.041 ) and heat index ( 1.021 ) [ table 5 ] . on multivariate analysis , the impact of heat variables was less than that of asa physical status , detsky score and shoemaker 's risk criteria [ tables 6 a , figures 4 and 5 ] . patients had a prolonged hospital stay of 13.21 6.44 in group ii as compared with group i ( 9.81 3.54 days ) [ table 7 and figure 6 ] . 
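As an illustration of the kind of univariate comparison described above, the sketch below recomputes a two-sample (Welch) t-test for hospital stay directly from the reported means and standard deviations. The group sizes of 71 and 27 are not stated explicitly; they are back-calculated from the total of 98 patients and the complication percentages reported in the results, so the exact p-value is only indicative.

```python
# Welch's t-test recomputed from the summary statistics quoted above
# (hospital stay 13.21 +/- 6.44 days vs 9.81 +/- 3.54 days). The group sizes of
# 71 and 27 are inferred assumptions, not values taken from a table.
from math import sqrt
from scipy import stats

def welch_t_from_summary(m1, s1, n1, m2, s2, n2):
    se2 = s1**2 / n1 + s2**2 / n2
    t = (m1 - m2) / sqrt(se2)
    # Welch-Satterthwaite degrees of freedom
    df = se2**2 / ((s1**2 / n1)**2 / (n1 - 1) + (s2**2 / n2)**2 / (n2 - 1))
    p = 2 * stats.t.sf(abs(t), df)
    return t, df, p

t, df, p = welch_t_from_summary(13.21, 6.44, 71, 9.81, 3.54, 27)
print(f"t = {t:.2f}, df = {df:.1f}, two-sided p = {p:.4f}")
```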
demographics of the study groups comparison of the patient 's physical status and surgical profile between the two groups comparative distribution of patients in both groups as per asa physical status comparisons of different heat variables comparison of the heat variables between the two groups comparison of the complications on the basis of peak outdoor temperature between the groups comparison of the heat variables and complications on multivariate analysis complications in relation to patient status factors affecting the morbidities among the patients multivariate analysis complications in the two groups distribution of complications among different risk variables in the two groups comparison of hospital stay between the two groups hospital stay in the two groups global warming is emerging as a threat to the survival of human beings in the coming future . there are a number of epidemiological surveys to address the heat wave and heat wave - related morbidity and mortality.[15923 ] in a study by nakai s et al . , the authors observed that heat - related deaths were more prone to occur during the day , with peak daily temperatures of > 38c , and the incidence of these deaths showed an exponential dependence on the number of hot days . furthermore , most deaths were reported either in children ( < 4 years ) or in the elderly ( > 70 years ) . in the last decade , numerous epidemiological studies related to heat wave appeared in the literature from italy , usa,[21319 ] japan , france , belgium and many other countries . there were similar deaths tolls in reports , raising an alarming concern for global warming and related heath issues . among the hot climatic elements or parameters that affect the human body , acting together , these elements influence the body 's comfort and well being.[2022 ] in addition to environmental heat , body heat is gained from cellular metabolism and the mechanical work of the skeletal muscle . evaporation is a primary way of heat loss when the environmental temperature is higher than that of the body . during humid climate , continuous active evaporation without adequate water intake poses a risk of dehydration and heat - related illness . active sympathetic cutaneous vasodilatation increases the blood flow in the skin up to 8 l / min . an elevated blood temperature also causes tachycardia and tachypnoea , augmenting heat loss through lungs and skin . if the body is given sufficient time , it will gradually become adapted to living and working in a hot environment this cohort was planned with the hypothesis that if any surgery is carried out in the period prior to adaptation , it may have more complications . we found that patients had statistically significant complications when the peak ambient temperature was higher ( group ii ; 38.2 2.96c ) as compared with the comfortable temperature zone ( group i ; 28.41 1.63c ) . 22.54% patients in group ii had complications while in group i , only 7.41% patients had complications ( p < 0.10 ) . in group ii , two patients had three acute respiratory distress syndrome , two acute renal failure , one myocardial infarction and two multiple organ failure syndrome . hospital stay was also prolonged statistically when the peak outdoor temperature was higher ( p = 0.01 ) . high - risk patients with poor cardiorespiratory reserve are at a greater risk of complications . inglis et al . also found seasonal variations in cardiac failure patients in the australian population in the summer season . 
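The complication rates just quoted (22.54% in group ii vs 7.41% in group i) can be compared with the two-proportion z-test mentioned in the methods. The counts below (roughly 16/71 vs 2/27) are inferred from those percentages and the total of 98 patients, not taken from a table, so the result is only illustrative; under these assumptions the two-sided p-value comes out just under 0.10, consistent with the p < 0.10 reported above.

```python
# Two-proportion z-test of the complication rates reported above. The counts are
# back-calculated from the stated percentages and the total of 98 patients; they
# are inferred for illustration, not taken from the paper's tables.
from math import sqrt, erf

def two_proportion_z(x1, n1, x2, n2):
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

z, p = two_proportion_z(2, 27, 16, 71)
print(f"z = {z:.2f}, two-sided p = {p:.3f}")
```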
patients in group ii have prolonged hospital stay , but it is difficult to comment whether prolonged stay resulted in more complications or complications resulted in prolonged stay . infections were not monitored strictly . as chronic dehydration is common in hot weather , particularly in sick patients , it may increase the risk of infection due to gut ischemia . however , in our study , the nature and type of surgery were similar between the two groups . we found that patients with poor asa physical status had more complications in group ii . we also found that patients with a higher detsky score or poor shoemaker 's risk had more complications when the ambient temperature was high . on stepwise multivariate regression analysis , it was found that poor asa status , detsky score and shoemaker 's risk had a greater impact on morbidity as compared with hot weather . probably , it was the poor cardiorespiratory reserve of the patients that failed to cope with the added adverse weather stress . there was no statistical difference in the tools used . however , their t values were of order as follows : temperature = 1.33 , relative humidity = 1.041 , evaporation index = 0.031 and heat index = 1.021 . there is only one prospective study carried out on intensive care patients evaluating the effect of ambient temperature on core body temperature , and the authors found hyperthermia in these patients . most of the data and literature available are either from western countries or from well - developed south eastern countries . the impact of hot weather on the indian population may be different from that of the population of the western and well - developed countries . they probably may be more adapted and genetically different also . secondly , as change in temperature in this part of the country is often gradual over many days , these subjects might have adapted to some extent also . during the summer months these spells are often seen to move from one region to another . in places where the normal temperature itself is high and considering environmental and subject differences , it is important to review our local environmental values and impacts . but , there are very few studies from india or pakistan . these authors found climate and its variations different from the western and other worlds as well . considering these aspects , the india meteorological department ( imd ) has defined heat wave differently in two categories . the first category includes places where the normal maximum temperature is greater than 40c . in such regions , if the day temperature exceeds by 34c above the normal , it is said to be affected by a heat wave . similarly , when the day temperature is 5c or more than the normal , severe heat wave conditions persist . the second category considers the regions where the normal maximum is 40c or less . in these areas , if the day temperature is 56c above the normal , then the place is said to be affected by a moderate heat wave . a severe heat wave condition exists when the day temperature exceeds the normal maximum temperature over the place by 6c . a recent study by sinha ray et al . has shown that the average annual loss of human life due to heat wave over india is 153 . in a report from the imd , it has been observed that the most affected states were where the normal temperature was greater than 40c . these authors have also noted that loss of human lives were more in regions with poor socioeconomic conditions of the people than in a state with better living conditions . 
they also mentioned that the impact of heat waves over bihar , punjab and parts of maharashtra was more as it may create more water scarcity and adversely affect agriculture . these authors also correlated heat wave mortality with el - nino events . secondly , living conditions , regional meteorological factors , socioeconomic status and genetic predisposition of patients are all different . however , statistical significance has poor strength as the sample size was too small for intragroup comparison . our patients were mainly from the northern part of india , particularly punjab state , which is an agricultural state with few industrial cities . although both these population subsets work in similar hot and humid conditions , providing a suitable observational ground for study , these patients differ in a number of other aspects . the compounding effect of a high level of industrial smoke and high humidex has resulted in respiratory distress in stable chest patients in this city . being a pioneer work , there are limitations in our study , the major one being poor power because of the small sample size in group i. other bias factors such as different types of surgeries , asa status variations , polluted weather , academic cycle , type of emergency , infections , etc . need to be adjusted to minimize bias . even within the same city , in addition to the regional difference in heat vulnerability , a higher vulnerability had been seen within the downtown areas of all cities compared with the suburban areas , regardless of the city 's overall vulnerability . hot and humid weather adversely affects the outcome in terms of prolonged hospital stay and complication rate in elderly surgical patients . there is a need to explore the impact of hot and humid weather in vulnerable groups of patients such as laborers working in fields coming for emergency surgery . different heat variables have a similar effect on patient outcome . keeping in mind the study design limitations and sample size , this groundbreaking idea needs further corroboration with a large sample size in suitable high - risk patients to reduce the bias of hidden variables . we need to consider the effects of confounding factors such as air pollutants , socioeconomic status , living conditions , job profiles , rate of change of weather , etc . to study the impact of heat - related morbidity . we need to focus on multiple regions individually also , as geographical and racial differences affect the results . as hot weather affects farmers and industrial laborers because of prolonged sun exposure in the harvesting season and a hot , humid , polluted environment , respectively , this study may be more focused on this vulnerable subgroup to study the discernible impact . we also hypothesize that stabilization of patients in an air conditioned environment for 48 h in a hot climate prior to surgery may reduce the complication rate and improve the surgical outcome , especially in patients with poor reserve .
background : it is well known that heat wave is a major cause of weather related mortality in extreme of ages . while auditing our hospital mortality record , we found higher surgical mortality in the months of summer season which inspired us to look into the impact of hot climate in elderly surgical patients . materials and methods : an observational prospective cohort study was undertaken to study the impact of hot climate on elderly ( age > 60 yrs ) surgical patients over one year when outside temperature was more than 20c . 98 elderly patients requiring general anaesthesia for surgery were enrolled . patients were grouped on the basis of peak outdoor temperature with a cut off value of 30c . group i - when peak outdoor temperature ranged between 20 - 30c ( comfortable zone ) and group ii - when peak outdoor temperature ranged above 30c . to reduce the bias , inclusion and exclusion criterion were defined . meteorological factors , patient characteristics , surgical risk factors and other related data were noted . data was analyzed using student 's t - test and z - test for statistical significance . results : there were statistically significant complications and prolonged hospital stay in group ii as compared to i ( 13.21 ± 6.44 vs 9.81 ± 3.54 days , p value = 0.01 ) on univariate analysis . high risk patients had more complications in hot weather . stepwise multivariate regression analysis showed higher adverse impact of poor physical and cardiac status than hot climate . conclusion : hot and humid weather adversely affect the perioperative outcome in elderly surgical patients . patients with poor reserves are at greater perioperative risk during hot and humid climate .
this study was approved by the animal care and use committee at gyeongsang national university ( approval number : gnu-140128-e0006 ) . these birds were hospitalized in the wildlife center in the province of gyeongsangnam - do , republic of korea , due to various orthopedic injuries and had received appropriate medical treatments to recover their health . all vultures in this study were suspected of being between 1 and 2 years of age . body weights and lengths of the birds were ranged from 7.5 to 9.0 kg ( mean sd of body weight , 8.27 0.40 kg ) and from 98 to 104 cm ( mean sd of body length , 100.9 2.0 cm ) , respectively . although these cinereous vultures were recovered to normal health , they had not been released because of disability . the health status of ten cinereous vultures was assessed by physical examination , diagnostic imaging , cbc and serum biochemical analyses . in addition , no clinical diseases were verified other than their disability caused by the previous orthopedic injuries . after recovering to normal health conditions , the birds were acclimated to the rehabilitation program for at least 7 days prior to the study . the birds were then transferred to temporary individual housing units , and food was withheld for 12 hr prior to anesthesia , although they were allowed access to water ad libitum . the experimental protocol was modified based on dose - related effects obtained from a previous study by naganobu and hagio . differences based on gender were not investigated , because cinereous vultures do not show sexual dimorphism . seoul , korea ) of an isoflurane vaporizer ( multiplus , royal medical co. , ltd . , pyeongtaek , korea ) setting in 100% oxygen ( 3 l / min ) delivered via a mask . a non - rebreathing anesthetic circuit ( modified jackson rees anesthesia circuit ) was used in this study . when voluntary movement of the nictitating membrane was absent for 30 sec , the trachea was intubated with an uncuffed endotracheal tube . the endotracheal tubes with internal diameter of 8.0 mm ( n=8 ) and 7.5 mm ( n=2 ) were chosen to prevent leakage of the anesthetic agent according to the size of cinereous vultures . each bird was restrained in dorsal recumbency to minimize the impairment of sternal motion . after endotracheal intubation , anesthesia was maintained by spontaneous ventilation with isoflurane in 100% oxygen ( 3 l / min ) . the end tidal co2 partial pressure ( petco2 ) and the inspired and end - tidal isoflurane concentrations ( etiso ) and respiratory rate ( rr ) were monitored with a calibrated multigas monitor ( as3 , datex - ohmeda division instrumentarium corp . , helsinki , finland ) . initial anesthesia was maintained at 1.0% etiso . to measure the systolic arterial blood pressure ( sap ) , diastolic arterial blood pressure ( dap ) , and mean arterial blood pressure ( map ) and obtain arterial blood samples for the blood gases and acid - base analysis , a sterile 24-gauge catheter ( angiocath plus , becton dickinson , sandy , ut , u.s.a . ) was inserted aseptically through a small cut - down incision into the superficial ulnar artery . the arterial catheter was connected to a blood pressure transducer ( transpac iv monitoring kit , abbott critical care systems , north chicago , il , u.s.a . ) and a pressure line filled with saline , which was connected to the monitor for heart rate ( hr ) and direct blood pressure ( bp ) monitoring . the values of the blood pressure were zeroed to the atmospheric pressure at the level of the chest . 
a lead ii electrocardiogram ( ecg ) was monitored using the monitor described above during the procedure . body temperature ( bt ) was also recorded with an oral probe linked to the monitor . throughout the procedure , the bt was maintained at 39 - 40c with a circulating water blanket ( medi - therm , gaymer industries inc . , orchard park , ny , u.s.a . ) and a forced warm air blanket ( warm air hyperthermia system , cincinnati sub - zero products inc . , ansan , korea ) . intravenous fluid was administered via a 24-gauge catheter ( angiocath plus , becton dickinson ) placed in the ulnar vein during the procedure at a rate of 10 ml / kg / hr . the environmental temperatures of the operating room were adjusted to between 20c and 23c by the facilities and plants engineering department of the wildlife center . after instrumentation , each bird was held initially at 1.0 , 1.5 , 2.0 , 2.5 and 3.0% etiso and then at 1.0% etiso for an equilibration period of 15 min in the given order . at the end of the equilibration period , the direct bp , hr , rr and petco2 were recorded , and 0.4 ml arterial blood was collected for ph , partial pressure of arterial oxygen ( pao2 ) , partial pressure of arterial carbon dioxide ( paco2 ) , bicarbonate ( hco3 ) and base excess ( be ) analyses . blood gas analysis ( vetstat , idexx laboratories , inc . , westbrook , me , u.s.a . ) was performed using a 1 ml heparin washed syringe immediately after the blood was sampled . when arrhythmia or respiratory instabilities occurred during the procedure , the anesthetic concentration was immediately decreased to 1.0% etiso . after the procedure , the arterial catheter was removed , hemostasis was performed by manual compression , and the skin was sutured . the vaporizer was then turned off , and the birds were allowed to breathe 100% oxygen until extubation , which was performed when the presence of a swallowing or cough response was observed . time to induction and extubation , and total anesthesia time were recorded for each anesthetic procedure . time to induction was defined as the time from the beginning of isoflurane inhalation to intubation . time to extubation was defined as the time from the cessation of isoflurane administration to extubation . total anesthesia time was defined as the time from the intubation to the cessation of isoflurane administration . all statistical analyses were performed using spss statistics 21 ( ibm corp . , armonk , ny , u.s.a . ) . repeated measures analysis of variance ( anova ) and the bonferroni correction technique for multiple comparisons were used to compare the hr , rr , direct bp , petco2 , bt and blood gas data at each designated anesthetic concentration .
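The authors ran their repeated-measures ANOVA with Bonferroni correction in SPSS; the snippet below is only a minimal Python sketch of that kind of analysis, using an invented toy data set and invented column names, with pairwise comparisons against the 1.0% baseline corrected by simply multiplying each p-value by the number of tests.

```python
# Minimal sketch of a repeated-measures comparison like the one described above
# (the authors used SPSS; this is not their code). The data frame and its column
# names are invented for illustration.
import pandas as pd
from scipy import stats
from statsmodels.stats.anova import AnovaRM

# Toy long-format data: heart rate for each of 10 birds at three etiso levels.
data = pd.DataFrame({
    "bird": [b for b in range(1, 11) for _ in range(3)],
    "etiso": ["1.0", "2.0", "3.0"] * 10,
    "hr": [95, 160, 195, 80, 130, 170, 120, 210, 230, 90, 150, 180, 100, 170, 200,
           130, 220, 250, 70, 110, 150, 110, 190, 210, 85, 140, 175, 105, 180, 205],
})

# Repeated-measures ANOVA across the within-subject factor (isoflurane level).
result = AnovaRM(data, depvar="hr", subject="bird", within=["etiso"]).fit()
print(result.anova_table)

# Bonferroni-corrected pairwise comparisons against the 1.0% baseline.
baseline = data[data.etiso == "1.0"].sort_values("bird")["hr"].to_numpy()
n_tests = 2
for level in ["2.0", "3.0"]:
    values = data[data.etiso == level].sort_values("bird")["hr"].to_numpy()
    t, p = stats.ttest_rel(values, baseline)
    print(level, "vs 1.0%: corrected p =", round(min(p * n_tests, 1.0), 4))
```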
mean ± sd induction and extubation times were 297.2 ± 33.9 sec and 308.1 ± 37.8 sec , respectively . three of ten vultures developed adverse cardiorespiratory effects as the isoflurane concentration was increased from 2.5% to 3.0% etiso . specifically , sinus arrhythmia occurred in one vulture , and two vultures showed difficulty breathing . as soon as symptoms were observed , the anesthetic concentration was immediately decreased to 1.0% etiso , at which time their arrhythmias and dyspnea disappeared . following the procedure , all vultures recovered from the anesthesia uneventfully , and no further cardiorespiratory problems were observed . all of the vultures tested in this experiment were treated and put through the rehabilitation program .
the mean hr, rr, arterial bp (sap, map and dap), petco2 and bt of cinereous vultures at the various isoflurane concentrations are summarized in table 1.

table 1. dose-response effects on temperature, cardiovascular and respiratory parameters following isoflurane anesthesia with spontaneous ventilation in cinereous vultures (aegypius monachus)

variable         | 1.0% etiso (first, n=10) | 1.5% etiso (n=10) | 2.0% etiso (n=10) | 2.5% etiso (n=10) | 3.0% etiso (n=7) | 1.0% etiso (second, n=10)
hr (beats/min)   | 99 ± 23                  | 124 ± 35          | 170 ± 54          | 198 ± 60          | 199 ± 49         | 100 ± 22
rr (breaths/min) | 17 ± 12                  | 13 ± 10           | 12 ± 4            | 13 ± 5            | 14 ± 5           | 11 ± 5
sap (mmhg)       | 157 ± 22                 | 154 ± 21          | 148 ± 18          | 138 ± 15          | 127 ± 7          | 156 ± 15
map (mmhg)       | 130 ± 18                 | 124 ± 21          | 117 ± 20          | 106 ± 15          | 97 ± 8           | 122 ± 13
dap (mmhg)       | 109 ± 20                 | 103 ± 24          | 96 ± 25           | 84 ± 19           | 71 ± 10          | 92 ± 15
petco2 (mmhg)    | 41 ± 8                   | 46 ± 8            | 54 ± 7            | 58 ± 9            | 62 ± 11          | 48 ± 5
bt (°c)          | 39.8 ± 0.6               | 39.7 ± 0.5        | 39.7 ± 0.5        | 39.7 ± 0.4        | 39.8 ± 0.4       | 39.7 ± 0.4

a) values are presented as the mean ± sd. abbreviations: hr, heart rate; rr, respiratory rate; sap, systolic arterial blood pressure; map, mean arterial blood pressure; dap, diastolic arterial blood pressure; petco2, end-tidal carbon dioxide partial pressure; bt, body temperature. b) p<0.01, significant difference compared with the values at the first application of 1.0% etiso. c) p<0.05, significant difference compared with the values at the first application of 1.0% etiso. e) p<0.05, significant difference compared with the values at 1.5% etiso.

the bt remained constant throughout the procedures. the mean hr, arterial bp and petco2 changed significantly with changes in isoflurane concentration. when the vultures were subjected to 2.0, 2.5 and 3.0% etiso, the hr and petco2 increased significantly relative to the values obtained at the first application of 1.0% etiso. in addition, there were significant increases in hr and petco2 from 2.0 to 2.5% and from 2.5 to 3.0% etiso. when the vultures were subjected to 3.0% etiso, the arterial bp decreased significantly compared to the values obtained at the first application of 1.0% etiso.

the mean arterial blood gas and acid-base changes of cinereous vultures at the various isoflurane concentrations are summarized in table 2.

table 2. dose-response effects on arterial blood gas and acid-base balance following isoflurane anesthesia with spontaneous ventilation in cinereous vultures (aegypius monachus)

variable       | 1.0% etiso (first, n=10) | 1.5% etiso (n=10) | 2.0% etiso (n=10) | 2.5% etiso (n=10) | 3.0% etiso (n=7) | 1.0% etiso (second, n=10)
ph             | 7.50 ± 0.05              | 7.46 ± 0.05       | 7.41 ± 0.06       | 7.33 ± 0.06       | 7.24 ± 0.06      | 7.48 ± 0.05
pao2 (mmhg)    | 518 ± 31                 | 522 ± 26          | 512 ± 20          | 502 ± 18          | 457 ± 88         | 527 ± 19
paco2 (mmhg)   | 38 ± 7                   | 41 ± 6            | 48 ± 9            | 60 ± 12           | 76 ± 15          | 39 ± 6
hco3 (mmol/l)  | 26.8 ± 3.6               | 26.9 ± 2.7        | 27.8 ± 2.7        | 28.8 ± 2.9        | 29.8 ± 3.4       | 26.0 ± 2.7
be (mmol/l)    | 4.3 ± 2.9                | 3.5 ± 2.2         | 2.7 ± 1.7         | 1.5 ± 1.9         | 0.1 ± 2.8        | 3.1 ± 2.3

a) values are presented as the mean ± sd. abbreviations: pao2, partial pressure of arterial oxygen; paco2, partial pressure of arterial carbon dioxide; hco3, calculated bicarbonate concentration; be, calculated base excess. b) p<0.01, significant difference compared with the values at the first application of 1.0% etiso. c) p<0.05, significant difference compared with the values at the first application of 1.0% etiso. g) p<0.05, significant difference compared with the values at 2.5% etiso.

the mean arterial ph, pao2 and paco2 changed significantly with changes in the isoflurane concentration. when the vultures were subjected to 2.0, 2.5 and 3.0% etiso, the arterial ph decreased significantly compared to the value obtained at the first application of 1.0% etiso. in addition, there were significant decreases in arterial ph from 2.0 to 2.5% and from 2.5 to 3.0% etiso. when the vultures were subjected to 3.0% etiso, the arterial pao2 decreased significantly relative to the value obtained at the first application of 1.0% etiso. when the vultures were subjected to 2.5 and 3.0% etiso, the arterial paco2 increased significantly relative to the value obtained at the first application of 1.0% etiso, and there was a significant further increase in arterial paco2 as the etiso increased from 2.5 to 3.0%. the mean be values changed significantly at 3.0% etiso relative to the value obtained at the first application of 1.0% etiso, whereas the mean hco3 values did not change significantly with changes in isoflurane concentration.

in raptors, anesthesia with isoflurane has been reported to result in rapid induction and recovery, and the anesthetic depth is easily controlled.
even allowing for the reported adverse cardiac effects of isoflurane in raptors, such as tachycardia, hypertension and arrhythmias, this anesthetic agent was considered a reasonable choice for anesthetizing the birds because of its overall cardiorespiratory stability. thus, the results of the present study indicate that isoflurane can be used safely by controlling the anesthetic concentration. etiso concentrations of 1.5 to 2.0% were considered adequate for anesthetic maintenance with minimal adverse effects in cinereous vultures. according to previous studies [17, 18, 23], although the minimum alveolar concentration (mac) is not appropriate as an index of potency for inhalant anesthetics in birds, the minimum anesthetic concentration has been used as a similar index to determine the anesthetic dose. the minimum anesthetic concentration of isoflurane is 1.3% in ducks, 1.44% in cockatoos and 1.07% in captive thick-billed parrots. factors that may influence the minimum anesthetic concentration include age, circadian rhythm, the methodology used to determine the minimum anesthetic concentration, severe hypercapnia, severe hypoxemia, changes in body temperature, severe hypotension and acidemia or alkalemia [16, 24]. therefore, in the present study, the clinical etiso concentration was used as the index of inhalant anesthetic dose. many previous studies [8, 18, 19, 21, 32] of avian anesthesia have not included measurement of awake cardiovascular parameters. although comparison of awake and anesthetized parameters is important for evaluating the significance of the adverse effects caused by anesthetics, birds can be extremely sensitive to the stress induced by handling. in birds that are restrained or excited, the hr can be 184 to 401% higher than that of birds that are caged, free from excitement or at rest. in our pilot study, measurement of the cardiorespiratory parameters was attempted while the vultures were awake, but it was not possible to maintain conditions stable enough to collect reliable data. sinus arrhythmia is defined as a patterned, irregular sinus rhythm with alternate slowing and acceleration of the hr. in the current study, three of the cinereous vultures developed cardiorespiratory problems (sinus arrhythmia and respiratory instability) when the etiso was increased from 2.5 to 3.0%; therefore, it can be assumed that the anesthetic index is greater than 2.5%. sinus arrhythmia at a high hr was identified via ecg in one vulture of this study. anesthesia at 3.5% etiso has been associated with cardiac arrhythmias, such as second-degree atrioventricular block, in bald eagles, probably because of an increase in vagal tone. premature ventricular contractions have been described in isoflurane-anesthetized ducks; however, the arrhythmias resolved when the isoflurane concentration was reduced, which might suggest that higher anesthetic concentrations can more easily induce arrhythmia in birds. in avian species, anesthetic dose-related changes in cardiorespiratory parameters have been reported during inhalational anesthesia. spectral doppler-derived cardiac parameters, including hr and blood flow velocities, were found to decrease significantly in common buzzards anesthetized with isoflurane.
in chickens [17, 18], dose-related decreases in arterial bp were observed, but hr did not differ when the chickens were anesthetized with isoflurane or sevoflurane during controlled ventilation. when ducks were anesthetized with isoflurane during spontaneous ventilation, there was no significant dose-dependent difference in the means of either hr or arterial bp. however, another study demonstrated a significant dose-dependent increase in hr when ducks were anesthetized with halothane during spontaneous ventilation. in this study, a dose-dependent increase in hr and decrease in arterial bp were observed in cinereous vultures anesthetized with isoflurane during spontaneous ventilation. therefore, the pattern of dose-dependent changes in cardiorespiratory parameters may vary depending on the species of bird, the type of inhalant anesthetic and the ventilation conditions. in addition, as reported in mammals, a decrease in systemic vascular resistance caused by inhalant anesthetics seems to be the most probable cause of the hypotension observed in the current study. respiratory depression caused by anesthesia with isoflurane has been investigated in different bird species during spontaneous ventilation [11, 12, 25, 32]. species-dependent characteristics, such as the absence of a diaphragm, the restriction of sternal excursions by dorsal recumbency and the decreased responsiveness of intrapulmonary chemoreceptors to co2 caused by inhalant anesthetics, make birds more sensitive to respiratory depression caused by inhalant anesthetics [5, 10, 22]. in the present study, an increase in petco2 and paco2 and a decrease in ph were associated with the respiratory depression caused by anesthesia with isoflurane. in a previous study, inhalant-induced hypoventilation resulted in increased petco2, paco2 and hco3 and decreased ph. an fio2 > 40% has been shown to lead to hypoventilation in ducks anesthetized with isoflurane, likely due to depression of o2-sensitive chemoreceptors and a depressed ventilatory drive. in addition, previous studies [11, 18, 19, 26] reported respiratory acidosis in ducks anesthetized with isoflurane and in chickens anesthetized with sevoflurane during spontaneous ventilation. moreover, in humans, hypercapnia has been shown to cause increased sympathetic activity and circulating catecholamine concentrations, peripheral vasodilation and myocardial depression. taken together, these observations suggest that the respiratory and cardiovascular effects of isoflurane anesthesia interact with the cardiorespiratory system in a complex manner. depression of the central nervous system may be associated with a ph below 7.2; moreover, a decreased ph may lead to organ damage and impaired activity of some enzymes, causing alterations in metabolism. dose-dependent decreases in arterial ph were observed during isoflurane anesthesia in the present study. a non-rebreathing anesthetic circuit, such as the modified jackson rees anesthesia circuit used in this study, is considered ideal for use in birds because it provides minimal resistance to patient ventilation. however, respiratory depression caused by inhalant anesthetics appears to be much more pronounced in birds than in mammals. this may reflect the fact that the thoracic musculature in birds plays an important role in ventilation and that these muscles relax during anesthesia.
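the respiratory origin of the acidosis can be illustrated numerically: feeding the table 2 means into the standard henderson-hasselbalch relation for the bicarbonate buffer, ph ≈ 6.1 + log10(hco3 / (0.03 × paco2)), reproduces the measured ph values reasonably well while hco3 stays roughly constant. the short python sketch below performs this check; note that the pka of 6.1 and the co2 solubility coefficient of 0.03 mmol/l/mmhg are human plasma values used here only as an approximation, not constants reported by the authors.

```python
import math

# table 2 mean values at each end-tidal isoflurane concentration
data = {
    # etiso level: (paco2 in mmHg, hco3 in mmol/l, measured ph)
    "1.0% (first)":  (38, 26.8, 7.50),
    "1.5%":          (41, 26.9, 7.46),
    "2.0%":          (48, 27.8, 7.41),
    "2.5%":          (60, 28.8, 7.33),
    "3.0%":          (76, 29.8, 7.24),
    "1.0% (second)": (39, 26.0, 7.48),
}

PKA = 6.1               # apparent pKa of the bicarbonate buffer (human plasma value, assumed)
CO2_SOLUBILITY = 0.03   # mmol/l per mmHg (human plasma value, assumed)

for level, (paco2, hco3, measured_ph) in data.items():
    predicted_ph = PKA + math.log10(hco3 / (CO2_SOLUBILITY * paco2))
    print(f"{level:>14}: predicted ph {predicted_ph:.2f} vs measured {measured_ph:.2f}")
```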
when aerobic energy metabolism is impaired, or when there is a low hematocrit or lactic acidemia, a negative be is expected as an indicator of a metabolic acid-base disturbance. since the be remained above −2 mmol/l in the present study, the disturbance in ph was more likely due to respiratory rather than metabolic changes. a previous study reported that saline administration can result in dilutional acidosis. in the present study, 0.9% normal saline was administered during the procedure at a rate of 10 ml/kg/hr. although each vulture received about 100–150 ml of normal saline, the values of ph, paco2 and hco3 were within the normal range during the anesthetic phase at the second application of 1.0% etiso. therefore, the present study indicates that the dilution associated with saline administration at a rate of 10 ml/kg/hr did not affect the ph of the birds. in the current study, the duration of anesthesia and the specific concentration of anesthetic may be significant factors influencing changes in cardiorespiratory parameters. a previous study reported that anesthesia with isoflurane did not cause a significant change in the arterial bp of spontaneously ventilating hispaniolan amazon parrots, and that a decrease in hr was noted only after 45 min of anesthesia. conversely, another study conducted by joyner et al. reported that the hr and rr decreased significantly over time, whereas the arterial bp increased significantly over time, in bald eagles anesthetized with isoflurane during spontaneous ventilation. however, since there is very limited information describing the relationship between the duration of anesthesia and changes in the cardiorespiratory parameters of birds, it is difficult to define exactly how a prolonged period of anesthesia would affect these parameters. in the present study, dose-dependent increases in petco2 and paco2 were measured, and the rr at the second application of 1.0% etiso appeared slightly lower than that at the first application, although this difference was not statistically significant. based on these results, and although spirometry was not used to measure breathing capacity, respiratory depth probably could not be maintained because of anesthetic-induced muscle relaxation. although the duration of anesthesia had no significant effect on the cardiorespiratory parameters investigated in the current study, it is difficult to be certain that the duration of anesthesia would have no effect on the cardiorespiratory parameters of the cinereous vulture under different clinical settings. in conclusion, when the isoflurane concentration was increased during spontaneous ventilation in cinereous vultures, a dose-dependent increase in hr and petco2 with minimal changes in rr, a decrease in arterial bp and respiratory acidosis were observed. overall, isoflurane is a suitable choice for anesthesia of spontaneously breathing cinereous vultures for diagnostic or surgical procedures. however, careful control of the anesthetic dose and ventilation conditions is recommended for this species.
anesthesia is an essential component of the diagnosis and treatment of wild animals and of examinations of their health condition. not only is anesthesia an essential tool for minimizing the stress of the patients and providing an opportunity to deliver accurate and safe procedures, but it also ensures the safety of the medical crew. this study was conducted to investigate the dose-response cardiorespiratory effects of isoflurane during spontaneous ventilation in ten cinereous vultures. each bird was held at end-tidal isoflurane concentrations (etiso) of 1.0, 1.5, 2.0, 2.5 and 3.0% and then again at 1.0%, in the given order, with an equilibration period of 15 min at each concentration. at the end of each equilibration period, the direct blood pressure (bp), heart rate (hr), respiratory rate (rr) and end-tidal co2 partial pressure (petco2) were recorded, and blood gas analysis was performed. increasing the isoflurane concentration during spontaneous ventilation led to a dose-dependent increase in hr and petco2 with minimal changes in rr, a decrease in arterial bp and respiratory acidosis. overall, isoflurane is a suitable choice for anesthesia of spontaneously breathing cinereous vultures for diagnostic or surgical procedures.
detecting causal relationships in data is an important data analytics task, as causal relationships can provide better insights into data as well as actionable knowledge for correct decision making and for intervening in processes at risk in a timely manner. causal relationships are normally identified with experiments, such as randomised controlled trials @xcite, which are effective but expensive and often impossible to conduct. causal relationships can also be found by observational studies, such as cohort studies and case control studies @xcite. an observational study takes a causal hypothesis and tests it using samples selected from historical data or collected passively over a period of time while observing the subjects of interest. observational studies therefore need domain experts' knowledge and interaction in data selection or collection, and the process is normally time consuming. currently there is a lack of scalable and automated methods for causal relationship exploration in data. such methods should be able to find causal signals in data without requiring domain knowledge or any hypothesis established beforehand. the methods must also be efficient enough to deal with the increasing amount of big data. classification methods are fast and have the potential to become practical substitutes for finding causal signals in data, since the discovery of causal relationships is a type of supervised learning when the target or outcome variable is fixed. decision trees @xcite are a good example of classification methods, and they have been widely used in many areas, including social and medical data analyses. however, classification methods are not designed with causal discovery in mind, and a classification method may find false causal signals in data and miss true causal signals. for example, figure [fig_hypnotizeddt] shows a decision tree built from a hypothesised data set of the recovery of a disease. based on the decision tree, we may conclude that the use of tinder (a matchmaking mobile app) helps cure the disease. however, this is misleading, since the majority of people using tinder are young whereas most people not using tinder are old; young people will recover from the disease anyway, and old people have a lower chance of recovery. the misleading decision tree is caused by an unfair comparison between two different groups of people. it may be a good classification tree for predicting the likelihood of recovery, but it does not imply the causes of recovery, and its nodes do not have any causal interpretation. a classification method fails to take account of the effects of other variables on the class or outcome variable when examining the relationship between a variable and the class variable, and this is the major reason for the false discoveries (of causal relationships). for example, when we study the relationship between using tinder and the recovery of a disease, the effect of other variables, such as the age, gender and health condition of the patients (who may or may not use tinder), should be considered. the objective is not simply to maximise the difference of the conditional probabilities of recovered and not recovered conditioning on the use of tinder, as it is when a classifier is being sought. in this paper, we design a causal decision tree (cdt) where nodes have causal interpretations.
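to make the tinder example above concrete, the following simulation (a hypothetical sketch in python; the probabilities and variable names are invented for illustration and do not come from the paper) generates data in which age drives both tinder use and recovery. the naive comparison p(recovered | tinder) − p(recovered | no tinder) looks strongly positive, while the same comparison within each age stratum is essentially zero, which is exactly the false causal signal a purely classification-driven split would pick up.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# age is the confounder: it influences both tinder use and recovery
young = rng.random(n) < 0.5
tinder = rng.random(n) < np.where(young, 0.8, 0.1)      # young people use tinder far more often
recovered = rng.random(n) < np.where(young, 0.9, 0.3)   # recovery depends on age only, not on tinder

def recovery_gap(mask):
    """difference in recovery rate between tinder users and non-users within `mask`."""
    users, non_users = mask & tinder, mask & ~tinder
    return recovered[users].mean() - recovered[non_users].mean()

print("naive (whole data)  :", round(recovery_gap(np.ones(n, dtype=bool)), 3))
print("within young stratum:", round(recovery_gap(young), 3))
print("within old stratum  :", round(recovery_gap(~young), 3))
```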
as presented in the following sections, our method follows a well established causal inference framework, the potential outcome model, and it makes use of a classic statistical test, the mantel-haenszel test. the proposed cdt is practical for uncovering causal signals in large data. the paths in a cdt are not interpreted as "if-then" first order logic rules as in a normal decision tree. for example, figure [fig_titanic_adult] (left) shows a cdt learnt from the titanic data set. it does not read as "if female = n then survived = n". a node in a cdt indicates a causal factor of the outcome attribute. the node 'female' indicates that being female or not is causally related to survived or not; the node 'thirdclass' shows that in the female group (the context), staying in a third class cabin or not is causally related to survived or not. the main contributions of this paper are as follows: * we systematically analyse the limitations of decision trees for causal discovery and identify the underlying reasons. * we propose the cdt method, which can be used to represent and identify simple and interpretable causal relationships in data, including big data. let @xmath0 be a predictive attribute and @xmath1 the outcome attribute, where @xmath2 and @xmath3. we aim to find out if there is a causal relationship between @xmath0 and @xmath1. for ease of discussion, we consider that @xmath4 is a treatment and @xmath5 the recovery, and we will establish whether the treatment is effective for the recovery. the potential outcome or counterfactual model @xcite is a well established framework for causal inference. here we introduce the basic concepts of the model and a principle for estimating the average causal effect, mainly following the introduction in @xcite. with the potential outcome model, an individual @xmath6 in a population has two potential outcomes for a treatment @xmath0: @xmath7 when taking the treatment and @xmath8 when not taking the treatment. we say that @xmath7 is the potential outcome in the treatment state and @xmath8 is the potential outcome in the control state. then we have the following definition: the individual level causal effect is defined as the difference of the two potential outcomes of an individual, i.e. @xmath9. in practice we can only observe one of the outcomes @xmath7 or @xmath8, since one person can be placed in either the treatment group (@xmath4) or the control group (@xmath10); one of the two potential outcomes has to be estimated. for this reason the potential outcome model is also called the counterfactual model. for example, we know that mary has a headache (the outcome) and she did not take aspirin (the treatment), i.e. we know @xmath8. the question is what the outcome would have been if mary had taken aspirin one hour ago, i.e. we want to know @xmath7 so as to estimate the individual causal effect of aspirin on mary's condition (having a headache or not). if we had both @xmath7 and @xmath8 for each individual, we could aggregate the causal effects of individuals in a population to get the average causal effect as defined below, where @xmath11 stands for the expectation operator in probability theory. the average causal effect of a population is the average of the individual level causal effects in the population, i.e. @xmath12 = e[y^1_i] - e[y^0_i]. note that @xmath6 is kept in the above formula, as in other work in the counterfactual framework, to indicate the individual level heterogeneity of potential outcomes and causal effects.
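as a toy illustration of these definitions (a hypothetical python sketch; the data and names are ours, not the paper's), the snippet below builds a small "oracle" population in which both potential outcomes of every individual are known, so the individual causal effects and the average causal effect can be computed directly — something that is impossible with real observational data, where only one potential outcome per individual is observed.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# oracle world: we pretend to know both potential outcomes for every individual
y0 = (rng.random(n) < 0.3).astype(int)                  # outcome without treatment
y1 = np.maximum(y0, (rng.random(n) < 0.4).astype(int))  # treatment can only help in this toy world

ice = y1 - y0        # individual causal effects (delta_i)
ace = ice.mean()     # average causal effect E[delta_i] = E[y1] - E[y0]
print("average causal effect:", round(ace, 3))

# in observational data only one outcome per individual is visible:
treated = rng.random(n) < 0.5   # random assignment here, so the naive contrast is unbiased
observed = np.where(treated, y1, y0)
naive = observed[treated].mean() - observed[~treated].mean()
print("naive estimate under random assignment:", round(naive, 3))
```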
assuming that a proportion @xmath13 of the samples take the treatment and a proportion @xmath14 do not, and that the sample size is large enough for the error caused by sampling to be negligible, given a data set @xmath15 the ace @xmath12 can be estimated as:

e_\textbf{d}[\delta_i] = \pi ( e_\textbf{d}[y^1_i | x_i = 1] - e_\textbf{d}[y^0_i | x_i = 1] ) + (1-\pi) ( e_\textbf{d}[y^1_i | x_i = 0] - e_\textbf{d}[y^0_i | x_i = 0] )   (1)

that is, the ace of the population is the ace in the treatment group plus the ace in the control group, where @xmath17 indicates that an individual takes the treatment and the causal effect is @xmath18; similarly, @xmath19 indicates that an individual does not take the treatment and the causal effect is @xmath20. in a data set, we can observe the potential outcomes in the treatment state for those treated, (@xmath21), and the potential outcomes in the control state for those not treated, (@xmath22). however, we cannot observe the potential outcomes in the control state for those treated, (@xmath23), or the potential outcomes in the treatment state for those not treated, (@xmath24). we have to estimate what the potential outcome, @xmath25, would be if an individual did not take the treatment (when in fact she has), and what the potential outcome, (@xmath24), would be if an individual took the treatment (when in fact she has not). with a data set @xmath15 we can obtain the following "naive" estimation of the ace:

e^{naive}_\textbf{d}[\delta_i] = e_\textbf{d}[y^1_i | x_i = 1] - e_\textbf{d}[y^0_i | x_i = 0]   (2)

the question is when the naive estimation (equation (2)) will approach the true estimation (equation (1)). if the assignment of individuals to the treatment and control groups is purely random, the estimation in equation (2) approaches the estimation in equation (1). in an observational data set, however, random assignment is not possible. how, then, can we estimate the average causal effect? a solution is perfect stratification. let the differences between individuals in a data set be characterised by a set of attributes @xmath27 (excluding @xmath0 and @xmath1), and let the data set be perfectly stratified by @xmath27. in each stratum, apart from the fact of taking the treatment or not, all individuals are indistinguishable from each other. under the perfect stratification assumption, we have:

e[y^1_i | x_i = 0, \textbf{s} = \textbf{s}_i] = e[y^1_i | x_i = 1, \textbf{s} = \textbf{s}_i]   (3)
e[y^0_i | x_i = 1, \textbf{s} = \textbf{s}_i] = e[y^0_i | x_i = 0, \textbf{s} = \textbf{s}_i]   (4)

where @xmath29 indicates a stratum of the perfect stratification. since individuals are indistinguishable within a stratum, unobserved potential outcomes can be estimated by observed ones. specifically, the mean potential outcome in the treatment state for those untreated is the same as that in the treatment state for those treated (equation (3)), and the mean potential outcome in the control state for those treated is the same as that in the control state for those untreated (equation (4)).
by replacing the unobservable terms in equation (1) using equations (3) and (4), we have:

e_\textbf{d}[\delta_i | \textbf{s} = \textbf{s}_i]
= \pi ( e_\textbf{d}[y^1_i | x_i = 1, \textbf{s} = \textbf{s}_i] - e_\textbf{d}[y^0_i | x_i = 1, \textbf{s} = \textbf{s}_i] ) + (1-\pi) ( e_\textbf{d}[y^1_i | x_i = 0, \textbf{s} = \textbf{s}_i] - e_\textbf{d}[y^0_i | x_i = 0, \textbf{s} = \textbf{s}_i] )
= \pi ( e_\textbf{d}[y^1_i | x_i = 1, \textbf{s} = \textbf{s}_i] - e_\textbf{d}[y^0_i | x_i = 0, \textbf{s} = \textbf{s}_i] ) + (1-\pi) ( e_\textbf{d}[y^1_i | x_i = 1, \textbf{s} = \textbf{s}_i] - e_\textbf{d}[y^0_i | x_i = 0, \textbf{s} = \textbf{s}_i] )
= e_\textbf{d}[y^1_i | x_i = 1, \textbf{s} = \textbf{s}_i] - e_\textbf{d}[y^0_i | x_i = 0, \textbf{s} = \textbf{s}_i]
= e^{naive}_\textbf{d}[\delta_i | \textbf{s} = \textbf{s}_i]

as a result, the naive estimation approximates the true average causal effect, and we have the following observation (principle for estimating the average causal effect): the average causal effect can be estimated by taking a weighted sum of the naive estimators in the stratified sub data sets. this principle ensures that each comparison is between individuals with no observable differences, and hence the estimated causal effect does not result from factors other than the studied one. in the following, we will use this principle to estimate causal effects in observational data sets. applying the principle, we have:

e[\delta_i] = ( \pi e[y^1_i | x_i = 1] + (1-\pi) e[y^1_i | x_i = 0] ) - ( \pi e[y^0_i | x_i = 1] + (1-\pi) e[y^0_i | x_i = 0] ) = e[y^1_i | x_i = 1, \textbf{s}_i] - e[y^0_i | x_i = 0, \textbf{s}_i]

in the above formulae, both potential outcomes are discoverable, since individuals in the data are interchangeable within a stratum. removing the superscripts and subscripts, the average causal effect can be written as

e[y = 1 | x = 1, \textbf{s}] - e[y = 1 | x = 0, \textbf{s}]

the key to estimating the causal effect of a treatment therefore becomes finding perfectly stratified samples. decision trees are a popular classification model with two types of nodes: branching nodes and leaf nodes. a branching node represents a predictive attribute, and each of its values denotes a choice and leads to another branching node or to a leaf node representing a class. now we use the potential outcome model to explain why decision trees may not encode causality. the cdt built from the titanic data set is shown in figure [fig_titanic_adult] (left). at the first level, the tree reveals a causal relationship between 'female' (gender) and 'survived'. this relationship is sensible, as we know that a female was likely to have higher priority to board the limited number of lifeboats. at the second level, the tree gives a context specific causal relationship between 'thirdclass' and 'survived' in the female group, which is also reasonable, as passengers in the lowest class cabins would have had less chance to escape. therefore the tree is simple, but it gives insights into the causes of survival for people on the titanic, and the results are logical. the adult data set (table [tab_titanic_adult_data]) was retrieved from the uci machine learning repository @xcite and is an extraction of the 1994 usa census database. it is a well known classification data set used to predict whether or not a person earns over 50k in a year.
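before turning to the adult data set, the stratification principle above can be sketched in a few lines (hypothetical python; function and column names are ours, not the paper's implementation): the average causal effect is estimated as a weighted sum of the naive contrasts computed within each stratum, with weights proportional to stratum size.

```python
import pandas as pd

def stratified_ace(df: pd.DataFrame, treatment: str, outcome: str, strata_cols: list) -> float:
    """weighted sum of per-stratum naive estimators, following the stratification principle."""
    total, effect = len(df), 0.0
    for _, stratum in df.groupby(strata_cols):
        treated = stratum[stratum[treatment] == 1][outcome]
        control = stratum[stratum[treatment] == 0][outcome]
        if len(treated) == 0 or len(control) == 0:
            continue  # stratum has no comparable pair of groups; it contributes nothing here
        effect += (len(stratum) / total) * (treated.mean() - control.mean())
    return effect

# usage on a toy data frame with a single stratifying attribute 'age_young'
df = pd.DataFrame({
    "age_young": [1, 1, 1, 1, 0, 0, 0, 0],
    "treatment": [1, 1, 0, 0, 1, 0, 0, 0],
    "recovered": [1, 1, 1, 0, 0, 0, 1, 0],
})
print(stratified_ace(df, "treatment", "recovered", ["age_young"]))
```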
we recoded the data set to make the causes of high/low income clearer and more easily understandable. the objective is to find the causal factors of high (or low) income. from the cdt built with the data set (figure [fig_titanic_adult], right), there is a causal relationship between 'age @xmath33 30' (or not) and income, i.e. young adults have lower income, which follows common knowledge. for older adults, years of education is causally related to income, i.e. adults with less than 12 years of education would have low income, which also makes good sense. for older and highly educated adults, gender affects income, such that females have lower income than males (an unfortunate finding, but it could be true in reality). in the highly educated and older male group, occupation is a causal factor of income, so that those in professional occupations earn more than those not in professional occupations. we see that the cdt gives sensible explanations of the causes of high or low income. from the normal decision tree built from the adult data set (figure [fig_adultc45]), we have observed that: (1) a normal decision tree may be large in order to achieve high classification accuracy, but a large tree reduces its interpretability, and the objectives of causal discovery and classification may not be consistent; (2) causality based and classification based criteria do not make the same choices. at the top level, the branching attribute of the normal decision tree is 'education-num @xmath34 12', while the branching attribute of the cdt is 'age @xmath33 30'. from common knowledge, age should have a stronger influence on income than years of education, since young adults usually have lower income, simply because of their lack of working experience. formally, there are more strata (13.3% of all strata) violating the causal relationship between 'education-num @xmath34 12' and '@xmath34 50k' than strata (7.75% of all strata) violating the causal relationship between 'age @xmath33 30' and '@xmath33 50k'. this is why the cdt chooses 'age @xmath33 30' as the root. in contrast, since the attribute 'education-num @xmath34 12' has a higher information gain than 'age @xmath33 30', it is chosen first to split the data set in a normal decision tree. different choices lead to different trees. in this example, the different choices do not cause a significant difference, as 'age @xmath33 30' is chosen immediately after 'education-num @xmath34 12', but in other data sets the difference could be significant. for causal discovery, it is better to choose a causality based criterion. with the above data sets, the first few levels at the top of a normal decision tree are quite interpretable, since the causal relationships are evident in the data. a cdt and a normal decision tree will differ when the causal relationships are subtle or noisy. to demonstrate this point, we built a cdt and a normal decision tree from a randomised data set with no relationships at all: the values of each of the 10 attributes were randomly drawn with 50% 1s and 50% 0s. when we tried to learn a cdt from the data, no tree was returned, which is expected. however, c4.5 grew a decision tree, as in figure [fig_randomdatac45]. this result shows that the relationships in a normal decision tree may not be meaningful at all and that a more interpretable decision tree, like a cdt, is necessary.
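for contrast with the causal criterion, the classification criterion mentioned above can be sketched as follows (hypothetical python; a minimal entropy-based information gain for a binary split, not the authors' implementation or the weka/c4.5 code). the toy arrays below are invented purely to show how the gain of two candidate branching attributes would be compared.

```python
import numpy as np

def entropy(labels: np.ndarray) -> float:
    """shannon entropy of a 1-d array of class labels."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def information_gain(split: np.ndarray, labels: np.ndarray) -> float:
    """entropy reduction obtained by splitting `labels` on a binary attribute `split`."""
    gain = entropy(labels)
    for value in (0, 1):
        mask = split == value
        if mask.any():
            gain -= mask.mean() * entropy(labels[mask])
    return gain

# toy comparison of two candidate branching attributes for a binary income label
income = np.array([0, 0, 0, 1, 1, 0, 1, 1])
age_le_30 = np.array([1, 1, 1, 0, 0, 1, 0, 0])
edu_le_12 = np.array([1, 1, 0, 0, 1, 1, 0, 0])
print("gain(age<=30):", round(information_gain(age_le_30, income), 3))
print("gain(edu<=12):", round(information_gain(edu_le_12, income), 3))
```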
to show that cdt is competent at discovering causal relationships, we use 5 groups of synthetic data sets, each group containing 10 data sets with the same number of variables, to compare the findings of cdt and the pc algorithm. in total 50 data sets are used, and each data set contains 10k samples. the data sets are generated using the tetrad tool (http://www.phil.cmu.edu/tetrad/). to create a data set, in tetrad we first randomly generate a causal bayesian network structure with the specified number of variables (20, 40, 60, 80 or 100) and randomly select a node with a specified degree (i.e. number of parent and child nodes, in the range of 3 to 7) as the outcome attribute for the data set. the conditional probability tables of the causal bayesian network are also randomly assigned. the data set is then generated using the built-in bayes instantiated model (bayes im) based on the conditional probability tables. the ground truth for the data is the set of nodes directly connected to the outcome variable in the causal bayesian network structure. we then apply cdt and pc to each of the 50 data sets, and for each group of data sets the average recalls of the algorithms are shown in table [tableavgrecall] (part a). it can be seen that, in general, cdt detects similar percentages of causal relationships as pc, indicating that cdt has an ability comparable to that of the commonly used approach and obtains consistent results in discovering causal relationships. we are aware that the causal relationships identified by cdt are context specific, while those discovered by pc are global or context free. however, it is reasonable to assume that if a causal relationship exists with no context, it should also appear in the contexts, and these relationships have mostly been picked up by the cdts.

table [tableavgrecall]. average recall of cdt and pc on the synthetic data sets

part a (global causal relationships)
group | # data sets | # variables | recall (cdt) | recall (pc)
1     | 10          | 20          | 85%          | 75%
2     | 10          | 40          | 77%          | 79%
3     | 10          | 60          | 89%          | 78%
4     | 10          | 80          | 94%          | 90%
5     | 10          | 100         | 85%          | 94%

part b (context specific causal relationships)
group | # data sets | # variables | recall (cdt) | recall (pc)
6     | 10          | 20          | 81%          | n/a
7     | 10          | 40          | 77%          | n/a
8     | 10          | 60          | 85%          | n/a
9     | 10          | 80          | 89%          | n/a
10    | 10          | 100         | 78%          | n/a

in order to test the performance of cdt in finding context specific causal relationships, we also use 5 groups of synthetic data sets, each group containing 10 data sets with the same number of variables (20, 40, 60, 80 or 100). to create a data set, e.g. with 20 binary variables @xmath35, we first create a causal bayesian network structure that contains only one edge, e.g. between @xmath36 and @xmath37, with all other nodes being isolated nodes. based on this structure, we use logistic regression to simulate the data set for the bayesian network. one of the two causally related variables, e.g. @xmath37, is chosen as the outcome variable; then @xmath36 in this example is the ground truth global causal node of @xmath37. however, we do not know the ground truth of the context specific causal relationships around @xmath37. our solution is to use @xmath36 as the context variable and apply pc-select @xcite (also known as pc-simple @xcite) to the two partitions of the data set respectively, one partition containing all the samples with (@xmath38) and one containing all the samples with (@xmath39) (while the @xmath36 column is excluded).
in this way, we identify the variables that are causally related to @xmath37 within each of the two contexts, (@xmath38) and (@xmath39), and use the findings as the ground truth of the context specific causal relationships around @xmath37 in the data set. pc-select is a simplified version of the pc algorithm for finding causal relationships around a given outcome variable. we then apply cdt to each of the 50 data sets generated. the cdt built from each of the data sets always has the node that is causally related to the outcome attribute as its root, i.e. the cdt correctly finds the global causal relationship. moreover, each of the cdts also contains context specific causal relationships. we did not prune the cdts in these data sets, since some randomly generated data sets have skewed distributions, which makes the pruning too aggressive; we will design a pruning strategy for skewed data sets in future work. table [tableavgrecall] (part b) summarises the average recall of cdt in finding the context specific causal relationships. from the table, cdt is able to discover the majority of the context specific causal relationships. pc, in contrast, does not find any context specific causal relationships in the data sets, since it is not designed for this purpose. if we wanted to use pc to find the context specific causal relationships, we would have to run pc on each context specific data set, which is impractical. cdt, on the other hand, can find context specific causal relationships in the complete data sets. we test the scalability of the cdt algorithm by comparing it with the c4.5 @xcite algorithm implemented in weka @xcite and with the pc algorithm @xcite. we use 12 synthetic data sets generated with the same procedure as for the data sets in section [exp_cdt_causal]-1). to be fair among data sets, we chose nodes with the same degree as the target variables. the comparisons were carried out on the same desktop computer (quad core cpu, 3.4 ghz, 16 gb of memory), and the results are shown in figure [fig_scalability]. the run time of cdt is almost linear in the size of the data sets and the number of attributes; it is less efficient than c4.5 but more efficient than pc. the results show that the proposed cdt is practical for high dimensional and large data sets. in this section, we differentiate our cdts from other causal trees derived from causal bayesian networks, including the conditional probability table tree (cpt-tree) @xcite and the causal explanation tree @xcite. a causal bayesian network (cbn) @xcite consists of a causal structure, a directed acyclic graph (dag) with nodes and arcs representing random variables and the causal relationships between the variables respectively, and a joint probability distribution of the variables. given the dag of a cbn, the joint probability distribution can be represented by a set of conditional probabilities attached to the corresponding nodes (given their parents). a cbn provides a graphical visualisation of causal relationships and a reasoning machinery for deriving new knowledge (effects) when evidence (changes of causes) is fed into the given network, as well as a mechanism for learning causal relationships in observational data. in recent decades, cbns have emerged, especially in the areas of machine learning and data mining, as a core methodology for causal discovery and inference in data.
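looking back at the experimental evaluation, the recall figures in table [tableavgrecall] can be reproduced conceptually in a few lines (hypothetical python; the example sets are invented): for each data set, recall is the fraction of the ground-truth causal variables of the outcome that the algorithm recovers, averaged over the data sets in a group.

```python
def recall(found: set, truth: set) -> float:
    """fraction of the ground-truth causal variables that were recovered."""
    return len(found & truth) / len(truth) if truth else 0.0

# toy example: ground truth and discoveries for three synthetic data sets in one group
ground_truths = [{"x1", "x4", "x7"}, {"x2", "x3"}, {"x5", "x6", "x9", "x11"}]
discovered    = [{"x1", "x4"},       {"x2", "x3"}, {"x5", "x9", "x12"}]

per_dataset = [recall(f, t) for f, t in zip(discovered, ground_truths)]
print("average recall:", round(sum(per_dataset) / len(per_dataset), 2))
```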
a cbn depicts the relationships between all attributes under consideration, and it can be complex when the number of attributes is more than just a few. for example, it takes some effort to understand the cbn in figure [fig_adultbayesinnetwork] learnt from the adult data set, even though there are only 14 attributes in the data set. a cbn does not give a simple model to explain the causes of an outcome, as our cdt does. the conditional probability table tree (cpt-tree) @xcite is designed to summarise the conditional probability tables of a cbn for concise presentation and fast inference. an example of a cpt-tree is shown in figure [fig_cpttree]. the probabilistic dependence relationships among the outcome @xmath1 and its parent nodes @xmath40 and @xmath41 (the causes of @xmath1) are specified by a conditional probability table, in which the probabilities of @xmath1 given all value assignments of its parents are listed. the size of a conditional probability table is exponential in the number of parent nodes of @xmath1 and can be very large. for example, for 20 parent nodes, the conditional probability table will have 1,048,576 rows. such a table is difficult to display, and inference based on the table is inefficient too. given a context, i.e. one or more parent nodes taking a particular value assignment, the probability of @xmath1 may be constant (without being affected by the values of the other parents). a conditional probability table can therefore be represented compactly with a tree structure, called a conditional probability table tree (cpt-tree), as illustrated in figure [fig_cpttree]. there are two major differences between a cpt-tree and a cdt. firstly, cpt-trees are built from cbns, whereas cdts are built from data sets directly. before building a cpt-tree, we already know the causal relationships, and the cpt-tree specifies how the assignments of some causal variables link to outcome values. this is impractical in many real world applications, since we do not know the cbn or cannot build a cbn from a data set, particularly a large data set, as existing algorithms for learning cbns cannot handle a large number of variables and often only produce a partially oriented cbn. secondly, in a cbn, the parents of a node @xmath1 are all global causes of @xmath1. as a cpt-tree is derived from a cbn, all the variables included in a cpt-tree are global causes. however, as discussed previously, it is possible that a variable becomes causally related to @xmath1 only under a certain context. for example, in the titanic data set, the 'thirdclass' cabin is not a causal factor in the whole data set (i.e. it is not a global cause of 'survived'), but it becomes a causal factor in the context of female passengers/crew. such causal relationships will not be discovered or represented by a cbn, and thus not by cpt-trees either, but they can be revealed and represented by our cdts. a causal explanation tree @xcite aims at explaining the outcome values using a series of value assignments of a subset of attributes in a cbn. a series of value assignments of attributes forms a path of a causal explanation tree, and a path is determined by causal information flow.
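as an aside on the cpt-tree representation discussed above, the exponential blow-up of a full conditional probability table, and the compression a context-specific tree can offer, can be shown with a toy sketch (hypothetical python; the tiny cpt and its tree encoding are invented for illustration and are not the cited authors' data structures).

```python
from itertools import product

# full cpt for p(y=1 | a, b, c): one row per parent assignment, 2**k rows for k parents
parents = ["a", "b", "c"]
full_cpt = {assignment: 0.2 for assignment in product([0, 1], repeat=len(parents))}
# suppose p(y=1 | ...) is 0.9 whenever a=1, regardless of b and c (a context-specific constancy)
for assignment in full_cpt:
    if assignment[0] == 1:
        full_cpt[assignment] = 0.9
print("rows in full cpt:", len(full_cpt))  # 2**3 = 8 (would be 1,048,576 for 20 parents)

# a cpt-tree exploits that constancy: branch on 'a' first and stop early when the probability is constant
cpt_tree = {
    "a=1": 0.9,  # leaf: probability is the same for every b, c in this context
    "a=0": {"b=0": {"c=0": 0.2, "c=1": 0.2}, "b=1": {"c=0": 0.2, "c=1": 0.2}},
}
print("leaves in cpt-tree:", 1 + 4)  # 5 leaves instead of 8 rows
```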
the assignment of a set of attributes along a path represents an intervention in the causal inference in a cbn. the causal interpretation is based on the causal information flow criterion used for building a causal explanation tree. however, this method is impractical since, as explained previously, we do not have a cbn in most real world applications. similarly, a causal explanation tree cannot capture the context specific causal relationships encoded in a cdt, because the explanation tree is obtained from a cbn, which only encodes global causes. causal discovery is based on assumptions. in the causal bayesian network discovery framework, assumptions such as the causal markov condition, faithfulness and causal sufficiency @xcite are used to ensure the causal semantics of the discoveries. simply speaking, the markov condition requires that every edge in a causal bayesian network implies a probabilistic dependence. the faithfulness assumption ensures that for two variables that are probabilistically dependent, there is a corresponding edge between the two variables in the causal bayesian network. causal sufficiency assumes that there are no unmeasured or hidden causes in the data. up to now, we have not explicitly discussed our causal assumptions; however, we do need certain assumptions, which are discussed in the following. the causal interpretation of a cdt is ensured by evaluating, in the stratified data sets, the difference in the potential outcomes associated with a possible causal attribute @xmath42. in each stratum, the individuals are indistinguishable, i.e. the attributes possibly affecting the outcome @xmath1 take the same values, and they therefore do not affect the estimation of the causal effect of @xmath42 on @xmath1. consequently, the causal effect estimated using the stratified data sets approaches the true causal effect. an assumption here is that the differences between individuals are captured by the set of attributes used for stratification. this assumption implies causal sufficiency, i.e. that all causes are measured and included in the data set. a naive choice is to use all attributes other than the attribute being tested (@xmath42) and the outcome (@xmath1) for stratification. however, this is not workable for high dimensional data sets, since many strata will contain very few or no samples when the number of attributes is large; as a result, the cdt algorithm may not find any causal relationship. for example, diverse information, such as demographic information, education, hobbies and liked movies, may be collected as a personal profile in a data set. if all the attributes are used for stratification, they reduce the chance of finding sizable strata for reliable discovery. in fact, it is unwise to use irrelevant attributes, such as hobbies and liked movies, for stratification when the objective is to study, e.g., the causal effect of a treatment on a disease. a reasonable and practical choice of stratifying attributes is the set of attributes that may affect the outcome, called the relevant attributes in this paper. differences in irrelevant attributes that do not affect the outcome should not impact the estimation of the causal effect of the studied attribute on the outcome. for example, different hobbies and liked movies should not affect the estimation of the causal effect of a treatment on a disease.
therefore, only the relevant attributes should be used for stratifying a data set, and this is what we have used in the algorithm for building a cdt. in case there are many relevant variables, which may result in many small strata, we restrict the maximum number of relevant variables to ten according to the strength of correlation. the purpose of this work is to design a fast algorithm to find causal signals in data automatically, without user interaction. we therefore tolerate a certain number of false positives and expect that a real causal relationship will be refined by a dedicated follow-up observational study. in many real world studies, the stratification is based on a limited number of demographic attributes, e.g. gender, age group and residential area. in a health study, for example, it is very difficult to recruit volunteers with the same background (age, diet, education, etc.), and stratification on more than a few attributes is simply impractical; considering some stratifying attributes is better than considering none. although a cdt may not confirm causal relationships, it helps practitioners with the discovery of causal relationships in the following ways: (1) because of stratification, many spurious relationships that are definitely not causal will be excluded from the resulting cdts, so practitioners have a smaller set of quality hypotheses for further study; (2) context specific causal relationships are more difficult to observe than global causal relationships, and cdts are useful for finding hidden context specific causal hypotheses.
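the partial association test over the strata defined by the relevant attributes, mentioned earlier as the mantel-haenszel test, can be sketched with a hand-rolled cochran-mantel-haenszel statistic on the 2x2 tables of the strata (hypothetical python; the toy strata are invented, and this is the standard cmh chi-square with continuity correction rather than the authors' exact implementation).

```python
def mantel_haenszel_chi2(strata):
    """strata: list of 2x2 tables ((a, b), (c, d)) with a = count of (x=1, y=1)."""
    num, var = 0.0, 0.0
    for (a, b), (c, d) in strata:
        n = a + b + c + d
        if n <= 1:
            continue  # degenerate stratum contributes nothing
        expected_a = (a + b) * (a + c) / n
        num += a - expected_a
        var += (a + b) * (c + d) * (a + c) * (b + d) / (n * n * (n - 1))
    return (abs(num) - 0.5) ** 2 / var  # compare against a chi-square with 1 degree of freedom

# two hypothetical strata (e.g. young / old) for a candidate branching attribute x and outcome y
strata = [((30, 10), (20, 20)),   # stratum 1: counts of (x=1,y=1), (x=1,y=0), (x=0,y=1), (x=0,y=0)
          ((15, 25), (5, 35))]    # stratum 2
print("cmh chi-square:", round(mantel_haenszel_chi2(strata), 2))
```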
discovering causal relationships in passively observed data has attracted enormous research effort in the past decades, due to the high cost, low efficiency and unknown feasibility of experiment based approaches, as well as the increasing availability of observational data. thanks to the theoretical developments of a group of statisticians, philosophers and computer scientists, including pearl @xcite, spirtes, glymour @xcite and others, graphical causal models have come to play a dominant role in causal discovery. among these graphical models, causal bayesian networks (cbns) @xcite have been the most developed and widely used. many algorithms have been developed for learning cbns @xcite. however, in general, learning a complete cbn is np-hard @xcite, and the methods are able to handle a cbn with only tens of variables, or hundreds if the causal relationships are sparse @xcite. consequently, local causal relationship discovery around a given target (outcome) variable has been actively explored recently, as in practice we are often more interested in knowing the direct causes or effects of a variable, especially in the early stage of an investigation. the work presented in this paper is along this line of local causal discovery. existing methods for local causal discovery around a given target fall into two broad categories: (1) methods that adapt the algorithms or ideas for learning a complete cbn to local causal discovery, such as pc-simple @xcite, a simplified version of the well-known pc algorithm @xcite for cbn learning, and hiton-pc @xcite, which applies the basic idea of pc to find variables strongly (and causally) related to a given target; (2) methods that are designed to exploit the high efficiency of popular data mining approaches and the causal discovery ability of traditional statistical methods, including the work in @xcite and @xcite, both using association rule mining for identifying causal rules, and the decision tree based approach @xcite for finding the markov blanket of a given variable. the cdt proposed in this paper belongs to the second category, as it takes advantage of decision tree induction and partial association tests. compared to other methods in this category, however, the proposed cdt approach is distinct because it aims at finding a sequence of causal factors (the variables along the path from the root to a leaf of a cdt) in which a preceding factor is a context under which the following factors can have an impact on the target, while the other methods identify a set of causal factors, each being a cause or an effect of the given target, and only discover global causal relationships. in practice, however, a variable may not be a cause of another variable globally but may affect it under a certain context; a cdt provides a way to identify such context specific causal relationships. additionally, because a context specific causal relationship contains information about the conditions under which the causal relationship holds, such relationships are more prescriptive and actionable, and are thus more suitable for decision support and action planning. in terms of using decision trees as a means for causality investigation, apart from the above mentioned method for identifying markov blankets @xcite, most existing work takes decision trees as a tool for causal relationship representation and/or inference, assuming that the causal relationships are known in advance. examples include the cpt-trees @xcite and the causal explanation tree @xcite introduced in the discussion section, which are both derived from a known causal bayesian network. unlike these trees, our cdt is mainly used as a tool for detecting causal relationships in data, without any assumption of known causal relationships. in this paper, we have proposed causal decision trees (cdts), a novel model for representing and discovering causal relationships in data. a cdt provides a compact and precise graphical representation of the causal relationships between a set of predictive attributes and an outcome attribute. the context specific causal relationships represented by a cdt are of great practical use, and they are not encoded by existing causal models. the algorithm developed for constructing a cdt utilises the divide and conquer strategy used for building a normal decision tree and thus is fast and scalable to large data sets. the criterion used for selecting the branching attributes of a cdt is based on the well established potential outcome model and partial association tests, ensuring the causal semantics of the tree.
given the increasing availability of big data , we believe that the proposed cdts will be a promising tool for the automated discovery of causal relationships in big data , and will thus support better decision making and action planning in various areas . c. f. aliferis , a. statnikov , i. tsamardinos , s. mani , and x. d. koutsoukos . local causal and markov blanket induction for causal discovery and feature selection for classification part i : algorithms and empirical evaluation . , 11:171 - 234 , 2010 . l. frey , d. fisher , i. tsamardinos , c. aliferis , and a. statnikov . identifying markov blankets with decision tree induction . in data mining , 2003 ( icdm 2003 ) , third ieee international conference on , pages 59 - 66 , 2003 .
uncovering causal relationships in data is a major objective of data analytics . causal relationships are normally discovered with designed experiments , e.g. randomised controlled trials , which , however , are expensive or infeasible to conduct in many cases . causal relationships can also be found using well - designed observational studies , but these require domain experts ' knowledge and the process is normally time consuming . hence there is a need for scalable and automated methods for causal relationship exploration in data . classification methods are fast and could be practical substitutes for finding causal signals in data . however , classification methods are not designed for causal discovery , and a classification method may find false causal signals and miss the true ones . in this paper , we develop a causal decision tree whose nodes have causal interpretations . our method follows a well established causal inference framework and makes use of a classic statistical test . the method is practical for finding causal signals in large data sets . keywords : decision tree , causal relationship , potential outcome model , partial association
according to a standard picture of the mixed state in bulk type - ii superconductors the abrikosov vortices penetrating the homogeneous sample form a periodic arrangement called a flux lattice @xcite . the magnetic flux through the unit cell of such flux line lattice equals to the flux quantum @xmath1 : we have one vortex per unit cell . there are a few examples of rather exotic superconducting systems which may provide a possibility to observe a different vortex lattice periodicity , namely the structures with more than one vortices per unit cell . in particular , the phase transitions to such multiquanta flux lattices can occur , e.g. , for superconductors with unconventional pairing @xcite or 2d fulde - ferrell - larkin - ovchinnikov superconductors @xcite . the goal of this work is to suggest an alternative scenario of the phase transitions between the flux structures with different number of vortices per unit cell which can be realized in thin films of anisotropic superconductors . the underlying physical mechanism for this scenario arises from the interplay between the long range attraction and repulsion between tilted vortex lines in thin films discussed recently in ref . . the unusual attractive part of the vortex vortex interaction potential is known to be a distinctive feature of anisotropic superconductors and the value of the attractive force is controlled by the tilting angle of the vortex line with respect to the anisotropy axis @xcite . the origin of the long range intervortex repulsion in thin films has been analyzed in the pioneering work @xcite by pearl in 1964 . this repulsion force always overcomes the attraction at rather large distances because of the different power decay laws of these contributions . note that , of course , the short range interaction between vortices is also repulsive . finally , this balance between the repulsion and attraction can result in the formation of the nonmonotonic interaction potential @xmath2 vs the intervortex distance @xmath3 . increasing the vortex tilting angle we first strengthen the attraction force between vortices and , thus , the minimum in the vortex interaction potential can appear only for rather large tilting angles when the attraction overcomes the pearl s repulsion . this minimum shifts towards the larger intervortex distances with the further increase in the tilting angle and , finally , at rather large distances the attraction appears to be suppressed due to the exponential screening effect . as a consequence , the minimum in the interaction potential exists only for a certain restricted range of the vortex tilting angles which shrinks with the decrease of the system anisotropy parameter . the appearance of a minimum in the interaction potential points to the possibility to get a bound vortex pair ( or even the clusters with higher vorticities ) for a certain range of vortex tilting angles . for a flux line lattice such vortex vortex interaction potential can cause an instability with respect to the unit cell doubling , i.e. the phase transition to the multiquanta vortex lattices . in this paper we use two theoretical approaches to describe the peculiarities of the intervortex interaction and resulting formation of clusters and multiquanta lattices . one of them is a standard london model accounting for an anisotropic mass tensor which is adequate for the superconductors with moderate anisotropy . 
this approach assumes that the superconducting coherence length in all directions exceeds the distance between the atomic layers and obviously breaks down in the limit of strong anisotropy , i.e. , for josephson coupled layered structures . in the latter case we choose to apply another phenomenological model , namely the so called lowrence doniach theory @xcite . for rather small intervortex distances this theory can be simplified neglecting the effects of weak interlayer josephson coupling . this approach of josephson decoupled superconducting layers is known to be useful in studies of the vortex lattice structure at low fields @xcite . considering thin film samples in tilted magnetic fields we do not restrict ourselves by the case of only straight vortex lines and study the problem of the energetically favorable vortex line shape in the presence of the inhomogeneous supercurrent screening the field component @xmath4 parallel to the film plane . previously this problem has been addressed in ref . for rather small deviations of the vortex line from the direction normal to the film plane . such approximation is obviously valid only for the @xmath5 values much smaller than the critical field @xmath6 of the penetration of vortices parallel to the film plane . for anisotropic london model this analysis of ref . has been previously generalized for the case of a strongly distorted vortex line ( see ref . ) . for the sake of completeness we present here the calculations of the shape of an isolated vortex line for arbitrary fields @xmath7 within both theoretical models describing the limits of strong and moderate anisotropy . as a next step , we calculate the vortex - vortex interaction potential for such strongly deformed vortex lines . further analysis in the paper includes the calculations of energy of finite size vortex clusters as well as the energy of vortex lattices with different number of vortices per unit cell . experimentally the visualization of unconventional vortex arrangements could be carried out by a number of methods which provided convincing evidence for the existence of vortex chains in bulk anisotropic superconductors caused by the intervortex attraction phenomenon ( such as the decoration technique in @xmath8 @xcite , scanning - tunneling microscopy in @xmath9 @xcite , scanning hall - probe @xcite and lorentz microscopy measurements in @xmath8 @xcite ) . the paper is organized as follows . in sec . ii we find the energetically favorable shape of an isolated vortex line . in sec . iii we calculate the vortex vortex interaction potential and prove the existence of a potential minimum for a certain range of field tilting angles and parameters . the sec . iv is devoted to the calculation of energy of vortex clusters . finally , in sec . v we present our analysis of the phase transition between the vortex lattices with one and two flux quanta per unit cell . the results are summarized in sec . vi . some of the calculation details are presented in the appendices [ apx - a ] and [ apx - b ] . we start our study of the distinctive features of equilibrium vortex structures in thin films of anisotropic superconductors with the consideration of the vortex line shape in the layered systems . let us consider a finite stack of @xmath10 superconducting ( sc ) layers . vortex line of an arbitrary shape pierces the film and can be viewed as a string of 2d pancake vortices : each of these pancakes is centered at the point @xmath11 in the @xmath12-th layer . 
within the model of the stack of josephson decoupled sc layers , pancakes can interact with each other only via magnetic fields . we denote the interlayer spacing as @xmath13 and consider each of the @xmath10 layers as a thin film with the thickness @xmath14 much less than the london penetration depth @xmath15 . general equation for the vector potential @xmath16 distribution in such system reads @xmath17 where @xmath18 is the effective penetration depth in a superconducting film of a vanishing thickness @xmath14 , each @xmath19th sc layer coincides with the plane @xmath20 ( @xmath21 ) , the sheet current at the @xmath19th layer created by the pancake at @xmath22th layer takes the form @xmath23\ , \ ] ] @xmath24 is the vector potential induced by the only pancake vortex located in the @xmath22th layer ( fig . 1 ) . the vector @xmath25 in the eq . ( [ eq:2 ] ) is given by the expression @xmath26 } { \mathbf{r}^2 } \,,\ ] ] and @xmath27 is the flux quantum . for the layered system without josephson coupling a general expression for the free energy can be written in the form : @xmath28\,.\ ] ] where the total vector potential @xmath29 and the sheet current in the @xmath19th layer @xmath30 , produced by an arbitrary vortex line are the sum of the contributions induced by all 2d pancakes : @xmath31 to find the magnetic vector potential @xmath32 we adopt an approach similar to that in refs . . between the sc layers the vector potential @xmath33 is described by the laplace equation @xmath34 for the gauge @xmath35 the vector potential has only the in - plane components @xmath36 , where @xmath37 and the function @xmath38 can be written as @xmath39 / \sinh(q s)\ , , \\ \qquad z_n < z < z_{n+1},\ ; n=1\ldots n-1 \ , , \\ \\ \alpha_n^m\,\exp\left(-q(z - z_n)\right)\,,\quad z \ge z_n \ , , \\ \\ \alpha_1^m\,\exp\left(q(z - z_1)\right)\,,\quad z \le z_1 \ , . \end{array } \right.\ ] ] taking the fourier transform of eq . ( [ eq:2 ] ) we find : @xmath40,\ ] ] where @xmath41}{q^2 } \ .\ ] ] the sheet current density @xmath42 results in the discontinuity of the in - plane component of the magnetic field @xmath43 across the @xmath12 layer : @xmath44 = \mathbf{z}_0 \times \left.\left [ \mathbf{z}_0 \times \frac{\partial \mathbf{a}^m}{\partial z } \right]\right|_{z_n-0}^{z_n+0 } \,.\ ] ] substituting the expressions ( [ eq:5 ] ) , ( [ eq:6 ] ) , ( [ eq:7 ] ) into above condition ( [ eq:9 ] ) we obtain the system of linear equations for the coefficients @xmath45 : @xmath46 here we introduce two new functions which depend on the wave number @xmath47 : @xmath48 the solution of the linear system ( [ eq:10 ] ) and the eqs . ( [ eq:5 ] ) , ( [ eq:6 ] ) define the distribution of the vector potential @xmath24 which is created by a single pancake vortex positioned in the @xmath22th layer . without the in - plane external magnetic field @xmath49 the relative displacement of the pancakes in the different layers is absent : @xmath50 ( i.e. the pancakes form a vertical stack ) . a rather small magnetic field @xmath51 induces a screening meissner current @xmath52 in each @xmath19th layer . lorentz forces arising from these currents will move the pancakes from their initial positions . taking into account the eq . ( [ eq:9 ] ) , we find the following system of linear equations describing the distribution of the magnetic field screened by the layered structure : @xmath53 here @xmath54 is the magnetic field value between the @xmath19th and @xmath55th layers . 
the distribution of the meissner screening currents in the layers takes the form : @xmath56 the resulting lorentz forces @xmath57 acting on the pancakes can be written as follows : @xmath58 = ( \phi_0 / \ , c ) j_n^m \mathbf{y}_0\,.\ ] ] the interaction forces between the pancakes can be found using the expression ( [ eq:7 ] ) for the sheet current @xmath59 generated by the pancake positioned in the @xmath22th layer : @xmath60 = \frac{\phi_0 ^ 2}{8\pi^2\lambda\lambda_{ab } } \left\{\ , % \frac{1}{r_{mk}}\,\delta_{mk } - \int\limits_0^\infty d q\ , j_1(q r_{mk})\ , \frac { \alpha_k^m(q)\,g(q)}{z(q)}\ , \right\}\ , \frac{\mathbf{r}_{mk}}{r_{mk}}\,,\ ] ] where @xmath61 is the first - order bessel function of the first kind , @xmath62 is the penetration depth for the in - plane currents , and @xmath63 in order to find the equilibrium form of the vortex line in a finite stack of @xmath10 superconducting layers under the influence of the in - plane external magnetic field @xmath49 , we consider the relaxation of the set of the pancakes towards the equilibrium positions within the simplest version of the dynamic theory : @xmath64 where @xmath65 is the viscous drag coefficient . considering the vortex line consisting of @xmath66 pancakes we start from the initial configuration of pancakes arranged in a straight vortex line parallel to the @xmath67 direction ( see fig . 2 ) . as the system approaches its final force - balanced ( equilibrium ) configuration , the velocities of all pancake motions should vanish : @xmath68 in fig . 2 we illustrate the evolution of the pancake configurations for two values of the applied in plane magnetic field @xmath49 and for two different numbers of layers : @xmath66 ( fig . 2a , 2b ) and @xmath69 ( fig . 2c , 2d ) . the forces @xmath57 caused by the meissner currents rotate and bend the vortex line . for rather small applied field values this rotation and bending result in the formation of a certain stable configuration ( see fig . 2a , 2c ) . for the fields exceeding a certain critical value @xmath70 we do not find such stable pancake arrangement . the vortex line splits into two segments which move in opposite directions ( see fig . 2b , 2d ) . to define the critical value @xmath70 for the breakup of the vortex line we have carried out the calculations of the pancake arrangements increasing the in - plane magnetic field with the step @xmath71 ( @xmath72 ) . the stationary vortex arrangement disappears above a certain threshold field value which is taken as a critical field @xmath70 . the pancake configurations for the both cases @xmath66 and @xmath69 are qualitatively similar but the values of the critical field @xmath70 for @xmath66 and @xmath69 are different . with a decrease in the number of layers @xmath10 ( film thickness ) the value of the critical field @xmath70 grows : @xmath73 for @xmath66 and @xmath74 for @xmath69 . in fact in layered superconductors with very weak interlayer coupling the josephson vortices will appear at much lower field @xmath75 . as a result , at tilted magnetic field , crossing lattice of pancakes , forming abrikosov vortices , and josephson vortices exist rather than a lattice of tilted vortex stacks @xcite . the interaction between pancakes and in - plane field in the form of josephson vortices produces zigzag deformation of the stack of the pancakes @xcite . this deformation is responsible for a long range attraction between such stacks @xcite which is quite similar to the case of considered in the present work . 
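the relaxation procedure used above to find the force - balanced pancake configuration can be sketched as a simple overdamped update of the in - plane pancake coordinates , iterated until the net force on every pancake is negligible . the snippet below is only a schematic illustration of this idea : the force routine is a placeholder standing in for the actual meissner - current and pancake - pancake forces of the text , and all parameters are arbitrary .

```python
import numpy as np

def relax_pancakes(force, y0, eta=1.0, dt=1e-2, tol=1e-8, max_steps=200_000):
    """Overdamped relaxation  eta * dy/dt = F(y)  for the in-plane pancake
    coordinate y[n] of each layer, iterated until the forces are negligible.
    `force` is a user-supplied callable returning the total force on each pancake."""
    y = np.array(y0, dtype=float)
    for _ in range(max_steps):
        f = force(y)
        y += dt * f / eta
        if np.max(np.abs(f)) < tol:   # force-balanced (equilibrium) configuration reached
            return y, True
    return y, False                   # no stationary arrangement found (e.g. field above threshold)

def toy_force(y, k=1.0, h=0.05):
    """Toy force: elastic coupling between neighbouring pancakes plus an
    antisymmetric push mimicking the screening currents (illustrative only)."""
    f = np.zeros_like(y)
    f[1:] += k * (y[:-1] - y[1:])
    f[:-1] += k * (y[1:] - y[:-1])
    f += h * np.linspace(1.0, -1.0, y.size)
    return f

y_eq, converged = relax_pancakes(toy_force, np.zeros(20))
print(converged, np.round(y_eq[:5], 3))
```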
we proceed now with the consideration of the vortex line shape in an anisotropic film which is characterized by the london penetration depths @xmath76 and @xmath77 for currents flowing parallel and perpendicular to the @xmath78 plane , respectively . we consider the case of uniaxial anisotropy which can be described by a dimensionless anisotropic mass tensor @xmath79 , where @xmath80 is the anisotropy parameter and @xmath81 is the anisotropy axis . we choose the @xmath67 axis of the coordinate system perpendicular to the film surface . in the parallel to the film plane direction we apply a certain magnetic field @xmath82 which is screened inside the superconducting film . we consider first a typical geometry when the @xmath83 axis is chosen along the direction normal to the film plane . in such geometry the vortex line is parallel to the plane @xmath84 and can be parameterized by a single valued function @xmath85 . an appropriate thermodynamic potential for determination of the energetically favorable form of the vortex line takes the form @xmath86 where @xmath87 is the film thickness . the first term , @xmath88 , is the ginzburg - landau free energy of the curved vortex line , and the second term corresponds to the work of lorentz force acting on the flux line and distorting this line in the presence of screening currents induced by the external magnetic - field component @xmath89 parallel to the film plane . to simplify the @xmath88 expression we consider a strong type - ii superconducting material with a large ratio of the london penetration depths and coherence lengths . in this case the main contribution to the vortex line energy is determined by the energy of supercurrents @xmath90 flowing around the vortex core @xmath91 where @xmath92 , and @xmath93 is the order parameter phase distribution around the vortex line . the above expression for the free energy reveals a logarithmic divergence which should be cut off at both the small and large spatial length scales . the lower cut off length is naturally equal to the characteristic size @xmath94 of the vortex core which is of the order of the coherence length . of course , in anisotropic case one should introduce two different coherence lengths @xmath95 and @xmath96 in the @xmath78 plane and along the @xmath97 axis , respectively . the resulting core size and lower cut off length @xmath94 for a certain element of the tilted flux line will , thus , depend on both @xmath95 and @xmath96 lengths as well as on the local tilting angle of the vortex line . the upper cut off length strongly depends on the ratio of the film thickness to the london penetration depth . for rather thick films @xmath98 this cut off length @xmath99 is determined by a certain combination of the screening lengths @xmath76 and @xmath77 ( see , e.g. , ref . ) . for a thin film with @xmath100 one can separate two energy contributions : ( i ) the contribution coming from the region of the size @xmath101 around the curved vortex line and providing the logarithmic term in the free energy with the upper cut off length @xmath102 ; ( ii ) the logarithmic contribution @xmath103 coming from the larger distances @xmath104 which weakly depends on the vortex line shape . to sum up , the part of the vortex line energy which depends on its shape can be approximately written in the form : @xmath105 note that we neglect here the weak dependence of the logarithmic factor on the vortex line curvature and local orientation . 
within such approximation we consider the vortex line as a thin elastic string which is , of course , valid provided the characteristic scale of the string bending is larger than the upper cut off length @xmath99 . the condition of the zero first variation of the gibbs functional gives us the following equation @xmath106 where @xmath107 the equation is valid for magnetic fields which do not exceed the critical field of the penetration of vortices parallel to the film plane @xmath108 thus , analogously to the case of a stack of decoupled layers the stable curved vortex lines can exist only for rather small magnetic fields below the critical field @xmath6 which corresponds to the penetration of a vortex parallel to the film plane . note that in the limit @xmath109 one can obtain the result found previously in ref . : @xmath110 typical shape of a bent vortex line calculated from eq . ( [ anis forma d - arb ] ) is shown in fig . 3a . the above expressions can be easily generalized for an arbitrary angle @xmath111 between the anisotropy axis @xmath81 and the direction normal to the film plane . we restrict ourselves to the case when the axis @xmath81 is parallel to the plane @xmath84 and vortex line can be parameterized by a function @xmath85 as before . in this case the part of the free energy depending on the vortex line shape takes the form : @xmath112 where @xmath113=y'(z)$ ] . thus we find the following equation describing the vortex line shape : @xmath114 where @xmath115 note that the equation is valid in the field range @xmath116 the critical field @xmath117 corresponds to the penetration of a vortex parallel to the film plane . typical plots illustrating the numerical solution of the equation are shown in the fig . 3b for different orientations of the applied magnetic field . note an important difference between the opposite directions of the magnetic field @xmath89 : for @xmath118 the vortex line consists of segments tilted in opposite directions with respect to the @xmath67 axis . in this section we derive general expressions for the interaction energy between two vortices in a thin film of an anisotropic superconductor taking into account both long range attraction and repulsion phenomena . we study both the limits of strong and moderate anisotropy for a wide range of vortex tilting angles . the shape of the interacting vortex lines is assumed to be fixed and not affected by the vortex vortex interaction potential . certainly , such assumption is valid only in the limit of rather larger distances between the vortex lines when the effect of interaction on the vortex shape can be viewed as a small perturbation . in this section we consider the interaction between two vortex lines consisting of pancake vortices . for each vortex the pancake centers are assumed to be positioned along the straight line tilted at the angle @xmath119 with respect to the anisotropy axis @xmath120 ( @xmath67 axis ) . we restrict ourselves to the case of parallel vortex lines shifted by a certain vector @xmath121 in the plane of the layers . using the gauge @xmath122 and the fourier transform @xmath123 @xmath124 one can rewrite the basic equation ( [ eq:1 ] ) in the momentum representation as follows : @xmath125 where @xmath126 . 
taking account of the relation @xmath127 we obtain from ( [ eq:2c ] ) the following equations for the fourier components of the vector potential @xmath128 : @xmath129 these equations can be reduced to the scalar form @xmath130 where we introduce the new functions @xmath131 : @xmath132 the solution of the linear system ( [ eq:4c ] ) for a fixed distribution of pancakes @xmath133 determines the distribution of the vector potential @xmath29 which is created by an arbitrary vortex line in a finite stack of superconducting layers . in the momentum representation the general expression ( [ eq:6c ] ) for the free energy of the layered system without josephson coupling reads @xmath134 for two vortex lines we can write the total vector potential and the total sheet current as superpositions of contributions coming from the first ( @xmath135 , @xmath136 ) and second ( @xmath137 , @xmath138 ) vortices : @xmath139 calculating the interaction energy @xmath140 of vortex lines we should keep in ( [ eq:7c ] ) only the terms which contain the products of fields corresponding to different vortex lines : @xmath141\,.\ ] ] finally , for the particular case of two parallel vortex lines which are shifted at the vector @xmath142 in the @xmath143 plane we get following expression for the interaction energy via the scalar functions @xmath131 : @xmath144 the expression ( [ eq:9c ] ) and equations ( [ eq:4c ] ) determine the interaction energy of two identically bent vortex lines . our further consideration in this subsection is based on two assumptions : ( i ) for each vortex we choose the centers of pancakes to be positioned along the straight line tilted at a certain angle @xmath119 relative to @xmath67 axis , and put @xmath145 ; ( ii ) we restrict ourselves by the continuous limit assuming @xmath146 and @xmath147 . in this case the eqs . ( [ eq:4c ] ) , ( [ eq:9c ] ) can be simplified ( see appendix [ apx - a ] for details ) : @xmath148 } { \sqrt{q^2 + \lambda_{ab}^{-2}}\,(1 + p^2)^2 \left [ 2 k \cosh l + ( 1 + k^2)\sinh l)\right ] } \right\}\ , , \label{eq:11c}\end{aligned}\ ] ] where @xmath149 and @xmath150 is the thickness of the superconducting film . the first term in ( [ eq:11c ] ) describes the interaction in the bulk system , while the second term is responsible for the effect of film boundaries . the minimum energy configuration corresponds to the case @xmath151 . in fig . 4 we present some typical plots of the interaction energy @xmath152 vs the distance @xmath153 for @xmath154 which corresponds to the lorentz microscopy experiments in ybco @xcite and bi-2212 @xcite samples . analyzing the dependence @xmath155 one can separate three contributions to the energy of vortex vortex interaction : ( i ) a short range repulsion which decays exponentially with increasing intervortex distance @xmath3 ( for @xmath156 ) ; ( ii ) an intervortex attraction which is known to be specific for tilted vortices in anisotropic systems ; this attraction energy term decays as @xmath157 and strongly depends on the angle @xmath158 between the vortex axis and the @xmath120 direction ; ( iii ) long range ( pearl ) repulsion which decays as @xmath159 and results from the surface contribution to the energy . note that the third term does exist even for a large sample thickness @xmath87 ( see ref . @xcite ) although in the limit @xmath160 it is certainly masked by the dominant bulk contribution . 
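the pair energies in this section are expressed as integrals over the wave number of a spectral function [ see , e.g. , eqs . ( [ eq:10c ] ) , ( [ eq:11c ] ) ] . the sketch below shows one simple way to evaluate an integral of the generic bessel - transform form , which is the kind of transform appearing in the appendices ; the spectral function used here is a crude placeholder , not the actual kernel of the text .

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.special import j0

def pair_energy(r, U, qmax=50.0, nq=20000):
    """Evaluate E(r) = integral_0^qmax U(q) * J0(q*r) dq on a dense grid.
    U(q) stands in for the spectral function of the interaction energy and
    must be supplied by the caller; the cut-off qmax is a numerical choice."""
    q = np.linspace(1e-6, qmax, nq)
    return trapezoid(U(q) * j0(q * r), q)

# placeholder spectral function: a screened contribution minus a slowly
# decaying one (illustrative only, not the kernel derived in the text)
U = lambda q, lam=1.0: 1.0 / np.sqrt(q**2 + 1.0 / lam**2) - 0.8 / (q + 1.0)

for r in (0.5, 1.0, 2.0, 5.0, 10.0):
    print(f"r = {r:5.1f}  E(r) = {pair_energy(r, U):+.4f}")
```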
at @xmath161 the short range interaction term vanishes and the interaction energy vs @xmath162 takes the form @xmath163where @xmath164 is the effective film thickness . one can observe here an interplay between the long - range attractive ( first term in eq.([energy - interact ] ) ) and the repulsive ( second term in eq.(energy - interact ) ) forces . note that the @xmath165 value increases with an increase in temperature , thus , the effective thickness decreases and the long range attraction force appears to be suppressed with increasing temperature . for large @xmath3 the energy is always positive and corresponds to the vortex repulsion similar to the one between the pancakes in a single layer system . with a decrease in the distance @xmath3 the attraction force comes into play resulting in the change of the sign of the energy . such behavior points to the appearance of a minimum in the interaction potential . we now proceed with the derivation of the expression for the intervortex interaction energy in an anisotropic superconducting film . we choose the anisotropy axis @xmath120 ( @xmath166axis ) to be oriented perpendicular to the film plane and consider two curved vortex lines with identical shapes found in sec . ii b. our further calculations are based on general expressions derived in ref . for the energy of an arbitrary vorticity distribution in an anisotropic superconducting film of finite thickness ( see appendix [ apx - b ] for details ) . the resulting interaction energy of two curved vortices shifted from each other in the @xmath167 direction at a certain distance @xmath3 can be presented in the form : @xmath168 where @xmath169 , while @xmath170 , @xmath171 , and @xmath172 are given by the expressions ( [ eq : b7 ] ) , ( [ eq : b8 ] ) , ( [ eq : b9 ] ) . considering the particular case of straight vortex lines parallel to the plane @xmath173 and tilted at a certain angle @xmath158 with respect to the @xmath120 direction we obtain the following expression for the interaction energy of two vortices : @xmath174 } { \sqrt{q^2 + \lambda_{ab}^{-2}}\,(1 + p^2)^2 \left [ 2 k \cosh l + ( 1 + k^2)\sinh l)\right ] } \right\}\ , , \label{straight - tilted2}\end{aligned}\ ] ] where @xmath175 and the parameters @xmath176 , @xmath177 and @xmath178 are described by the expressions ( [ eq:10c1 ] ) . in the limit of strong anisotropy ( @xmath179 ) the spectral function @xmath180 ( [ straight - tilted2 ] ) naturally coincides with the corresponding expressions ( [ eq:11c ] ) obtained for the layered system without josephson coupling . some typical plots of the interaction energy vs the intervortex distance for tilted vortex lines calculated using the eqs . ( [ straight - tilted]),([straight - tilted2 ] ) are shown in figs . [ fig:5],[fig:51 ] . analyzing the dependence @xmath181 one can separate three contributions to the energy of intervortex interaction : ( i ) a short range repulsion ( for @xmath182 ) which decays exponentially with increasing intervortex distance @xmath3 ; ( ii ) an intervortex attraction which comes into play for the region @xmath183 and decays exponentially with the vortex vortex distance @xmath3 for @xmath184 ; ( iii ) long range ( pearl ) repulsion which decays as @xmath159 at large distances and results from the surface contribution to the energy . 
taking the limit @xmath182 we get @xmath185 in the region @xmath186 the short range interaction term vanishes and the interaction energy vs @xmath3 is given by the sum ( [ energy - interact ] ) of attractive and pearl s contributions . similarly to the case of decoupled layers discussed above the attractive term can result in the appearance of a minimum in the dependence of the vortex vortex interaction potential vs @xmath3 . the position of this minimum can be roughly estimated as the boundary of the region of the short range repulsion : @xmath187 . obviously , the minimum should disappear provided @xmath188 , i.e. , when the region of the attraction between vortices vanishes . this condition gives us the the upper boundary on the tilting angle @xmath119 restricting the interval of the energy minimum existence : @xmath189 the lower boundary of this angular interval can be found comparing the attractive and repulsive terms in the expression ( [ energy - interact ] ) at the distance @xmath190 : @xmath191 these analytical estimates of the angular interval are in a rough qualitative agreement with the numerical calculations ( see figs . [ fig:5],[fig:51 ] ) for two values of the film thickness @xmath192 . indeed , one can see that increasing the tilting angle we first deepen the minimum in the interaction potential and then make it more shallow . the figures confirms the deepening of the minimum with the increase in the anisotropy parameter @xmath193 . our numerical calculations demonstrate that for the film thickness @xmath194 ( fig . [ fig:5 ] ) the minimum of the interaction energy of two straight tilted vortices can appear only for @xmath195 . starting from @xmath196 the bound vortex pair becomes energetically favorable . an increase in the film thickness reduces the relative contribution of pearl repulsion to the energy of intervortex interaction @xmath140 . as a result attraction of vortices takes place for smaller values of the tilting angle @xmath119 and anisotropy parameter @xmath193 . thus , in a film with the thickness @xmath197 ( fig . [ fig:51 ] ) the minimum in the @xmath181 dependence appears for @xmath198 , whereas creation of the bound vortex pair becomes energetically favorable for @xmath199 . as a next step , we check if the above results obtained for straight tilted vortices remain valid for the curved vortex lines . our analysis of the effect of the vortex line curvature is carried out for model vortex profiles found in sec . ii b. the resulting typical dependencies of the intervortex interaction potential vs @xmath3 for different magnetic field values and anisotropy parameters are shown in figs . [ fig:6],[fig:7 ] . one can clearly see that the minimum in the interaction potential vs @xmath3 survives when we take account of the vortex line curvature . moreover the curving of the vortex line even deepens this minimum as it is confirmed by the comparison of energies of straight tilted and curved vortices presented in fig . [ fig:7 ] . for such comparison we choose the straight vortex lines connecting the ends of curved vortices . we find that for curved vortices the energy minimum exists even for smaller anisotropy parameters than for straight vortices ( i.e. , the threshold anisotropy value for @xmath194 becomes less than @xmath200 ) . of course , increasing the film thickness one can weaken the restrictions on the existence of the minimum in the interaction potential : e.g. , for @xmath201 the minimum appears at @xmath199 . 
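the interplay of the three contributions listed above can be mimicked by a toy real - space potential in which the attraction strength grows with the tilting angle . the decay laws and all parameters below are assumptions made purely for illustration ( the toy model captures only the appearance of the minimum with increasing tilt , not its suppression at the largest angles ) .

```python
import numpy as np

def toy_pair_potential(r, theta, A=5.0, B=2.0, C=0.1, lam=1.0, ell=3.0):
    """Illustrative pair potential: short-range exponential repulsion, a
    tilt-controlled intermediate-range attraction, and a slowly decaying
    Pearl-like repulsion.  Functional forms and parameters are assumptions."""
    return (A * np.exp(-r / lam)
            - B * np.sin(theta) ** 2 * np.exp(-r / ell)
            + C / r)

r = np.linspace(0.2, 40.0, 4000)
for theta_deg in (5, 45, 85):
    v = toy_pair_potential(r, np.radians(theta_deg))
    i = int(np.argmin(v))
    print(f"theta = {theta_deg:2d} deg: min V = {v[i]:+.4f} at r = {r[i]:5.2f} "
          f"-> bound pair possible: {v[i] < 0}")
```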
the above theoretical analysis demonstrates that vortex vortex attraction and the formation of chains are possible only for the rather large tilting angles and at low vortex concentrations , i.e. , when the magnetic - field component @xmath202 perpendicular to the film plane is very weak . in fields @xmath202 slightly above @xmath203 abrikosov vortices will form chains due to the long range attractive interaction . peculiarities of penetration of such chains of tilted abrikosov vortices into bulk layered ( anisotropic ) superconductor are well known : in the first approximation , the vortex period in chains does not depend on applied magnetic field , while the distance between chains changes as @xmath204 . the presence of vortex chains significantly modifies the magnetization curves with respect to analogous curves for isotropic superconductors . @xcite in the next sections we discuss additional peculiarities of intervortex interaction specific for thin film samples of layered ( anisotropic ) superconductors . the unusual vortex - vortex interaction potential behavior discussed in the previous section can result in unconventional vortex structures . we start our analysis of energetically favorable vortex structures from the problem of stability of a vortex chain . the formation of infinite vortex chains is known to be a signature of the intervortex attraction in bulk anisotropic superconductors . the long range repulsion of vortices in thin films can destroy the infinite vortex chains . indeed , despite of the fact that two vortices attract each other at a certain distance , further increase in the number of vortices arranged in a chain can be energetically unfavorable because of the slower decay of the repulsive force compared to the attractive one . in this case , for rather thin samples , there appears an intriguing possibility to observe vortex chains of finite length , i.e. , vortex molecules or clusters . in this section we present the calculations of energies of such vortex clusters . as we have demonstrated above , the minimum in the interaction potential exists for both the limits of strong and moderate anisotropy . the vortex molecule cohesion energy is given by the expression : @xmath205where @xmath10 is the number of vortices in the molecule , and @xmath206 are the distances between @xmath207th and @xmath208th vortices in the chain molecule . shown in figs . [ fig:8],[fig:9 ] are typical plots of the interaction energy per vortex vs the intervortex distance @xmath3 for equidistant vortex chains with different @xmath10 numbers calculated within the model of decoupled superconducting layers and anisotropic london theory . the energetically favorable number of vortices in a molecule grows as we increase the film thickness and/or the tilting angle because of the increasing attraction term in the pair potential @xmath140 . shown in the insets of figs . [ fig:8 ] are schematic pictures of vortex matter consisting of dimeric and trimeric molecules . finally , for rather thick samples with @xmath209 we get a standard infinite chain structure typical for bulk systems . note that the formation of an infinite vortex chain may be considered in some sense as a polymerization of the vortex molecules . certainly , the crossover from the vortex molecule state to the infinite chain structure is strongly influenced by the increase in the vortex concentration governed by the component @xmath210 of the external magnetic field perpendicular to the film . 
indeed , one can expect such a cross - over to occur when the mean intervortex spacing approaches the molecule size . thus , the vortex molecule state can appear only in a rather weak perpendicular field when its observation can be complicated , of course , by the pinning effects . considering the effect of a finite magnetic field ( i.e. , a finite concentration of vortex clusters ) we restrict ourselves by the simplest case of regular vortex arrays . for a regular vortex array the formation of clusters corresponds to the transition with a change in the number of vortices in the elementary lattice cell . the mechanism underlying such transition is naturally connected with the appearance of the minimum in the interaction potential for a vortex pair . in this section we present our calculations of energy of vortex lattices with different number of flux quanta per unit cell . the possibility to get the energetically favorable states with a few vortices per unit cell will be illustrated for a particular intervortex interaction potential derived above for a model of decoupled superconducting layers . the generalization of such consideration for anisotropic london theory is straightforward . note that the vortex lattice structure for bulk anisotropic superconductors in tilted field in the framework of london approach has been calculated in ref . . let s consider a vortex lattice characterized by the translation vectors @xmath211 , where @xmath212 are primitive vectors of the lattice . the primitive cell occupies the area @xmath213\,\cdot \mathbf{z}_0 $ ] and is assumed to contain @xmath0 vortices : @xmath214 . positions of vortices in a cell are determined by the vectors @xmath215 ( @xmath216 ) ( see fig . [ fig:10 ] ) . the interaction energy per unit lattice cell can be expressed via the vortex vortex interaction potentials ( [ eq:10c]),([eq:11c ] ) : @xmath217 the interaction energy ( [ eq:1e ] ) depends on both the relative positions @xmath218 of vortices in the primitive cell and the structure of the vortex lattice defined by the translation vectors @xmath219 . the first term in ( [ eq:1e ] ) describes the interaction energy between vortices in the primitive cell ( without the lattice contribution ) , whereas the second sum takes account of the lattice effects . with the help of the poisson formula , one can rewrite the intervortex interaction energy ( [ eq:1e ] ) in terms of the fourier components @xmath220\,,\ ] ] where the function @xmath221 is determined by the eq . ( [ eq:11c ] ) , and @xmath222 are the reciprocal lattice vectors . the sum and the integral in eq . ( [ eq:2e ] ) diverge both at @xmath223 and at large @xmath222 values . the small @xmath222 divergence corresponds to the linear ( in the system size ) increase in the vortex energy because of the slow @xmath224 decay of the vortex - vortex interaction potential . the large @xmath222 divergence is logarithmic and is associated with the vortex self energy . for simplicity , we restrict ourselves to the case of an instability with respect to the unit cell doubling and tripling , i.e. formation of the vortex lattices with two and three flux quanta per unit cell ( @xmath225 and @xmath226 ) . hereafter we consider only the shifts of vortex sublattices along the @xmath167 direction and choose the appropriate reciprocal lattice vectors @xmath227 @xmath228 for @xmath225 and @xmath226 , respectively . here we consider only equidistant vortex chains within the primitive cells . 
fixing the value of the field @xmath210 we fix the unit cell area @xmath229 for @xmath230 and @xmath231 for @xmath226 . thus , the interaction energy ( [ eq:2e ] ) depends only on two parameters : ( i ) the ratio @xmath232 characterizing the lattice deformation ; ( ii ) the relative displacement @xmath233 of the vortex sublattices along the @xmath167-axis ( see fig . [ fig:10 ] ) . to exclude the divergence at @xmath234 it is convenient to deal with the energy difference : @xmath235 the results of our numerical calculations of this energy difference are shown in fig . [ fig:11]a . one can clearly observe that , upon changing the vortex tilting angle , a minimum appears in the function @xmath236 , which gives evidence for a phase transition in the lattice structure with unit cell doubling or tripling , depending on the vortex tilting angle . the multiplication of the unit cell is accompanied by a strong change in the lattice deformation ratio @xmath237 ( see fig . [ fig:11]b ) . to sum up , we suggest a scenario of phase transitions between flux structures with different numbers of vortices per unit cell which can be realized in thin films of anisotropic superconductors placed in tilted magnetic fields . we demonstrate that the vortex interaction in films of anisotropic superconductors placed in tilted magnetic fields is very special . the underlying physics arises from the interplay between the long range attraction and repulsion between tilted vortex lines . in consequence , new and very rich types of vortex structures may appear . they are formed from vortex dimers , trimers , etc . , and the transition between different types of vortex structures may be controlled by tilting the external magnetic field and/or by varying the temperature . our theoretical findings are based on two theoretical approaches : the anisotropic london model and the london - type model of decoupled superconducting layers . taking account of the vortex tilt and bending , we analyzed the distinctive features of the vortex vortex interaction potential in a wide range of parameters and fields and demonstrated the possibility to obtain a minimum in the vortex interaction potential vs the intervortex distance . further analysis in the paper included the calculations of the energy of finite size vortex clusters as well as the energy of regular vortex arrays with different numbers of vortices per unit cell . the phase transitions accompanied by the multiplication of the primitive lattice cell appear to be possible for dilute vortex arrays , i.e. for a rather small magnetic field component @xmath210 . we believe that our theoretical predictions concerning the unusual vortex configurations are experimentally observable using modern vortex imaging methods such as lorentz microscopy , scanning tunneling microscopy , scanning hall - probe or the decoration technique . we are grateful to professor a. tonomura for stimulating discussions . this work was supported , in part , by the russian foundation for basic research , by the russian academy of sciences under the program " quantum physics of condensed matter " , by the russian agency of education under the federal program " scientific and educational personnel of innovative russia " in 2009 - 2013 , and by the " dynasty " foundation . let us evaluate the interaction energy ( [ eq:9c ] ) of two tilted parallel vortex lines taking @xmath238 and assuming @xmath146 and @xmath239 . we introduce a continuous coordinate @xmath240 and a continuous function @xmath241 .
thus , the linear system of equations ( [ eq : a1 ] ) reduces to the following integral equation @xmath242 . the equation ( [ eq : a1 ] ) can be rewritten as a differential one , @xmath243 , at the interval @xmath244 with the boundary conditions @xmath245 . introducing the notations @xmath246 , one can rewrite the equation ( [ eq : a2 ] ) and the boundary conditions ( [ eq : a3 ] ) in the dimensionless form @xmath247 . the solution of eq . ( [ eq : a4 ] ) has the form @xmath248 , where the constants @xmath249 and @xmath250 are defined by the boundary conditions ( [ eq : a5 ] ) : @xmath251 @xmath252 . in the continuous limit the expression for the interaction energy ( [ eq:9c ] ) takes the form @xmath253 , where the function @xmath254 can be calculated analytically ; the resulting expression is the spectral function quoted in eq . ( [ eq:11c ] ) of the main text . to calculate the vortex vortex interaction within the anisotropic london model we use the general expressions derived in ref . for the total energy @xmath256 of an arbitrary arrangement of curved vortices in a superconducting film of thickness @xmath87 with the @xmath97-axis perpendicular to the film plane . the summation in these expressions is carried out over @xmath268 , and @xmath269 stands for the vector component parallel to the @xmath270 plane . following ref . we introduce the fourier transform @xmath271 of the vorticity distribution @xmath272 ; the resulting kernels @xmath170 , @xmath171 and @xmath172 entering eq . ( [ straight - tilted ] ) are given by eqs . ( [ eq : b7])-([eq : b9 ] ) .
a. tonomura , h. kasai , o. kamimura , t. matsuda , k. harada , t. yoshida , t. akashi , j. shimoyama , k. kishio , t. hanaguri , k. kitazawa , t. masui , s. tajima , n. koshizuka , p. l. gammel , d. bishop , m. sasase , and s. okayasu , phys . rev . lett . 88 , 237001 ( 2002 ) .
figure captions : fig . 1 : a pancake vortex positioned in the @xmath22th layer of a finite layered structure ; @xmath14 is the thickness of a superconducting layer and @xmath13 is the distance between the layers . fig . 2 : configurations of @xmath66 ( panels a , b ) and @xmath69 ( panels c , d ) pancakes in a finite stack in the presence of the applied in - plane magnetic field @xmath89 ; a force - balanced ( equilibrium ) configuration exists only below the critical fields ( @xmath292 for @xmath66 and @xmath296 for @xmath69 ) , while above them the pancake configurations at sequential time points show the breakup of the vortex line ; here @xmath72 , @xmath297 , and @xmath298 . fig . 3 : shape of a vortex line for the anisotropy parameter @xmath299 and different values of the in - plane magnetic field @xmath300 , with the anisotropy axis ( a ) perpendicular to the film plane and ( b ) tilted with respect to the @xmath67 axis ; the numbers near the curves denote the values of the ratio @xmath305 , and the dashed line shows the shape of a vortex line in the absence of the in - plane field . figs . 4 and 5 : interaction energy [ eqs . ( [ straight - tilted ] ) , ( [ straight - tilted2 ] ) ] vs the distance r between two straight tilted vortices for anisotropic films of thickness @xmath194 and @xmath197 , for different tilting angles @xmath119 and anisotropy parameters @xmath193 . figs . 6 and 7 : interaction energy vs the distance @xmath3 between two curved vortices for an anisotropic film of thickness @xmath194 , and its comparison with the energy of straight tilted vortices ; the shapes of the vortex lines are sketched in the insets . figs . 8 and 9 : interaction energy per vortex [ eqs . ( [ eq:10c ] ) , ( [ eq:11c ] ) , ( [ eq:1d ] ) ] vs the intervortex distance @xmath3 in an equidistant chain of n vortices ( @xmath194 ) ; the numbers near the curves denote the number @xmath10 of vortices in the molecule , and the insets show schematic pictures of vortex matter consisting of dimeric and trimeric molecules . fig . 10 : geometry of the vortex lattice unit cell . fig . 11 : ( a ) the energy difference vs the relative displacement @xmath233 of the vortex sublattices and ( b ) the lattice deformation ratio @xmath232 vs @xmath233 , for different tilting angles ( @xmath322 , solid line , and @xmath323 , dashed line ) and different numbers @xmath324 of flux quanta per unit cell ( here @xmath325 ) ; the numbers near the curves denote the number @xmath0 of vortices per unit cell .
The distinctive features of equilibrium vortex structures in thin films of anisotropic superconductors in tilted magnetic fields are studied in the limits of moderate and strong anisotropy. The energetically favorable shape of isolated vortex lines is found within two models describing these limiting cases: London theory with an anisotropic mass tensor, and a London-type model for a stack of Josephson-decoupled superconducting layers. Increasing the field tilt is shown to produce qualitative changes in the vortex-vortex interaction potential: a balance between long-range attractive and repulsive forces turns out to be responsible for the formation of a minimum in the interaction potential as a function of the intervortex distance. This minimum exists only for a restricted range of vortex tilting angles, and that range shrinks as the anisotropy parameter of the system decreases. Tilted vortices with such an unusual interaction potential form clusters whose size depends on the field tilting angle and the film thickness, and/or can arrange into a multiquanta flux lattice. The magnetic flux through the unit cell of the corresponding flux-line lattice equals an integer number of flux quanta. Thus, increasing the field tilt should be accompanied by a series of phase transitions between vortex lattices with different numbers of flux quanta per unit cell.
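As a point of reference only (a textbook result, not this paper's film geometry): in isotropic bulk London theory two parallel straight vortices always repel, so no such minimum can form; the effect described above arises when tilt and anisotropy add a slowly decaying attractive contribution that competes with this repulsion. A minimal sketch, with K_0 the modified Bessel function, lambda the penetration depth, and Phi_0 the flux quantum:

```latex
% Bulk isotropic London theory (standard textbook result, Gaussian units):
% the interaction energy per unit length of two parallel straight vortices
% a distance R apart is purely repulsive,
\[
  U_{\mathrm{bulk}}(R) = 2\,\varepsilon_0\, K_0\!\left(\frac{R}{\lambda}\right),
  \qquad
  \varepsilon_0 = \left(\frac{\Phi_0}{4\pi\lambda}\right)^{2},
\]
% and decreases monotonically with R, so it has no minimum. Schematically,
% the tilted-vortex case discussed above corresponds to
\[
  U(R) \;\approx\; U_{\mathrm{rep}}(R) \;-\; U_{\mathrm{attr}}(R),
\]
% where the long-range attractive part competes with the repulsion and can
% produce a minimum at a finite intervortex distance for a restricted range
% of tilting angles.
```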
Hand-outs in the local languages, with a simple description of the screening criteria that could be understood even by casual observers, were provided to all. Every week, on a pre-fixed schedule, an ROP-trained retinal specialist (SJ) visited each participating NICU for ROP screening. Screening and treatment protocols were standardized, and a portable diode laser with indirect-ophthalmoscope delivery was used. After each evaluation, all data on the preterm babies were recorded prospectively on standard forms and entered into a computerized database for analysis. All preterm babies with any stage of ROP who were referred directly to the institute's retina services from other NICUs or practitioners were also included prospectively in our database. The timing of the first screening, called the "day-20, day-30" strategy, was modified to suit local needs. At that time, the first examination was recommended at 3-6 weeks after birth or at 31 weeks post-conceptional age, whichever was earlier; this was to be calculated primarily from the gestational age (based on the post-menstrual age) plus the chronological age. We realized that this approach had problems for strict implementation and comprehension by the various service providers. Gestational age in weeks was difficult to calculate readily and was cumbersome, because many mothers could not accurately recall the date of their last menstrual period (LMP) and antenatal data were often missing. To have a benchmark and to avoid inconsistencies, ambiguity, and confusion, our strategy fixed the first screening at day 30 of life, and in smaller babies (gestational age possibly less than 30 weeks and/or birth weight less than 1200 g) at day 20 of life. The date of birth was reliably known to all, so the day-30 and day-20 criteria were easy to comprehend and follow by everyone involved in the care of the preterm child. Where the gestational age was unknown, a preterm baby was to have at least one screening irrespective of the recorded weight or estimated gestational age. In order to analyze and audit our work, and to plan further screening strategies, we decided to evaluate the impact of our screening strategy on controlling ROP blindness in our community two years after the program's onset. This report, Report 3 of the Indian Twin Cities ROP Study (ITCROPS), is an unmasked, retrospective, case-control study of records from the prospectively collected ITCROPS database. Cases were ROP babies screened directly in the NICUs that we visited regularly, and controls were ROP babies referred to the institute's retinal services from other NICUs or practitioners in our city, the surrounding areas, or other states during the same period. Historical controls were ROP cases seen at the institute in the years preceding the start of ITCROPS. Our hypothesis was that an NICU-based screening program would enhance timely screening, detection, and treatment of retinopathy of prematurity and improve treatment outcomes. The ROP patients were first grouped into a prospective, post-screening-program ITCROPS database (Group 1; 1 December 1999 to December 2001).
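Returning to the day-20, day-30 timing rule described above, a minimal sketch of how it could be encoded is given below. The thresholds (about 30 weeks gestation and/or birth weight below 1200 g for the earlier day-20 examination) are those quoted in the text; the function name, the data layout, and the fallback to day 30 when gestational age is unknown are illustrative assumptions, not the study's protocol document.

```python
from datetime import date, timedelta
from typing import Optional

def first_rop_screening_date(date_of_birth: date,
                             gestational_age_weeks: Optional[float] = None,
                             birth_weight_g: Optional[float] = None) -> date:
    """Latest recommended date for the first ROP screening under the
    day-20 / day-30 rule described in the text.

    Smaller babies (roughly <30 weeks gestation and/or <1200 g) are screened
    by day 20 of life; all other preterm babies by day 30. If gestational age
    is unknown, the text asks for at least one screening regardless of weight,
    so the default day-30 deadline is used here.
    """
    smaller_baby = (
        (gestational_age_weeks is not None and gestational_age_weeks < 30) or
        (birth_weight_g is not None and birth_weight_g < 1200)
    )
    deadline_days = 20 if smaller_baby else 30
    return date_of_birth + timedelta(days=deadline_days)

# Example: a 29-week, 1100 g baby born on 1 March 2024 falls under the day-20 rule.
print(first_rop_screening_date(date(2024, 3, 1), 29, 1100))  # 2024-03-21
```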
This post-screening group of patients was further subdivided into NICU-screened patients (Group 1a, children from centers that were part of our program; cases) and non-NICU-screened patients (Group 1b, children from NICU centers not part of our program but seen during the same period when referred directly to the institute's retinal services; controls). For the historical control group (Group 2), records of consecutive babies with an ICD (International Classification of Diseases) diagnostic code for ROP were retrieved retrospectively from the institute's computerized medical records, from June 1987 (when the institute started) to 30 November 1999. The data sheets of all participants were analyzed for gestational age, birth weight, age at presentation, and stage and severity of the disease at the time of presentation. The primary outcome measure was the stage of disease at first presentation, classified as better than stage 4 or as stage 4 or worse ROP. Secondary outcomes were the age at presentation, the proportion of babies that could be treated, and the final treatment outcomes. The final outcome was defined as good, fair, or poor depending on the status of the retina at the last follow-up, at least 4 months or more after birth, when acute-stage ROP is generally stable. A good outcome included fully regressed ROP with no vision-affecting sequelae, a completely attached macular area, and clear media. The outcome was classified as fair for regressed ROP with clear media but with sequelae that could potentially affect vision, such as a macular fold, macular heterotropia, macular pigmentary change, or squint. A poor outcome was defined as total retinal detachment or media haze with no or minimal visual potential. Visual loss from non-ROP causes was noted separately. Categorical data were compared using Fisher's exact test, and between-group differences in means were compared using the independent-samples t-test. Of the 643 cases screened during the study period, 161 had ROP. Among the patients with ROP, various parameters were compared between the NICU-based ROP patients (Group 1a) and the directly referred patients (Group 1b), as shown in Table 1. We also compared the pre-screening-program group (Group 2) and the post-screening-program group (Group 1), as shown in Table 2. [Table 1: comparison of NICU-screened vs. non-NICU-screened ROP cases in the post-screening-program period (Group 1).] [Table 2: comparison of ROP cases in the pre-screening and post-screening-program periods.] The median age at presentation in Group 2 was much higher than in Group 1. The range of ages was, however, quite large because of outliers who missed timely screening and presented only when the child was symptomatic. The number of eyes presenting at stage 4 or 5 was also significantly higher (p < 0.0001) in Group 2 than in Group 1; the risk of an eye presenting at stage 4 or 5 was 4.7 times higher in Group 2 (95% confidence interval 3.07-7.32) than in Group 1. The number of eyes that could be offered treatment in Group 2 was significantly smaller (p < 0.0005) than in Group 1. The final outcome in Group 2 was significantly poorer (p < 0.0001) than in Group 1; the risk of a poor outcome was 3.83 times higher in Group 2 (95% confidence interval 2.75-5.34) than in Group 1. Patients in Group 1a were seen earlier than those in Group 1b and showed good compliance with our day-30 strategy, although here, too, a few babies were non-compliant and presented at late stages.
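A minimal sketch of the two comparisons named above, run on entirely made-up numbers (the study's raw data are not reproduced here); scipy's standard implementations are used, and all variable names and counts are illustrative assumptions.

```python
import numpy as np
from scipy.stats import fisher_exact, ttest_ind

# Hypothetical 2x2 table (NOT the study's counts):
# rows = group, columns = (eyes at stage 4/5, eyes below stage 4).
table = np.array([[30, 20],
                  [15, 85]])
odds_ratio, p_cat = fisher_exact(table)
print(f"Fisher's exact test: p = {p_cat:.3g}")

# Hypothetical ages at presentation in months (NOT the study's data),
# compared with an independent-samples t-test as described in the text.
age_group2 = np.array([6.5, 7.2, 8.0, 5.9, 9.1, 7.4])
age_group1 = np.array([1.1, 1.4, 0.9, 2.0, 1.3, 1.6])
t_stat, p_means = ttest_ind(age_group2, age_group1)
print(f"Independent-samples t-test: t = {t_stat:.2f}, p = {p_means:.3g}")
```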
The risk of an eye presenting at stage 4 or 5 was 1.92 times higher in Group 1b (95% confidence interval 1.4-2.6) than in Group 1a. A significantly higher number of patients (p < 0.0005) in Group 1a received treatment, with statistically better outcomes than in Group 1b; the risk of a poor outcome was 2.21 times higher in Group 1b (95% confidence interval 1.31-3.71) than in Group 1a. The natural course of ROP offers a narrow window of opportunity for treatment, which is recommended at the high-risk pre-threshold and threshold stages. If ROP can be diagnosed and treated within this period, the results are quite encouraging. Group 2 reflects the natural history of the disease when no ROP program existed and the disease was just emerging in our city: we were mainly seeing advanced, untreatable blindness, with children presenting universally late, only when leucocoria, squint, or vision loss was detected by the caregivers. Our study data show that the best outcomes were seen in cases from participating centers, where babies were screened in the NICU itself and consequently diagnosed and treated earlier. We believe that still better strategies need to be devised to further reduce failures due to late presentation and delayed treatment, since even in Group 1a some babies came late [Table 1]. A few isolated, hospital-based screening programs exist in India, but there was no city-based NICU screening strategy that would ensure universal coverage. Similar city-wide strategies improved compliance from 47% to 73% in a geographical area in the UK. Periodic screening guidelines and audits, such as those presented here, help ensure timely and effective ROP screening of all eligible babies without wasting resources on evaluating low-risk infants. The timing of the first ROP screening was initially established in the literature on the basis of ROP data and the understanding of the disease before the CRYO-ROP studies. At that time, zone I disease was poorly recognized, and extremely premature babies of less than 27 weeks were uncommonly seen; the recommendations were therefore based on the natural course of zone II disease in babies of more than 27 weeks' gestational age, as these were better represented in the study groups. Zone I disease and aggressive posterior ROP (APROP) are now increasingly recognized as a distinct manifestation of severe, early-onset ROP with the highest chance of progression to threshold ROP. The treatment guidelines for ROP have also changed considerably, with treatment now recommended in pre-threshold eyes that are at risk. However, in our center, and in many other centers in Asia, progressive pre-threshold ROP, especially in zone I, has been treated well before the new early-treatment guidelines for ROP. To detect the disease in a treatable form and to improve outcomes in zone I disease, which usually progresses rapidly, earlier screening appears to give better outcomes, as seen in the comparison between Groups 1a and 1b. Screening at a median age of less than 1 month after birth had a substantially better impact on the stage at presentation being less than stage 4, on the ability to offer treatment, and on achieving good-to-fair final anatomical outcomes with a minimum of poor outcomes. This day-20, day-30 strategy will need further verification in future studies in different geographic areas.
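For orientation, one standard construction for intervals like "1.92 (95% CI 1.4-2.6)" is the log-relative-risk normal approximation sketched below; the text does not state which method the authors actually used, so this is an assumption offered only to show how such numbers typically arise.

```latex
% With a events out of n_1 eyes in one group and b events out of n_2 eyes
% in the other, the relative risk and a 95% confidence interval are commonly
% computed as
\[
  \mathrm{RR} = \frac{a/n_1}{b/n_2},
  \qquad
  \mathrm{SE}\!\left[\ln \mathrm{RR}\right]
    = \sqrt{\frac{1}{a} - \frac{1}{n_1} + \frac{1}{b} - \frac{1}{n_2}},
\]
\[
  \text{95\% CI} = \exp\!\Bigl(\ln \mathrm{RR} \pm 1.96\,\mathrm{SE}\!\left[\ln \mathrm{RR}\right]\Bigr).
\]
```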
We believe that the simplicity and accuracy of this strategy allow improved and measurable compliance with the timing of the first screening, besides being readily understood by all caregivers of the preterm baby. In our country, and perhaps in other middle-income countries, where antenatal and cultural or socio-economic circumstances differ from those of many Western countries, ascertainment of gestational age and post-conceptional age is not always accurate. Antenatal records are not universally available, and mothers are often unable to recall the LMP. Birth weight may or may not be recorded, owing to home deliveries and lack of infrastructure; the accuracy of weighing machines may vary between centers, and the weighing methodology may differ (with or without clothes). Some babies are large for gestational age, such as those with fluid retention (hydrocephalus, intestinal obstruction, congenital renal dysfunction, infants of diabetic mothers, etc.), and can be overlooked if the gestational age is not precisely known. To avoid missing such babies, our screening criteria specify that any baby whose exact gestational age is not known and who is labeled preterm by a pediatrician should have at least one ROP screening irrespective of birth weight. The day-20 and day-30 strategy helped to maintain uniformity and compliance in the timing of screening, because the date of birth was well known to all; this was the terminology used in case notes by pediatricians and other caregivers, and it can be used in epidemiological surveillance. Our treatment and outcome results were better in the NICU-screened group, with screening done at a median of less than a month (Group 1a), than in the group screened at a median of 2.3 months (Group 1b) during the same study period. One limitation of our study is that babies in the different groups could have differed in post-natal events and in the type of NICU care they received before presenting to us. However, some of these differences may be balanced across the groups because of the similar mean gestational age in all groups. We have not included those data in the current study because, firstly, the risk factors for ROP are already well known, and since these babies had already developed ROP, this report does not examine risk factors. Secondly, the emphasis of this report is on documenting how disease presentation and outcomes improve when the screening strategy is implemented in the NICU itself (Group 1a) and on a city-wide scale, rather than with no screening (Group 2) or with screening at the ophthalmologist's office only (Group 1b). Additionally, our NICU screening involves equal numbers of tertiary, secondary, and primary NICUs, since we screen across the whole of the twin cities, including government, charitable, and private hospitals. Further database analysis is already underway to present ROP data by type of NICU versus disease presentation and outcomes. The current study evaluated a group of ROP children identified through different strategies and assessed their ROP presentation and anatomical outcome as a measure of the program's success. Our data highlight the positive impact of our NICU-based, city-wide screening program and the efficacy of the unique strategies we developed in reducing blindness due to ROP. It provides a model that brings the science and technological know-how from the research literature to the bedside of high-risk infants in the community.
Early screening within a month of birth in NICUs and prompt treatment lead to favorable outcomes. The National Neonatology Forum (NNF) of India has recently endorsed some of these strategies in its national ROP guidelines. As in most developed countries, these need further endorsement, followed by monitoring and punitive action for defaulters, by our national pediatric and ophthalmology societies to ensure wider and uniformly effective implementation.
Context: Outcomes of various screening strategies in retinopathy of prematurity are not well reported. Aim: To assess the impact of a city-wide ROP screening strategy on disease presentation and treatment outcome. Materials and Methods: A retrospective case-control study from a prospectively collected ROP database was analyzed. Cases (Group 1a) included ROP babies who were screened directly in neonatal intensive care units, and controls (Group 1b) were babies referred directly to the institute from other neonatal centers during the same period. Historical controls (Group 2) were ROP cases seen in the years preceding the establishment of this ROP program and database. The primary outcome measure was the risk of eyes presenting with stage 4 or worse ROP, and the main secondary outcome measure was the final anatomic outcome. Results: Of the 643 cases screened, 322 eyes of 161 babies had ROP. The median age at presentation of 7.19 months for the 46 patients (92 eyes) in Group 2 was higher than the median age of 1.29 months for the 115 patients (230 eyes) in Group 1. Within Group 1, Group 1a had a lower median age at presentation than Group 1b (0.91 months versus 2.30 months). The risk of an eye presenting at stage 4 or 5 was 4.7 times higher in Group 2 (95% confidence interval 3.07-7.32) than in Group 1. Significantly fewer eyes could be given treatment in Group 2 than in Group 1 (p < 0.0005). The risk of a poor outcome was 3.83 times higher in Group 2 (95% confidence interval 2.75-5.34) than in Group 1. Group 1a eyes had the best outcomes. Conclusion: Early screening, before one month of age, in neonatal centers detects the disease early, when prompt treatment can lead to favorable outcomes. The study provides early results of a model strategy for ROP screening.