Six Irish students have been killed in Berkeley, California, when a fourth-floor balcony collapsed.
Niccolai Schuster (21), Eoghan Culligan (21), Eimear Walsh (21), Olivia Burke (21), Ashley Donohoe (22) and Lorcán Miller (21) were celebrating a 21st birthday when the balcony collapsed on to the balcony below.
Niccolai Schuster and Eoghan Culligan were former pupils of St Mary’s College in Rathmines, Dublin.
Ashley Donohoe is an Irish-American from Rohnert Park, which is 50 miles north of San Francisco. She and Olivia Burke are cousins.
Seven other people were seriously injured. Berkeley police spokeswoman Jennifer Coats said the survivors’ injuries were “very serious and potentially life-threatening”.
The victims, who had travelled to the United States on J-1 summer visas, fell from the fourth floor of an apartment building when the balcony gave way at 12.40am on Tuesday.
Four died at the scene and two others were pronounced dead at a local hospital, police said.
Pictures from the scene showed that the balcony had detached from the wall and collapsed on to a balcony on the third floor of the pale stucco building on Kittredge Street, near the University of California, Berkeley.
The City of Berkeley has released, along with the update on the incident, the 57-page building and safety inspection history for 2020 Kittredge Street.
The collapsed balcony and the three other similar balconies in the building have been red-tagged, prohibiting access to those areas.
The City said it had ordered the property owner to immediately remove the collapsed balcony and to perform a structural assessment of the remaining balconies within 48 hours.
BlackRock, the investment giant that advises the property fund that owns the building, and Greystar, the Texas-based company that manages it, said that an independent structural engineer would carry out an investigation to determine the cause of the accident.
Irish students who had been sleeping in the building at the time described hearing a bang. “I walked out and I saw rubble on the street and a bunch of Irish students crying,” said Mark Neville, a J-1 student.
Taoiseach Enda Kenny said the news from California was “truly terrible” and that his thoughts were with the victims’ families. “My heart breaks for the parents who lost children this morning and I can only imagine the fear in the hearts of other parents,” he said.
Mr Kenny will update the Dáil in the morning, and it is understood arrangements for the business of the House may be reviewed.
A number of TDs have called for the tragedy to be marked in some way.
A book of condolences will be open in the Mansion House in Dublin on Thursday and Friday from 10am-4pm.
Earlier the Department of Foreign Affairs set up an emergency telephone line (+353 1 418 0200) and activated its consular response team. The Irish consulate in San Francisco, a city that is a popular J-1 summer destination for Irish students, was also arranging grief counselling.
The balcony was holding 13 students when it collapsed, Mr Kenny said, citing California police.
The dead and injured were brought to three hospitals - Highland Hospital in nearby Oakland; Eden Medical Centre in Castro Valley, about 18 miles from the scene of the accident; and John Muir Medical Centre in Walnut Creek.
Apartments in the complex, completed in 2007, are available to rent for $2,150 to $4,000 a month, according to its website.
The cause of the collapse was not clear. As part of the City’s investigation of the incident, it will be retaining possession of the collapsed materials. Its investigation is expected to take several days.
Gene St Onge, an Oakland civil and structural engineer who reviewed a picture of the detached balcony at the request of the San Francisco Chronicle, said it appeared to be “a classic case of there being inadequate waterproofing at the point where the deck meets the house.”
While stressing that his assessment was preliminary, he said: “If the waterproofing is substandard, rainwater can enter the building, causing dry rot, which can destroy the wood members within a short time, i.e. only a few years from construction.”
Carrie Olson, a preservation expert who was a member of the City of Berkeley Design Review Committee for 14 years, told The Irish Times the balconies were not constructed to hold large numbers of people.
President Michael D Higgins sent a message of condolence while on a state visit to Italy. “I have heard with the greatest sadness of the terrible loss of life of young Irish people and the critical injury of others in Berkeley, California, today,” he said.
Minister for Foreign Affairs Charlie Flanagan said it was “an appalling loss of life for young people whose hopes and dreams of the future have suddenly and without notice been shattered.”
The US Ambassador to Ireland, Kevin O’Malley, expressed sympathy to the families, loved ones and friends of those who died.
Some of the students attended University College Dublin. The college has made counselling and student support services available to students in San Francisco and in Dublin. An online book of condolences will be opened on the website ucd.ie
Philip Grant, the Consul General of Ireland, held a wreath-laying ceremony at 5pm local time near the site of the balcony collapse. ||||| Water seeping into the horizontal beams supporting a balcony could have caused dry rot, contributing to a balcony collapse that killed six people in Berkeley, engineers who visited the scene said Tuesday.
“It appears to be a classic case of dry rot, meaning water intruded into the building [and] rotted the wood” that supported the balcony, said Gene St. Onge, a civil and structural engineer in Oakland. With more than a dozen people on the balcony, “it gave way. It didn’t have enough residual strength, and it failed.”
St. Onge said the broken wooden beams protruding from the building that once held up the balcony show what looks like signs of dry rot.
“It appeared to be shredded and darkened and had all the appearance of wood that had been totally compromised by dry rot,” he said.
The balcony itself should have been able to support the weight of 13 or 14 people, he said.
“If you had 14 people, and they were all -- I don’t know -- football players, and they were jumping up and down, you would get a fair amount of deflection, depending on how well the railing was tied back,” St. Onge said. “But if the [wooden supports] were designed even under minimal standards, it should still have held.”
There are other clues that the wood had rotted. There is visible mold in one of the broken wooden joists. And it broke into short fibers at the failure point, a sign of dry rot; if the wood had not rotted, “you would see long slender splinters. It would look like a broken baseball bat,” said Bernard Cuzzillo, a consulting engineer with a doctorate in mechanical engineering from UC Berkeley, who visited the balcony scene Tuesday.
And when you look at what used to be the floor of the balcony, many of the wooden joists that once supported it have disintegrated.
Cuzzillo offered his interpretation of what happened:
The seven horizontal wooden joists that supported the balcony broke. The deck folded straight down 90 degrees, while the guardrail assembly flipped upside down.
With the deck flipped, it’s possible to see the condition of the balcony’s floor. “You will notice when you look through those things, you see a bunch of vertical lines. Those vertical lines correspond to where the joists had been attached at the bottom of the deck assembly,” he said.
“And the very startling thing is that only remnants of the joists remain in those locations,” Cuzzillo said. “You’re basically looking at what had been the joists, lined up now vertically, and now mostly gone, because they’re rotten. So basically, almost all that’s left of the joists are its shadows.”
Added Cuzzillo, “It became degraded over time due to dry rot. But then it completely disintegrated in the incident, in the fall, when it broke off.”
The wood was so deteriorated at the balcony site that when workers on the scene touched the wood, parts of it broke off, said Darrick Hom, president of the Structural Engineers Assn. of Northern California and an Oakland structural engineer for Estructure, who went to the scene Tuesday afternoon.
“It was decayed. They were touching it with their hand and pieces were coming off. Obviously, if you touch a wood beam on your deck, it should not come off in your hand,” Hom said.
Hom said that as he left Tuesday afternoon, workers were starting to cut open the intact balcony just below the collapsed one, possibly to examine the condition of that balcony.
He agreed that had the balcony been built to the minimum code and been in good condition, it should have been able to support 13 people. “Just the pure weight is not the deciding factor,” Hom said.
Hom said he expected investigators would look at how the balcony was designed by a structural engineer, and whether it was constructed based on the approved drawings.
He said it’s surprising to see this kind of collapse for such a new building. “To see something like this is very unexpected,” he said. It will be important to learn from this and prevent this from happening in the future, he said.
City records show the Library Gardens apartments at 2020 Kittredge Street were proposed as a mixed-use development in 2000 that was ultimately completed in 2007. The building has more than 175 rental units and 3,000 square feet of commercial space.
The owner of the land is listed as Granite Library Gardens, an investment fund managed by New York-based BlackRock. BlackRock leases the property to Greystar, a Virginia company that owns more than 400,000 residences nationwide, including Library Gardens.
Rent for one- and two-bedroom apartments at Library Gardens ranges from $2,150 a month to $4,000.
Waterproofing the supports that hold up balconies is extremely important. The wooden horizontal beams that hold them up protrude from the building. If the beams start to rot, the entire balcony can come tumbling down.
“That junction, where the [wooden] members come up beyond the exterior wall, is critically important to waterproof properly,” St. Onge said. “It appears as though something failed there. Either the detailing wasn’t adequate, or the construction was not done properly, or something happened that allowed water to intrude.”
St. Onge said it’s important to inspect apartment balconies.
“We’re seeing a lot of structures going back to the ’60s and ’70s -- they were built properly at the time – they’re starting to fail or failing completely because of age, and they’ve been neglected and not taken care of,” he said. “There have been a number of cases where decks have failed just simply because the owners haven’t been paying attention and repairing or replacing them as they should.”
City officials declined to comment at an early afternoon news briefing as to what caused the balcony to collapse. They said they are investigating.
Authorities said three of the building's other balconies have been red-tagged, meaning people are not allowed on them. They have asked for a complete structural evaluation.
The property management company for the Library Gardens apartment complex released a statement Tuesday expressing the firm's condolences over the tragedy.
"Our hearts go out to the families and friends of the deceased and those injured in this tragic accident. As the property management company, we have taken precautionary steps to limit access to other balconies at the apartment complex as law enforcement completes its investigation," the statement said. "The safety of our residents is our highest priority and we will be working with an independent structural engineer and local authorities to determine the cause of the accident. We will share more details as we have them." ||||| 6 dead, 7 hurt in Berkeley balcony collapse
A 21st birthday party in Berkeley full of students from Ireland turned into a scene of chaos and tragedy early Tuesday when the small fourth-floor balcony of the apartment hosting the bash gave way, killing six young people who crashed to the street below and injuring seven more.
The accident at the four-story Library Gardens complex at 2020 Kittredge St., near the UC Berkeley campus, unleashed waves of grief across Ireland as well as Rohnert Park, where one of the victims was from. It launched Berkeley homicide detectives and building inspectors into what they said would be a swift investigation.
While city officials wouldn’t comment on their initial findings, independent experts who viewed the damage in person or through photographs said it appeared rainwater had penetrated the balcony’s wood structure, causing dry rot that weakened it. Such rot, they said, can happen in just a few years if a building isn’t properly sealed from the elements.
Four people died at the scene and two others died at a hospital, police said. Two of the seven who were injured were in critical condition at trauma centers. And those who witnessed the balcony’s sudden plunge at 12:40 a.m. remained stunned hours later.
“I saw a bunch of bodies,” said Jason Biswas, 16, a student at nearby Berkeley High School. He said the victims were in “piles of blood,” adding, “It seemed like a movie, but it wasn’t.”
Alameda County officials identified the dead as Ashley Donohoe, 22, of Rohnert Park and Irish nationals Olivia Burke, Eoghan Culligan, Niccolai Schuster, Lorcan Miller and Eimear Walsh, who were each 21.
Donohoe was a soccer star and homecoming queen before she graduated from Rancho Cotate High School in 2011, and she and Burke were cousins. Their grieving family members were too overcome to speak Tuesday.
Irish officials said many of those at the party were in the United States on J-1 nonimmigrant visas, which are given to those approved to participate in work-and-study-based exchange visitor programs.
“The families who have been bereaved in the tragedy in Berkeley earlier today have now all been contacted,” Irish Foreign Affairs Minister Charlie Flanagan said in a statement. “I again want to express my deepest sympathy to the families and loved ones of those who lost their lives in this appalling incident.”
Makeshift memorial
On Kittredge Street, a makeshift memorial sprang up on the sidewalk. Neighbors delivered flowers and cards amid hugs and tears. Onlookers came throughout the day, crowding behind police barricades as city crews, hoisted by cranes, inspected what was left of the balcony and gathered shards for examination.
The balcony itself tumbled over and landed upside down on the third-floor balcony below. Red cups, tree branches and other debris littered the sidewalk.
In the evening, Mayor Tom Bates was joined by Ireland’s consul general in San Francisco, Philip Grant, for a ceremonial wreath-laying to honor the dead. A bagpiper played a mournful tune.
Berkeley officials said the apartment complex, which was built from 2005 to 2007, was subject to city and state building codes established in 1998, which mandated that balconies support at least 60 pounds per square foot. The balcony that collapsed appeared to be roughly 30 square feet. City officials said the apartment had no sign warning of the balcony’s capacity — and none was required by law.
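As a rough arithmetic aside (the 165 lb average occupant weight is an assumption, not a figure from the article), the code-minimum design load quoted above can be compared with the estimated load of 13 people:

```python
# Rough check of the load figures cited above. The 165 lb average
# occupant weight is an assumption; the article gives only the code
# minimum (60 psf) and the approximate balcony area (30 sq ft).
DESIGN_LIVE_LOAD_PSF = 60   # lb per square foot, per the 1998 code cited
BALCONY_AREA_SQFT = 30      # approximate area of the collapsed balcony
OCCUPANTS = 13
AVG_WEIGHT_LB = 165         # assumed average occupant weight

min_design_capacity = DESIGN_LIVE_LOAD_PSF * BALCONY_AREA_SQFT  # 1,800 lb
occupant_load = OCCUPANTS * AVG_WEIGHT_LB                       # 2,145 lb

print(f"Code-minimum design load: {min_design_capacity:,} lb")
print(f"Estimated occupant load:  {occupant_load:,} lb")
```

On these rough numbers the party load exceeds the bare code minimum by only about 20 percent, and structural members are normally designed with safety factors well above the code minimum, which is consistent with the engineers' statements elsewhere in these reports that a sound balcony should have held 13 people.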
The officials would not speculate on what may have caused the balcony to break away. “In 48 hours we should know more,” said Matthai Chakko, a city spokesman.
As the investigation began, though, officials red-tagged three similar balconies at the 176-unit apartment complex out of concern that they might not be structurally sound. The city ordered the property owner to “perform a structural assessment of the remaining balconies within 48 hours,” said Chakko.
Bates said the tragedy was a “wake-up call,” and that city officials planned to inspect 13 other buildings under construction in the city to ensure they are safe.
Library Gardens, which consists of two buildings, has an assessed value for tax purposes of $65.6 million, according to public records. On its website, the manicured complex is described as the “premiere choice for convenient Berkeley apartments.” Units rent for $2,150 to $4,000 a month.
In a statement, property manager Greystar said, “Our hearts go out to the families and friends of the deceased and those injured in this tragic accident.
“The safety of our residents is our highest priority,” said Greystar, which is headquartered in Charleston, S.C., and has offices in San Francisco, “and we will be working with an independent structural engineer and local authorities to determine the cause of the accident.”
A similar statement was released by New York-based private equity group BlackRock, which bought the complex in 2007 and serves as the investment adviser for a real estate fund that owns the property. The owner is listed under the name Granite Library Gardens LP.
Wood joists ‘degraded’
Bernard Cuzzillo, a mechanical engineer who owns a Berkeley laboratory and studies why structures fail, came to the scene to view the damage and take photographs.
He said the wood structure of the balcony — which sat beyond a set of French doors — appeared to have been exposed to rain and that the “wood joists are obviously degraded due to dry rot.” He was not involved in the investigation.
Gene St. Onge, an Oakland civil and structural engineer, reviewed a picture of the detached balcony at the request of The Chronicle.
While stressing that his assessment was preliminary, St. Onge said, “This appears to be a classic case of there being inadequate waterproofing at the point where the deck meets the house. If the waterproofing is substandard, rainwater can enter the building, causing dry rot, which can destroy the wood members within a short time, i.e. only a few years from construction.”
Carrie Olson, who was on the Berkeley Design Review Committee that approved the building in 2001, said the balcony that collapsed was intended largely as decoration, and was “definitely not large enough to be what the city would call an ‘open space balcony,’ where groups of people could stand outside.” Olson abstained from the 2001 vote.
The number of people who fell, and the distance they dropped, horrified witnesses, who described a frantic scene on the street.
Gerald Robinson of Berkeley said he had just left a movie and was in his car when a young man and woman with blood on them flagged him down. He drove them to Highland Hospital in Oakland and stayed with them for about an hour.
“They were distraught. They were hanging on each other for comfort,” said Robinson, 65.
The two, both Irish, told Robinson that the balcony had collapsed during a 21st birthday celebration for a friend. “They were having a party — suddenly it went down,” Robinson said. “It came down really fast and chucked everybody off.”
Another witness, 18-year-old Xueyao Song, said, “It was really horrible. We came down and saw people crying, holding each other.”
Owen Buckley, who lives on the third floor of the building and is also an Irish student who came to the Bay Area for the summer to work, was not at the party but heard the collapse.
“I thought someone had gotten shot,” he said.
Rushed to hospitals
Two women and a man were taken by ambulance to Highland Hospital, officials said, while three men and a woman were rushed to Eden Medical Center in Castro Valley. At least one victim was taken to John Muir Medical Center in Walnut Creek.
The balcony collapsed about 40 minutes after Berkeley police officers were notified by dispatchers that someone had complained about a “loud party” in the building, said Police Chief Michael Meehan.
The chief said, however, that officers had not yet responded to the complaint because of other more serious calls and that, even if they had, it’s “highly doubtful” that officers would have gone inside.
San Francisco Chronicle staff writers J.K. Dineen and Michael Cabanatuan contributed to this report.
||||| (AP) — Five of the Irish college students who died when a fourth-floor balcony collapsed were part of a popular cultural exchange program allowing foreign students to work and travel in the United States.
The U.S. government's J-1 Summer Work Travel program brings 100,000 college students to this country every year, with many finding jobs at resorts, summer camps and other attractions.
Here's a look at the program:
WHAT IS IT?
The program — created under the Fulbright-Hays Act of 1961 — allows foreign college students to spend up to four months living and working in the U.S. It was meant to foster cultural understanding and has become a booming, multimillion-dollar international business. Participation has grown from about 20,000 in 1996 to a peak of more than 150,000 in 2008.
WHO RUNS IT?
The State Department has 41 designated sponsors that help students arrange visas and find jobs and housing. Students pay thousands of dollars to participate in the program. The San Francisco Bay Area is especially popular with Irish students, many of whom work at Fisherman's Wharf and other tourist sites.
HAVE THERE BEEN PROBLEMS?
A 2010 investigation by The Associated Press found that many students came to the U.S. only to learn the jobs they were promised didn't exist. Some had to share beds in crowded houses or filthy apartments. Following the AP's investigation, the State Department tightened its rules governing participating businesses.
IS THERE OVERSIGHT?
In the past, unscrupulous third-party brokers working for sponsors have taken advantage of students, cramming them into tiny, roach-infested apartments while charging exorbitant rent.
Sponsors now take a more active role with housing. They have to keep records on where the students are living and stay in contact with them during their four-month stay. There's currently no requirement for sponsors to vet the housing for the program's participants, said Susan Pittman, a spokeswoman for the State Department. Still, she insists the department monitors the program, adding that last year it made 717 unannounced visits to sponsors and employers. | In what should be a wake-up call for property owners, investigators believe the horrific balcony accident that killed six young people and injured seven in Berkeley, Calif., yesterday was caused by a "classic case of dry rot"—even though the building was less than 10 years old. A civil engineer who inspected the scene tells the Los Angeles Times that the fourth-floor balcony should have been able to support the weight of 13 people, even if they were football players jumping up and down, but water appears to have seeped in and "totally compromised" the wooden beams holding up the balcony. Officials say the apartment complex was completed in 2007 and three other balconies there have been red-tagged, the San Francisco Chronicle reports. Waterproofing the point where wooden beams come out of an exterior wall is "critically important," and it appears "something failed there," the engineer tells the LA Times. "Either the detailing wasn't adequate, or the construction was not done properly, or something happened that allowed water to intrude." The victims have been named as Irish citizens Niccolai Schuster, Eoghan Culligan, Eimear Walsh, Olivia Burke, and Lorcan Miller, all 21, and 22-year-old Irish-American Ashley Donohoe, who is Burke's cousin, the Irish Times reports. The Irish students were in the US as part of the J-1 Summer Work Travel exchange program that brings 100,000 students to the US every year, the AP reports. |
SOCOM and the Army are purchasing two separate active infrared countermeasure systems to protect U.S. aircraft. They plan to spend a total of approximately $2.74 billion, including about $2.475 billion for 815 ATIRCM systems and associated common missile warning systems and about $261 million for 60 DIRCM systems and its own unique missile warning system. In addition, there are many other potential customers for an active infrared countermeasure system, such as Air Force, Navy, and Marine Corps aircraft that have not yet been committed to either ATIRCM or DIRCM.
SOCOM and the Army both have a need for an effective integrated infrared countermeasure system capable of defeating infrared guided weapon systems. The Army considers this capability especially critical to counter newer, more sophisticated, infrared guided missiles. Likewise, SOCOM has established an urgent need for a near-term directional infrared countermeasure system capable of countering currently deployed infrared guided missiles. To meet its urgent need, SOCOM plans to exercise its first production option for 15 DIRCM systems in July 1998 and procure 45 additional systems during fiscal years 1998 and 1999. The Army expects to begin ATIRCM production in April 2001.
Two generations of infrared missiles are currently deployed. First generation missiles can be defeated by current countermeasures, such as flares. Second generation infrared guided missiles are more difficult to defeat. More advanced infrared guided missiles are being developed that will have even greater capabilities against current countermeasures. To defeat infrared guided missiles, the ATIRCM and DIRCM systems will emit directed energy to decoy or jam the missile's seeker. Both systems are composed of a missile approach warning system, a computer processor, a power supply, and energy transmitters housed in a pointing turret. After a missile is detected, the computer is to rotate the turret and point the transmitters at the missile. The transmitters are to then emit the directed energy.
Congress and DOD have a long-standing interest in reducing proliferation of electronic warfare systems. By urging development of common systems, Congress expected to reduce the costly proliferation of duplicative systems and achieve cost savings in program development, production, and logistics. DOD agrees on the need for commonality, and its policy statements reflect congressional concerns about electronic warfare system proliferation. DOD policy states that prior to initiating a new acquisition program, the services must consider using or modifying an existing system or initiate a new joint-service development program. DOD policy also requires the services to consider commonality alternatives at various points in the acquisition process.
Joint electronic warfare programs and increased commonality among the services' systems results in economy of scale savings. Buying larger quantities for common use among the services usually results in lower procurement costs. Similarly, lower support costs result from a more simplified logistics system providing common repair parts, maintenance, test equipment, and training. For example, under Army leadership, a common radar warning receiver was acquired for helicopters and other special purpose aircraft of the Army, Marine Corps, and Air Force. In addition, a follow-on radar warning system for certain Army and Marine Corps special purpose aircraft and helicopters was jointly acquired with savings estimated by Army officials of $187.7 million attributable to commonality benefits.
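As an arithmetic aside (not from the report itself), the short sketch below recomputes the combined total and the implied average cost per system from the figures quoted in the opening paragraph above; the averages fold in the associated missile warning systems, so they are not unit flyaway prices.

```python
# Recompute the procurement totals quoted above (figures in millions of dollars).
atircm_total_m = 2_475   # ~$2.475 billion for 815 ATIRCM systems plus common missile warning systems
atircm_units = 815
dircm_total_m = 261      # ~$261 million for 60 DIRCM systems plus their unique missile warning system
dircm_units = 60

combined_b = (atircm_total_m + dircm_total_m) / 1_000
print(f"Combined total:         ${combined_b:.2f} billion")                      # ~$2.74 billion, as stated
print(f"Implied ATIRCM average: ${atircm_total_m / atircm_units:.2f} million")   # ~$3.04 million per system
print(f"Implied DIRCM average:  ${dircm_total_m / dircm_units:.2f} million")     # ~$4.35 million per system
```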
The ATIRCM and DIRCM systems will initially have one key difference in technological capability. The DIRCM system will rely on existing flash lamp technology to defeat all currently deployed first and second generation threat missiles. (A flash lamp emits a beam of light energy to confuse the missile's seeker.) The Army's ATIRCM system will also be fielded with a flash lamp but it will also have a laser. According to SOCOM officials, after the flash lamp-equipped DIRCM is fielded, they plan to upgrade the DIRCM system with a laser that has completed development and is already in production. As described later in this report, the upgraded DIRCM system could be available around the same time as the ATIRCM system. Furthermore, the DIRCM laser could be the same as the one used in ATIRCM, according to DOD officials. The Army's cost and effectiveness analysis used to justify the ATIRCM system indicates that with a laser upgrade, DIRCM could provide capability equal to the ATIRCM.
The two systems will have a total of three different size turrets. According to DOD and contractor officials, the size of the turret matters because larger aircraft present larger targets and must apply more energy to decoy an incoming missile's seeker. A larger turret can direct more of the flash lamp's energy. The larger the amount of directed energy, the greater the likelihood the missile will become confused as to the actual location of the target aircraft. The DIRCM turret, to be used on SOCOM C-130s, is the largest of the three. The United Kingdom intends to use the larger DIRCM turret on its larger aircraft and a smaller turret for its helicopters and smaller aircraft. The ATIRCM turret is between the two DIRCM turrets in size. Since the ATIRCM turret will also have a laser, however, DOD acquisition officials believe it will ultimately be more effective than any system equipped only with a flash lamp.
Both the DIRCM and ATIRCM programs are experiencing delays that have moved their projected availability dates significantly closer together. However, DOD has not yet taken advantage of the schedule changes to determine if one system will be more cost-effective than the other and if it can achieve significant savings by procuring only one system to protect all its aircraft. SOCOM plans to exercise the first of three production options and buy 15 DIRCM systems in July 1998. These systems will not be equipped with lasers. Production funds are projected to be included in the fiscal year 2001 budget for the DIRCM laser upgrade. Production of ATIRCM is to begin in April 2001. SOCOM officials maintain that because of their urgent need they cannot wait for the laser-equipped ATIRCM. However, the difference in the time frames for beginning production can be misleading. DIRCM is scheduled to go into production before operational testing begins, while the ATIRCM is not scheduled to begin production until operational testing is completed. If both DIRCM and ATIRCM production begin immediately after their respective operational tests, DIRCM's production is delayed until April 2000 and ATIRCM is moved up to January 2001. As a result, the systems will start production within 9 months of each other.
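The nine-month figure is simple date arithmetic on the two adjusted start dates; a minimal check, for illustration only:

```python
# Months between the adjusted production starts cited above:
# DIRCM April 2000 vs. ATIRCM January 2001.
dircm_year, dircm_month = 2000, 4
atircm_year, atircm_month = 2001, 1

gap = (atircm_year - dircm_year) * 12 + (atircm_month - dircm_month)
print(gap)  # 9 -- matching "within 9 months of each other"
```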
Additionally, DIRCM, with a laser upgrade, is projected to be available in 2001, about the same time as ATIRCM with a laser.
The Army is developing ATIRCM and the United Kingdom, with SOCOM, is developing DIRCM to work on a variety of aircraft, including some that are the same or similar. (See table 1.) For example, the United Kingdom plans to use the DIRCM system on the CH-47 Chinook helicopter while the Army plans to use ATIRCM on the Chinook. By varying the size of the turret, the United Kingdom intends to use DIRCM on aircraft of a wide range of sizes, from its very large, fixed-wing C-130s to small rotary wing aircraft such as the Lynx. Although the Army currently has no plans to install ATIRCM on fixed-wing aircraft the size of C-130s, it too will be placing its system on a wide range of aircraft, from the very large CH-47 heavy lift helicopter to the small OH-58D helicopter. If development of both systems is successful, therefore, the Army and the United Kingdom will prove that ATIRCM and DIRCM provide redundant capability for many aircraft.
In addition to those SOCOM and Army aircraft identified as platforms for DIRCM or ATIRCM, there are many potential Air Force, Navy, and Marine Corps aircraft that are not yet committed to either system. These include large fixed-wing aircraft of the Air Force, as well as 425 future Marine Corps V-22 aircraft and the Navy's SH-60 helicopters.
DOD's plans to acquire infrared countermeasure capability may not represent the most cost-effective approach. While we recognize SOCOM's urgent need for a countermeasure capability in the near term, we believe that DOD can satisfy this need and meet the Army's needs without procuring two separate systems. Specifically, proceeding with procurement of the first 15 DIRCM systems beginning in July 1998 appears warranted. However, continued production of DIRCM may not be the most cost-effective option for DOD since the Army is developing the ATIRCM system, which will have the same technology, be available at about the same time, and is being developed for the same or similar aircraft.
We, therefore, recommend that the Secretary of Defense (1) direct that the appropriate tests and analyses be conducted to determine whether DIRCM or ATIRCM will provide the most cost-effective means to protect U.S. aircraft and (2) procure that system for U.S. aircraft that have a requirement for similar Infrared Countermeasure capabilities. Until that decision can be made, we further recommend that the Secretary of Defense limit DIRCM system procurement to the first production option of 15 systems to allow a limited number for SOCOM's urgent deployment needs.
In written comments on a draft of this report, DOD concurred with our recommendation that the appropriate tests and analyses be conducted to determine whether ATIRCM or DIRCM will provide the most cost-effective protection for U.S. aircraft. According to DOD, the results of such analyses were completed in 1994 and 1995 and showed that both systems were the most cost-effective: DIRCM for large, fixed-wing C-130 aircraft and ATIRCM for smaller, rotary wing aircraft. However, as a result of events that have occurred in both programs since the analyses were conducted in 1994 and 1995, DOD's earlier conclusions as to cost-effectiveness are no longer necessarily valid and a new analysis needs to be conducted as we recommended.
For example, the 1994 cost- and operational effectiveness analysis conducted for SOCOM's C-130s concluded that DIRCM should be selected because it was to be available significantly sooner than ATIRCM. As our report states, the DIRCM schedule has slipped significantly, and by the time the planned laser upgrade for DIRCM is available, ATIRCM is also scheduled to be available. Furthermore, the 1994 analysis justifying DIRCM concluded that ATIRCM would be a less expensive option and did not conclude that DIRCM would be more effective than ATIRCM. Thus, the question of which system would be most cost-effective for SOCOM's C-130s is a legitimate issue that should be addressed by DOD in a new cost-effectiveness analysis before SOCOM commits fully to DIRCM.
In addition, the Army's 1995 cost- and operational effectiveness analysis justifying ATIRCM also concluded DIRCM could meet the Army's rotary wing requirement if DIRCM's effectiveness were to be improved by adding a laser. As our report notes, DOD now plans to acquire a laser as an upgrade for DIRCM. Thus, whether DIRCM or ATIRCM would be most cost-effective for the Army's rotary wing aircraft remains a legitimate and viable question that DOD should reconsider.
Further, in 1994 and 1995, when DOD conducted the prior cost-effectiveness analyses, effectiveness levels for DIRCM and ATIRCM had to be assumed from simulations because no operational test results were available at that time. Operational testing, including live missile shots against the DIRCM system, is scheduled to begin in the summer of 1998 and ATIRCM testing is scheduled for 1999. In the near future, then, DOD may be in a better position to know conclusively how effective DIRCM or ATIRCM will be and this should be taken into consideration in a new cost-effectiveness analysis.
DOD did not concur with a recommendation in a draft of this report that one system be procured for all U.S. aircraft, arguing that one system cannot meet all aircraft requirements. We have clarified our recommendation by eliminating the word "all". Our intent was to focus this recommendation on U.S. aircraft having a requirement for advanced infrared countermeasure protection, such as that to be provided by DIRCM or ATIRCM. For those aircraft that have an advanced infrared countermeasure requirement, we reiterate that the United Kingdom plans to use the DIRCM system on a wide variety of fixed- and rotary wing aircraft of many shapes and sizes, and the Army plans to use ATIRCM on a wide variety of rotary wing aircraft, as well as the fixed-wing CV-22. Thus, DOD should reconsider whether DIRCM or ATIRCM could provide the advanced infrared countermeasure protection necessary to meet the multiple U.S. aircraft requirements.
In commenting further on its belief that one system cannot meet all U.S. aircraft requirements, DOD also stated that (1) the SOCOM DIRCM is too heavy for Army helicopters, (2) ATIRCM's smaller turret drive motors are not designed for the increased wind in SOCOM C-130 applications, and (3) ATIRCM will not emit enough Band I and II jamming energy to protect SOCOM's C-130s. We agree that the SOCOM DIRCM is too heavy for Army helicopters, but point out that the DIRCM contractor is designing a smaller DIRCM turret for the United Kingdom's helicopters that would not be too heavy for the Army's helicopters. DOD has never planned for DIRCM or ATIRCM to be the only means of protection for its aircraft from infrared guided missiles. Other systems are available to DOD to help protect against threat missiles, including those in Bands I and II, and these alternatives should be considered for use in conjunction with DIRCM or ATIRCM as DOD tries to determine how to protect its aircraft in the most cost-effective manner.
DOD also did not concur with our recommendation that it limit initial DIRCM production to the first 15 units to begin filling its urgent need and to provide units to be used for testing and analysis before committing SOCOM's entire fleet of 59 C-130s to the DIRCM program. DOD maintained that SOCOM's remaining C-130s would remain vulnerable to missile threats such as the one that shot down a SOCOM AC-130 during Operation Desert Storm if any production decisions were delayed. We continue to believe that the additional analysis needs to be conducted before any DIRCM production decisions beyond the first one are made. More than 7 years have passed since the unfortunate loss of the SOCOM AC-130 and its crew in 1991. During that time, DOD delayed the first DIRCM production decision several times. The resolution of the technical problems causing these schedule slips can only be known through successful testing, and implementation of our recommendation would allow units to be produced for testing. Finally, we agree with DOD that SOCOM's need is urgent and believe that the best way to begin fulfilling the urgent need while determining whether DIRCM or ATIRCM is the more cost-effective system for C-130s is to limit DIRCM production to only the first 15 systems.
To develop information for this report, we compared and examined the Army's and SOCOM's respective plans and proposed schedules for acquiring the ATIRCM and DIRCM systems. We obtained acquisition and testing plans and the proposed schedule for acquiring and fielding the systems. We compared these plans to legislative and DOD acquisition guidance and to the results of past DOD procurements. We discussed the programs with officials of the ATIRCM Project Office, St. Louis, Missouri, and the DIRCM Project Office, Tampa, Florida. Also, we visited with Lockheed-Sanders, the ATIRCM contractor, and Northrop-Grumman, the DIRCM contractor, and discussed their respective programs. We conducted our review from August 1996 to December 1997 in accordance with generally accepted government auditing standards.
As you know, 31 U.S.C. 720 requires the head of a federal agency to submit a written statement on actions taken on our recommendations to the Senate Committee on Governmental Affairs and the House Committee on Government Reform and Oversight not later than 60 days after the date of the report. A written statement must also be submitted to the Senate and House Committees on Appropriations with an agency's first request for appropriations made more than 60 days after the date of the report.
We are sending copies of this report to appropriate congressional committees, the Under Secretary of Defense for Acquisition and Technology, the Secretary of the Army, the Director of the Office of Management and Budget, and the Commander of the U.S. Special Operations Command. We will also make copies available to others on request. Please contact me at (202) 512-4841 if you or your staff have any questions concerning this report. Major contributors to this report were Danny Owens, Wendy Smythe, Charles Ward, and Mark Lambert.
| GAO reviewed the Army's Advanced Threat Infrared Countermeasure system (ATIRCM) and the U.S. Special Operations Command's (SOCOM) Directional Infrared Countermeasure (DIRCM) system to determine whether the Department of Defense (DOD) is justified in acquiring both systems. GAO noted that: (1) DOD may be able to achieve sizable savings by procuring, supporting, and maintaining only one active infrared countermeasure system to protect its aircraft from infrared guided missiles; (2) despite congressional emphasis on, and DOD's stated commitment to, commonality, SOCOM and the Army are acquiring two separate countermeasure systems that eventually will have the same laser effect technology; (3) DOD should determine which system is more cost-effective and procure that one to protect its aircraft; (4) if DIRCM is determined to be more cost-effective, the ATIRCM program should be terminated; and (5) if ATIRCM is determined to be more cost-effective, no additional DIRCM systems should be procured beyond those planned to be procured in July 1998 to meet SOCOM's urgent need. |
This report analyzes recent laws that relate to the regulation of guns in the District of Columbia (DC or District), and congressional proposals that would further amend these laws. The four main statutes or bills at issue are (1) federal provisions under the National Firearms Act of 1934 and the Gun Control Act of 1968; (2) the D.C. Firearms Control Regulation Act of 1976, as in effect prior to the Supreme Court's decision in District of Columbia v. Heller; (3) the proposed Second Amendment Enforcement Act introduced in February 2011 (H.R. 645); and (4) the District's legislation that permanently amends its gun laws—the Firearms Control Amendment Act of 2008 (FCAA), and the Inoperable Pistol Amendment Act of 2008 (IPAA).
Congressional proposals to address the District's firearms laws often arise when the issue of voting rights for the District is before Congress; thus, it is worth noting another congressional proposal from the 111th Congress to amend the District's gun laws, Title II of S. 160, which was the District of Columbia House Voting Rights Act of 2009. While Title II of S. 160 of the 111th Congress and H.R. 645 from the 112th Congress are substantially similar, this report will point out the differences where appropriate.
This report begins with an overview of the introduction of these bills and their status today. It proceeds to analyze current DC law after the passage of the FCAA and the IPAA, and the effect that the congressional proposals would have on the District's firearms laws. In doing so, the report traces the congressional proposals section by section.
Much of the congressional activity on DC firearms laws occurred after the Supreme Court issued its decision in District of Columbia v. Heller. In Heller, the Supreme Court held, by a vote of 5-4, that the Second Amendment protects an individual's right to possess a firearm, unconnected with service in a militia, and the use of such arm for traditionally lawful purposes, such as self-defense within the home.
The decision in Heller affirmed the lower court's decision that declared unconstitutional three provisions of the District's Firearms Control Regulation Act: (1) DC Code § 7-2502.02(a)(4), which generally barred the registration of handguns and thus effectively prohibited the possession of handguns in the District; (2) DC Code § 22-4504(a), which prohibited carrying a pistol without a license, insofar as that provision would prevent a registrant from moving a gun from one room to another within his or her home; and (3) DC Code § 7-2507.02, which required that all lawfully owned firearms be kept unloaded and disassembled or bound by a trigger lock or similar device. However, the Supreme Court's opinion did not address the District's license to carry requirement, making note of Heller's concession that such a requirement would be permissible if enforced in a manner that is not arbitrary and capricious.
After the Supreme Court issued its decision, the DC Council enacted emergency legislation to temporarily amend the city's gun laws to comply with the ruling in Heller while considering permanent legislation. The DC Council enacted the Firearms Control Emergency Amendment Act of 2008, the first of several emergency enactments, and this attempt was met with criticism, as some felt that the changes did not comply with the decision in Heller.
At the same time, perhaps in reaction to the Court's decision or the District's first attempt to temporarily amend its gun laws, H.R. 6691, the Second Amendment Enforcement Act, was introduced in the 110th Congress by Representative Travis Childers. The proposal appeared to overturn or loosen provisions of the District's existing gun laws (i.e., the DC Code as it was prior to any of the city's emergency regulations). The content of H.R. 6691 was subsequently adopted in the nature of a substitute into H.R. 6842, which was passed in the House of Representatives by a vote of 266-152. The Senate did not pass H.R. 6842, and the bill did not become law.
In the 111th Congress, Senator John Ensign had introduced S.Amdt. 575 to S. 160, the District of Columbia Voting Rights Act of 2009. This amendment, which also used the language of H.R. 6842 (110th Congress), was approved by the Senate on February 26, 2009, and became Title II of S. 160 (hereinafter Title II-S. 160). Although S. 160 was passed in the Senate by a vote of 61-37, it was later reported that movement on this legislation had stalled.
As the House passed H.R. 6842 (110th Congress) in September 2008, the DC Council continued to enact emergency legislation until permanent legislation could become effective. Language contained in the emergency acts later was encompassed in the permanent legislation. In 2009, the Firearms Control Amendment Act of 2008 (FCAA) and the Inoperable Pistol Amendment Act of 2008 (IPAA) were passed by the DC Council and transmitted to Congress for the requisite 60 days before becoming effective, respectively, on March 31, 2009, and May 20, 2009.
Overall, the FCAA and IPAA not only amended firearms provisions of the DC Code that were at issue in Heller, but also provided a different range of restrictions on the regulation of firearms and firearm ownership. It is worth noting that the District's new firearms amendments under the FCAA and IPAA were challenged and upheld in the United States District Court for the District of Columbia on March 26, 2010.
As discussed above, the language of Title II-S. 160 had been adopted from a bill (H.R. 6842) introduced in the 110th Congress, which originated prior to the enactment of the two new DC acts. The most recent congressional legislation, H.R. 645, though it also seeks to overturn or loosen many of the District's gun provisions, takes into consideration the passage of these two new acts, the FCAA and IPAA. Sections 3-8 of H.R. 645 would amend firearms provisions in the DC Code in substantially the same manner as Title II-S. 160, by limiting the District's ability to promulgate rules regulating firearm possession, and repealing the District's registration scheme, among other things. Sections 9-13 would preserve certain provisions of IPAA, while Section 14 would repeal other provisions of the IPAA and all of the FCAA.
In general, federal firearms laws establish the minimum standards in the United States for firearms regulations. The states, territories, and the District of Columbia may choose to supplement the federal statutes—the National Firearms Act of 1934 (NFA) and the Gun Control Act of 1968 (GCA)—with their own more restrictive firearms laws in a manner that does not run counter to the Supreme Court's decision in District of Columbia v. Heller.
Under the District of Columbia Self-Government and Governmental Reorganization Act (the Home Rule Act), the District generally has authority to promulgate its own laws pursuant to the act's procedures. For instance, the Home Rule Act provides that "the legislative power of the District shall extend to all rightful subjects of legislation within the District ..." More specifically, the Home Rule Act authorizes the DC Council "to make … all such usual and reasonable police regulations … as the Council may deem necessary for the regulation of firearms." Since much of the District of Columbia's law that existed prior to home rule consisted of congressional enactments, this power has often been used by the District of Columbia to amend laws passed by Congress.
Congress nonetheless retains the ability to legislate for the District, as well as to impose limits on the legislative authority of the District of Columbia government. In the Home Rule Act, Congress specifically reserved for itself "the right, at any time, to exercise its constitutional authority as legislature for the District by enacting legislation for the District on any subject, whether within or without the scope of legislative power granted to the Council ... including legislation to amend or repeal any law in force in the District prior to or after enactment of this chapter and any act passed by the Council."
Because the District legislates within delegated congressional authority under the Home Rule Act, the question of whether the District of Columbia can amend or repeal a particular congressional enactment would appear to depend upon whether Congress, either expressly or by inference, intended that such congressional act not be amended by the District. For instance, Section 3 of H.R. 645, like Title II-S. 160, would explicitly provide a limit upon the District of Columbia's authority to legislate in this area:
Nothing in this section or any other provision of law shall authorize, or shall be construed to permit the Council, the Mayor, or any governmental or regulatory authority of the District of Columbia to prohibit, constructively prohibit, or unduly burden the ability of persons not prohibited from possessing firearms under Federal law from acquiring, possessing in their homes or businesses, transporting for legitimate purposes, or using for sporting, self-protection or other lawful purposes, any firearm neither prohibited by Federal law nor subject to the National Firearms Act. The District of Columbia shall not have the authority to enact laws or regulations that discourage or eliminate the private ownership or use of firearms (emphasis added).
It is worth noting that the phrase—"transporting for legitimate purposes"—is included in H.R. 645, presumably to address the transportation requirements that it would adopt from the IPAA. This phrase does not otherwise affect the analysis of this section's language under H.R. 645.
The proposed language emphasizes that the Council would not be empowered to promulgate laws relating to firearms regulation either by virtue of the authority granted under the DC Code or any other provision of law that could otherwise be interpreted as granting similar police power. It is unclear, however, what would constitute "a constructive prohibition or undue burden" on the ability of individuals to acquire firearms. The language would also appear to prevent the District from barring firearms possession by any persons not prohibited from possessing a firearm under current federal law and, moreover, appears to prevent the District from prohibiting the possession of any firearm that was not already prohibited or regulated under federal law.
In other words, with the exception of carrying, discussed below, it appears that District firearms laws would be substantially the same as federal firearms laws because the District would be limited in its ability to create its own stricter provisions beyond that of the NFA and GCA. Furthermore, the language does not make clear what elements would render a law or regulation in violation of the proscription against discouraging or eliminating the private ownership or use of firearms. In addition, while the proposed language would not directly revoke the District's general authority to enact and enforce sanctions for the criminal misuse of firearms, it appears that the scope of this authority would be limited as well.
The last part of Section 3 would not "prohibit the District of Columbia from regulating the carrying of firearms by a person, either concealed or openly, other than at the person's dwelling place, place of business, or on other land possessed by the person" (emphasis added). Under this phrase, it seems clear that the District could regulate concealed or open carry, but it would not be explicitly empowered to prohibit individuals from carrying firearms altogether. Because "regulating the carrying of firearms" could encompass an outright prohibition on such activity, the District could still argue that it would be able to prohibit open or concealed carry altogether (with the stated exceptions), notwithstanding the congressional provision; however, an opposing argument could be made that "regulating the carrying of firearms" does not give the District authority to have a ban on the open or concealed carriage of firearms. This last sentence of H.R. 645 differs from Title II-S. 160, which stated that nothing "shall be construed to prohibit the District ... from regulating or prohibiting the carrying of firearms" (emphasis added).
Prior to Heller, the DC Code's definition of "machine gun" included "any firearm, which shoots, is designed to shoot or can be readily converted to shoot ... semiautomatically, more than 12 shots without manual reloading." By virtue of this broad definition, any semiautomatic weapon that could shoot more than 12 shots without manual reloading, whether pistol, rifle, or shotgun, was deemed a "machine gun," and prohibited from being registered. It appears that under the District's old definition, registration of a pistol was largely limited to revolvers. Under the NFA, "machine gun" is defined as
any weapon which shoots, is designed to shoot, or can be readily restored to shoot, automatically more than one shot, without manual reloading, by a single function of the trigger. The term shall also include the frame or receiver of any such weapon, any part designed and intended solely and exclusively, or combination of parts designed and intended, for use in converting a weapon into a machinegun, and any combination of parts from which a machinegun can be assembled if such parts are in the possession of or under the control of a person.
In the FCAA, the District amended its definition of "machine gun" to conform with the federal definition, above. By doing so, semiautomatic firearms are generally no longer prohibited from being registered. However, the District has also chosen to mirror other state laws, like California, and has enacted a list of prohibited firearms. (See "Assault Weapons/Handgun Roster," below.)
If H.R. 645 were enacted, the definition of "machine gun" would be restored to its pre-Heller state because the bill would undo any changes made by the FCAA. However, Section 4 of H.R. 645 would essentially continue the definition of "machine gun" in conformity with the federal definition above. Although the DC Code has a scheme for registering firearms, the pre-Heller provisions prohibited registration of sawed-off shotguns, machine guns, short barreled rifles, or pistols not validly registered prior to September 24, 1976. Together, the pre-Heller definition of "machine gun" and the ban on registering pistols post-1976 also acted as a virtual prohibition on handguns, which the Supreme Court declared unconstitutional in Heller. Pursuant to the FCAA, the District now allows the registration of pistols for self-defense, and because "machine gun" conforms to the federal definition, semiautomatic handguns may be registered so long as the applicant meets other requirements. Furthermore, the FCAA includes an exemption from the registration requirement for a person who temporarily possesses a firearm registered to another while in the home of the registrant, provided the temporary possessor is not barred from possessing a firearm and the person reasonably believes that possession is necessary for self-defense in that home. The FCAA makes several amendments to the provisions that set forth the qualification and information requirements for the registration of a firearm. For example, a person who has been convicted, within five years prior to applying for a registration certificate, of an intrafamily offense, or of two or more violations of the District's or any other jurisdiction's law restricting driving under the influence of alcohol or drugs, is prohibited from registering. Similarly, applicants who, within five years of applying, (1) have a history of violent behavior; (2) have been a respondent in an intrafamily proceeding in which a civil protection order was issued against him or her; or (3) have been a respondent in a proceeding in which a foreign protection order was issued against him or her, are prohibited from registering a firearm. The FCAA also requires applicants to complete a firearms training or safety course and provide an affidavit signed by the certified firearms instructor, in addition to expanding the firearms competency test. Additionally, the Chief of Police (Chief) is required to have any registered pistol submitted for a ballistics identification procedure; further, the Chief is barred from registering more than one pistol per registrant during any 30-day period, except for new residents, who are able to register more than one pistol if such pistols have been lawfully owned in another jurisdiction for six months prior to the application. The District's existing registration scheme is all-encompassing, as the registration of a firearm also serves to license firearms owners and acts as a permit to purchase. Though H.R. 645 would continue to prohibit the possession of sawed-off shotguns, short barreled rifles, and machine guns, it would repeal all sections pertaining to the registration requirement. Thus, DC residents would no longer be required to have a registration certificate for a firearm, either to possess one or as a prerequisite to purchasing one, and there would be no provision for licensing of gun owners. H.R. 645 would also make other conforming amendments to eliminate all registration language.
It is worth noting that with the repeal of the FCAA provisions under H.R. 645, it appears that the Chief would no longer be required to have any pistol submitted for ballistics testing, nor would the Chief be required to limit registration of pistols to one per month. In other words, there would be no restriction on how many handguns an individual would be able to purchase per month. H.R. 645 would amend DC Code § 7-2505.02, which sets forth permissible sales and transfers of both ammunition and firearms. Currently, under DC law, a licensed dealer may sell or transfer ammunition only to "any nonresident person or business licensed under the acts of Congress," "any other licensed dealer," or "any law enforcement officer." A provision under Section 5 of H.R. 645 would allow the transfer of ammunition, excluding restricted pistol bullets, "to any person," which would include DC residents. In addition to eliminating any ammunition certificate language, this section would also eliminate the requirement that a licensed dealer keep track of ammunition received into or sold from his or her inventory. Under 18 U.S.C. § 922(b)(3), a firearms dealer is generally prohibited from selling handguns to out-of-state persons, and must conduct such transactions by transferring the handgun to another firearms dealer in the state where the purchaser resides. Both H.R. 645 and Title II of S. 160 would permit the interstate purchase of firearms, but they do so in different ways. Title II of S. 160 would have amended the federal statute to carve out an exception allowing federal licensees whose places of business are located in Maryland or Virginia to sell and deliver handguns to residents of the District of Columbia. H.R. 645, however, would place the amendment governing interstate firearms transfers in the DC Code itself. Thereafter, under the DC Code, a federally licensed importer, manufacturer, or dealer of firearms in Maryland or Virginia would be treated as a dealer licensed under DC law. Thus, notwithstanding 18 U.S.C. § 922(b)(3), Maryland and Virginia firearms dealers would be permitted to sell handguns to District residents if "the transferee meets in person with the transferor to accomplish the transfer, and the sale, delivery, and receipt fully comply with the legal conditions in both the District of Columbia and the jurisdiction in which the transfer occurs." The GCA requires that licensed dealers sell or deliver handguns with a secure gun storage or safety device, but there is no federal requirement on how firearms should be stored or whether trigger locks must be used. The District's trigger lock requirement, which was declared unconstitutional by the Supreme Court, went further than federal law and required any firearm in the possession of a registrant, even if within the home, to be "unloaded and disassembled or bound by a trigger lock or similar device" unless the firearm was kept at the owner's place of business or was being used for lawful recreational purposes within the District. Under the FCAA, the District amended the provisions of the trigger lock requirement so that it would be the policy of the District that any firearm in one's lawful possession be unloaded and either disassembled or secured by trigger lock.
The FCAA prohibits a person from storing or keeping any loaded firearm on any premises under his control if "he knows or reasonably should know that a minor is likely to gain access to the firearm without the permission of the parent or guardian of the minor" unless he or she "keeps the firearms in a securely locked box ... container ... or in a location which a reasonable person would believe to be secure" or "carries the firearm on his person or within such close proximity that he can readily retrieve and use it as if he carried it on his person." The FCAA further provides that a person in violation of these firearm storage responsibilities can be found guilty of criminally negligent storage of a firearm or subject to other criminal penalties. Title II of S. 160 would have repealed this section of the FCAA. By contrast, Section 7 of H.R. 645, similar to existing DC law, would create penalties for allowing minors access to loaded firearms if injury results. Under this section of H.R. 645, a person would be guilty of unlawful storage if the person knowingly stores or leaves a loaded firearm at any premises under the person's control; the person knows or reasonably should know that a minor is likely to gain access to the firearm without permission of the minor's parent or legal guardian; and the minor kills or injures any person (including the minor) by discharging the firearm. Any person who violates this section would be subject to a fine not to exceed $1,000 and/or a term of imprisonment not to exceed one year. However, there would be several exceptions. Penalties would not apply if (1) the firearm was stored in a securely locked container and the person did not inform the minor of the location of the key to, or the combination of, the container's lock; (2) the firearm was secured by a trigger lock and the person did not inform the minor of the location of the key to, or the combination of, the trigger lock; (3) the firearm was stored on the person's body or in such proximity that it could be used as quickly as if it were on the person's body; (4) the minor's access to the firearm was a result of unlawful entry; (5) the minor was acting in self-defense; (6) the minor was engaged in hunting or target shooting under the supervision of a parent or an adult over the age of 18; or (7) the firearm was in the possession or control of a law enforcement officer while the officer was engaged in official duties. If the victim of a shooting under this section is the child of the person who committed the violation, "no prosecution shall be brought ... unless the person who committed the violation behaved in a grossly negligent manner, or unless similarly egregious circumstances exist." Currently, under DC law, a general violation of the registration scheme, including the maintenance of an unregistered firearm in a dwelling place, place of business, or on other land possessed by the owner of a firearm, warrants a fine of not more than $1,000 or not more than one year's imprisonment, or both. A person who is convicted a second time for unregistered possession of a firearm in such areas shall be fined not more than $5,000 or imprisoned not more than five years, or both. As a conforming amendment to repealing the registration scheme, H.R. 645 would amend the DC Code to remove this provision. It is worth noting that Title II of S. 160 would have further removed the criminal penalties for the intentional sale or transfer of a firearm or destructive device to a person under the age of 18.
When the District amended its firearms laws, it also amended several definitions, such as "machine gun" (discussed above), "sawed off shotgun," and "firearm." The FCAA and IPAA are presumably meant to complement each other so that amended definitions or newly created terms are consistent in both Titles 7 and 22 of the DC Code. Section 9 of H.R. 645 would continue the harmonization of definitions between Titles 7 and 22 for certain definitions. These include the terms "firearm," "machine gun," "pistol," "place of business," "sawed off shotgun," and "shotgun." The IPAA amended DC law to permit the District of Columbia to prohibit or restrict the possession of firearms on its property or any property under its control. It also allows private persons or entities who own property in the District to prohibit or restrict possession of firearms on their property, with the exception of law enforcement personnel when they are lawfully authorized to enter. Section 10 of H.R. 645 also addresses property owners restricting firearms on their premises. Under the first part of this section, "[p]rivate persons or entities owning property in the District of Columbia may prohibit or restrict the possession of firearms on their property by any persons, other than law enforcement personnel when lawfully authorized to enter onto the property or lessees occupying residential or business premises" (emphasis added). This provision is unlike existing DC law because it would bar private landlords of business or residential premises from restricting their tenants from possessing firearms on such premises. The second part of Section 10 relates to the District's authority to restrict or prohibit the possession of firearms on public property. Specifically, the District would be able to prohibit or restrict the possession of firearms within any building or structure under its control, or in any area of such building or structure, which has implemented security measures (including but not limited to guard posts, metal detection devices, x-ray or other scanning devices, or card-based or biometric access devices) to identify and exclude unauthorized or hazardous persons or articles, except that no such prohibition or restriction may apply to lessees occupying residential or business premises. This proposed language is arguably narrower in application in that it would apply to "buildings or structures under its control," whereas current law gives the District authority over "property under its control." Under the proposed language, it is not clear if the District could regulate firearms on real property under its control other than buildings and structures. Furthermore, while it is explicit that the District would not be able to exercise the granted authority upon lessees that occupy buildings under the District's control, it is unclear over what kinds of buildings the District would be able to exercise this authority. Would the District be able to regulate the possession of firearms in any building that is under its control but that does not necessarily have the requisite security measures, or would it be limited to regulating firearm possession only in buildings and structures that have security measures? Should the phrase—", or in any other area of such building or structure, which has implemented security measures ..."—be read as disjunctive from the preceding phrase?
Alternatively, could the phrase be read to relate back to describe the buildings or structures under DC's control, thereby narrowing the range of areas that would fall under this provision? Under the IPAA, the District repealed the Chief's authority to issue licenses to carry a concealed firearm. This provision would be repealed upon enactment of H.R. 645, thus once again permitting the Chief to issue licenses for concealed carry within his or her discretion. H.R. 645 would continue a provision from the IPAA that explicitly prohibits a person from carrying a rifle or a shotgun within the District of Columbia, except as otherwise permitted by law. The exceptions for where a rifle or shotgun may be carried are discussed below. Congress passed a provision that regulates a qualified current or retired law enforcement officer's ability to carry a concealed firearm. Beyond this, states may impose their own laws on carrying firearms. As amended by the IPAA, the District currently permits persons who hold a valid registration for a firearm (handgun, rifle, or shotgun) to carry it (1) within the registrant's home; (2) while it is being used for lawful recreational purposes; (3) while it is kept at the registrant's place of business; or (4) while it is being transported for a lawful purpose in accordance with the law. Because the Chief's authority to issue licenses to carry appears to be revoked under the IPAA, these four circumstances currently seem to be the only scenarios under which a person may possess and carry firearms. H.R. 645 would readopt, with slight modifications, the IPAA provision granting persons authority to carry their firearms in certain places for certain purposes without a license to carry. H.R. 645 would allow a person to carry a firearm, whether loaded or unloaded, without needing to obtain a license to carry: in the person's dwelling house or place of business or on other land owned by the person; by invitation on land owned or lawfully possessed by another; while it is being used for lawful recreational, sporting, education, or training purposes; or while it is being transported for lawful purposes as expressly authorized by District or federal law and in accordance with the requirements of that law. As noted above, because the Chief's authority to issue licenses to carry concealed firearms would be restored if H.R. 645 were enacted, it is likely that a firearm owner could obtain a concealed carry license and carry his or her firearm outside these four circumstances. H.R. 645 would continue provisions similar to those already enacted by the IPAA pertaining to the lawful transportation of firearms. Thus, it would remain the case that a person who is not otherwise prohibited from transporting, shipping, or receiving a firearm would be permitted to transport a firearm for any lawful purpose from any place he may lawfully possess the firearm to any other place where he may lawfully possess the firearm, if the firearm is transported in accordance with this section. If the transportation is by vehicle, the firearm shall be unloaded, and neither the firearm nor any ammunition being transported may be readily accessible or directly accessible from the passenger compartment of the transporting vehicle. Also, if the firearm is not being transported by vehicle, the firearm must be "unloaded, inside a locked container, and separate from any ammunition."
The IPAA made a technical change to the District Code by including toy and antique pistols as types of firearms that are prohibited from being used to commit a violent or dangerous crime, and violators are subject to certain criminal penalties. Section 12 of H.R. 645 would continue this technical change. H.R. 645 would provide jurisdiction to the Office of Administrative Hearings to hear cases pertaining to the denial or revocation of firearms dealer licenses. The FCAA had provided such authority to the Office of Administrative Hearings, except that it went further to grant the office jurisdiction over the denial or revocation of a firearm registration certificate. However, since the registration scheme would be repealed under H.R. 645, it would likely be unnecessary to give the office such jurisdiction. The provisions discussed next are all amendments made to the DC Code pursuant to the enactment of the FCAA and IPAA. These provisions would no longer exist under the congressional proposal because Section 14 of H.R. 645 would repeal the two acts, "and any provision of law amended or repealed by either of such Acts [would be] restored or revived as if such Acts had not been enacted into law." The DC Code's provisions that govern who may qualify to apply for a dealer's license, who is eligible to sell and transfer firearms to a dealer, and to whom a dealer can sell are dependent upon one's ability to obtain a registration certificate. Thus, anyone who wishes to obtain a dealer's license, or engage in purchasing or transferring a firearm, must meet the new requirements created by the FCAA (discussed in "Registration") to obtain a registration certificate. Because H.R. 645 would repeal the District's registration scheme, it would allow any person who is not prohibited from possessing or receiving a firearm under federal or District law to qualify to apply for a dealer's license, to sell or transfer ammunition or any firearm to a licensed dealer, or to make such purchases from a licensed dealer of firearms. The federal prohibitions are discussed in the next section. Furthermore, as noted in the discussion on "Ammunition Sales and Registration," duties such as reporting the loss, theft, or destruction of any firearms or ammunition in the dealer's inventory would be repealed. The federal GCA lists nine categories of persons who are prohibited from possessing, shipping, or receiving firearms. They are (1) persons who have been convicted of a crime punishable by imprisonment exceeding one year; (2) persons who are fugitives; (3) persons who are users of or addicted to any controlled substances; (4) persons who have been adjudicated as a mental defective or who have been committed to a mental institution; (5) persons who are unlawfully in the United States or admitted under a nonimmigrant visa; (6) persons who have been dishonorably discharged from the Armed Forces; (7) persons who have renounced U.S. citizenship; (8) persons who are on notice of or are subject to a court order restraining them from harassing, stalking, or threatening an intimate partner; and (9) persons who have been convicted in any court of a misdemeanor crime of domestic violence. Among other federal regulations, it is unlawful for both licensed dealers and non-licensed persons to sell or transfer a firearm to another person if the transferor knows or has reasonable cause to believe that the purchaser falls within one of the nine categories above.
Because the FCAA imposes new eligibility requirements before an applicant can be approved for a registration certificate (see "Registration"), it follows that a non-licensed person or licensed dealer wishing to transfer firearms must meet not only what is required by federal law but also the additional eligibility requirements under the FCAA, since anyone wishing to transfer firearms must be eligible to register a firearm under DC law. Under DC law, a non-licensed person may sell or transfer a firearm or ammunition only to a licensed dealer. In other words, a private sale between two non-licensed people must take place through a licensed dealer. This would remain unchanged under H.R. 645. Under H.R. 645, it would still be unlawful for licensed dealers to make transfers to those prohibited from receiving or possessing a firearm under federal or DC law, but because the bill would essentially remove any registration requirement from the DC Code, it appears that the only disqualifications that would prohibit a transfer are those listed under federal law. Another amendment to DC law that would be affected by Section 14 of H.R. 645 is the assault weapons ban created by the FCAA. The FCAA created a new definition of "assault weapon" that includes a list of specific rifles, shotguns, and pistols and their variations, regardless of the manufacturer. It also includes semiautomatic rifles, pistols, and shotguns based on the presence of a single military-type characteristic. The definition of "assault weapon" also includes any shotgun with a revolving cylinder, except that it does not apply to "a weapon with an attached tubular device designed to accept, and capable of operating only with, .22 caliber rimfire ammunition." Currently, the Chief also has the power to designate as an assault weapon any firearm that he or she believes would reasonably pose the same threat as those weapons enumerated in the definition. The definition of assault weapon does not include antique firearms or certain pistols sanctioned for Olympic target shooting. The FCAA also makes this new definition of "assault weapon" applicable in the Assault Weapon Manufacturing Strict Liability Act of 1990. Thus, any manufacturer, importer, or dealer of a weapon deemed an "assault weapon" pursuant to this new definition can be held strictly liable in tort for all direct and consequential damage arising from bodily injury or death if either proximately results from the discharge of the assault weapon in the District of Columbia. These particular changes made by the FCAA would be repealed by Section 14 of H.R. 645. The FCAA prohibits any person in the District from possessing, selling, or transferring any large capacity ammunition feeding device. The meaning of "large capacity ammunition feeding device" includes a "magazine, belt, drum, feed strip or similar device that has a capacity of, or that can be readily restored or converted to accept, more than 10 rounds of ammunition." However, the term does not include "an attached tubular device designed to accept, and capable of operating only with, .22 caliber rimfire ammunition." Thus, even though residents of DC are now allowed to register and possess semiautomatic firearms (see "DC Semiautomatic Ban"), the FCAA prevents them from possessing large capacity ammunition feeding devices, which some semiautomatic firearms are capable of accepting. This provision would be repealed by Section 14 of H.R. 645.
Under the Brady Handgun Violence Prevention Act, which amended the GCA to establish the National Instant Criminal Background Check System (NICS), a licensed dealer is generally prohibited from transferring a firearm to any other non-licensed person without running a background check by contacting NICS. The licensee may transfer the firearm if the system provides the licensee with a unique identification number, or if three business days have elapsed with no response from the system and the licensee has verified the identity of the transferee by examining valid identification documents that contain a photograph of the transferee. Generally, once the background check has been completed and the transferee approved, the licensee may transfer the firearm unless a state imposes a waiting period. Non-licensed persons are not required to perform a background check under federal law. The DC Code imposed a waiting period of 48 hours before a seller within the District could deliver a pistol or handgun. Under the IPAA, however, the waiting period for the transfer of a "firearm" is now 10 days. "Firearm," as amended by the IPAA, means "any weapon regardless of operability, which will, or is designed or redesigned, made or remade, readily converted, restored or repaired or is intended to expel a projectile or projectiles by the action of an explosive" (emphasis added). Thus, the IPAA makes the new waiting period apply to all firearms, not just pistols. Title II of S. 160 would not have changed the waiting period, which would have remained applicable to the transfer of pistols, not shotguns or rifles. It should also be noted that if the IPAA were repealed under H.R. 645, the waiting period to obtain a handgun would revert to 48 hours. The FCAA added new provisions with regard to microstamping. The DC Code had already prohibited the sale of a firearm that does not have embedded in it an identification or serial number unique to the manufacturer or dealer of the firearm. The FCAA adds a new provision requiring that, beginning January 1, 2011, "no licensee shall sell or offer for sale any semiautomatic pistol manufactured on or after January 1, 2011, that is not microstamp-ready as required by and in accordance with sec. 503." The FCAA creates two new sections, 503 and 504. New Section 503 sets forth requirements that determine if a semiautomatic pistol is microstamp-ready, and it also contains provisions that require manufacturers to provide the Chief with the make, model, and serial number of the semiautomatic pistol when presented with a code from a cartridge that was recovered as part of a legitimate law enforcement investigation. New Section 504 prohibits a pistol that is not on the California Roster of Handguns Certified for Sale (California Roster) from being manufactured, sold, given, loaned, exposed for sale, transferred, or imported into the District of Columbia as of January 1, 2009. Such a pistol is prohibited from being owned or possessed unless it was lawfully owned and registered prior to January 1, 2009. Furthermore, if a resident of DC lawfully owns a pistol not on the California Roster, that individual can sell or transfer ownership only through a licensed firearms dealer; and a licensed dealer who had such a pistol in its inventory prior to January 1, 2009, can transfer it only to another licensed firearms dealer.
The FCAA also requires the Chief to review the California Roster at least annually for any additions or deletions, and the Chief is authorized to revise, by rule, the roster of handguns determined not to be unsafe. Under the IPAA, the District also makes unlawful the discharge of a firearm without a special written permit from the Chief, except as permitted by law, which includes legitimate self-defense. It further allows the District to prohibit or restrict the possession of firearms on its property and any property under its control, and similarly allows private persons owning property in the District to prohibit or restrict the possession of firearms on their property, except where law enforcement personnel are concerned. These two new provisions would also be repealed if H.R. 645 became law. | In the wake of the Supreme Court's decision in District of Columbia v. Heller, which declared three firearms provisions of the DC Code unconstitutional, a flurry of legislation was introduced both in Congress and in the District of Columbia Council. In the 110th Congress, the House of Representatives passed H.R. 6842, the Second Amendment Enforcement Act. In the 111th Congress, similar provisions were incorporated as an amendment to the District of Columbia Voting Rights Act of 2009 (S. 160), which was passed by the Senate. Later, separate measures, which also would have overturned or loosened many of the District's gun provisions, were introduced in both the House of Representatives (H.R. 5162) and the Senate (S. 3265). Meanwhile, the District Council passed its own legislation that made permanent amendments to DC's firearms control regulations. The two bills from the District are the Firearms Control Amendment Act of 2008 and the Inoperable Pistol Amendment Act of 2008, which amended the DC Code in an effort to comply with the ruling in Heller as well as provide a different range of restrictions on firearm possession. In the 112th Congress, Representative Mike Ross introduced H.R. 645, "To restore Second Amendment rights in the District of Columbia." This measure is identical to H.R. 5162 from the previous Congress. This report provides an analysis of the District's firearms laws and congressional proposals. |
Senate Rule XXVI spells out specific requirements for Senate committee procedures. In addition, each Senate committee is required to adopt rules that govern its organization and operation. Those committee rules then elaborate, within Senate rules, how the committee will handle questions of order and procedure. A committee's rules may "not be inconsistent with the Rules of the Senate." Committees may add to the basic rules, but they may not add anything that is in conflict with Senate rules. Examining the rules for each committee can show how each approaches issues of comity and fairness in the conduct of its business. The rules also serve to illustrate how each committee handles the division of power and the allocation of responsibility within its membership. Several committees, for example, require that if the committee is conducting business with a quorum that is less than a majority of its members, a member from the minority party must be present. When issuing subpoenas or starting investigations, committees may take different approaches to how authority is given to the chair of the committee while still allowing the ranking minority member a role in the process. Some committees require the agreement of the ranking minority member; others require that he or she be notified before the subpoena is issued. The requirement that each committee must adopt its own set of rules dates back to the 1970 Legislative Reorganization Act (P.L. 91-510). That law built on the 1946 Legislative Reorganization Act (P.L. 79-601), which created a framework for most Senate committees by setting out basic requirements to which they must adhere. Under the provisions of the 1970 law, Senate committees must adopt their rules and have them printed in the Congressional Record not later than March 1 of the first year of a Congress. Typically, the Senate also publishes a compilation of the rules of all the committees each Congress, and some individual committees also publish their rules as a committee print. Although committee rules govern the actions of Senators in committee proceedings, there is no means for the Senate to enforce rules on committee conduct so long as the requirement that a majority be physically present for reporting a measure or matter is met. There also is no means for the Senate to enforce committee rules that go beyond those set out in the Senate's standing rules. So, for example, if a committee's rules contain a provision requiring that a member of the minority party be present for a quorum, but the committee acts without regard to that provision, the minority could register its disapproval of the committee's actions, but there is no point of order that could be raised on the Senate floor. It should be noted that other factors may come into play when studying a committee's procedural profile. Along with the formal rules of the Senate and the individual rules for each committee, many committees have traditions or precedents they follow in practice that can affect their procedures. One committee, for example, does not allow Senators to offer second degree amendments during committee markups. This restriction is not contained in either the Senate's rules or the committee's rules; it is a tradition. It is a tradition, however, that the committee follows closely. This report analyzes the different approaches Senate committees have taken with their rules, focusing on additions to the overall Senate committee rules structure or unique provisions.
A committee's rules can be extensive and detailed or general and short. The tables that conclude this report compare key features of the rules by committee. The tables, however, represent only a portion of each committee's rules. Provisions of the rules that are substantially similar to or that are essentially restatements of the Senate's standing rules are not included. This report will review the requirements contained in Senate rules for committees, and then explore how each Senate committee handles 11 specific procedural issues: meeting day, hearing and meeting notice requirements, scheduling of witnesses, hearing quorum, business quorum, amendment requirements, proxy voting, polling, nominations, investigations, and subpoenas. Also, the report looks at unique provisions some committees have included in their rules in a "miscellaneous" category. Although there is some latitude for committees to set their own rules, the standing rules of the Senate set out the specific requirements that each committee must follow. The following provisions are taken from Rule XXVI of the Standing Rules of the Senate. Some committees reiterate these rules in their own rules, but even for those committees that do not, these restrictions apply. This is not an exhaustive explanation of Senate rules and their impact on committees; rather, this summary is intended to provide a background against which to understand each committee's individual rules. Rules. Each committee must adopt rules; those rules must be published in the Congressional Record not later than March 1 of the first year of each Congress. If a committee adopts an amendment to its rules, that change only becomes effective when it is published in the Record. (Rule XXVI, paragraph 2). Meetings. Committees and subcommittees are authorized to meet and to hold hearings when the Senate is in session and when it has recessed or adjourned. A committee may not meet on any day (1) after the Senate has been in session for two hours, or (2) after 2:00 p.m. when the Senate is in session. Each committee must designate a regular day on which to meet weekly, biweekly, or monthly (this requirement does not apply to the Appropriations Committee). A committee is to announce the date, place, and subject of each hearing at least one week in advance, though any committee may waive this requirement for "good cause." (Rule XXVI, paragraph 5(a); Rule XXVI, paragraph 3). Special meeting. Three members of a committee may make a written request to the chair to call a special meeting. The chair then has three calendar days in which to schedule the meeting, which is to take place within the next seven calendar days. If the chair fails to do so, a majority of the committee members can file a written motion to hold the meeting at a certain date and hour. (Rule XXVI, paragraph 3). Open meetings. Unless closed for reasons specified in Senate rules, such as a need to protect national security information, committee and subcommittee meetings, including hearings, are open to the public. When a committee or subcommittee schedules or cancels a meeting, it is required to provide that information, including the time, place, and purpose of the meeting, for inclusion in the Senate's computerized schedule information system. Any hearing that is open to the public also may be open to radio and television broadcasting, at the committee's discretion. Committees and subcommittees may adopt rules to govern how the media may broadcast the event.
A vote by the committee in open session is required to close a meeting. (Rule XXVI, paragraph 5(b)). Quorums. Committees may set a quorum for doing business that is not less than one-third of the membership. A majority of a committee must be physically present when the committee votes to order the reporting of any measure, matter, or recommendation. The motion to order the reporting of a measure or matter requires the support of a majority of the members who are present and, in turn, the members who are physically present must constitute a majority of the committee. Proxies cannot be used to constitute a quorum. (Rule XXVI, paragraph 7(a)(1)). Proxy voting. A committee may adopt rules permitting proxy voting. A committee may not permit a proxy vote to be cast unless the absent Senator has been notified about the question to be decided and has requested that his or her vote be cast by proxy. A committee may prohibit the use of proxy votes on votes to report. (Rule XXVI, paragraph 7(a)(3)). Investigations and subpoenas. Each standing committee, including its subcommittees, is empowered to investigate matters within its jurisdiction and to issue subpoenas for persons and papers. (Rule XXVI, paragraph 1). Witnesses selected by the minority. During hearings on any measure or matter, the minority shall be allowed to select witnesses to testify on at least one day, when the chair receives such a request from a majority of the minority party members. This provision does not apply to the Appropriations Committee. (Rule XXVI, paragraph 4(d)). Reporting. Senate committees may report original bills and resolutions, in addition to those that have been referred to the panel. As stated in the quorum requirement, a majority of the committee must be physically present for a measure or matter to be reported. Also, a majority of those present is required to order a measure or matter reported. A Senate committee is not required to issue a written report to accompany a measure or matter it reports; if the committee does write such a report, Senate rules specify a series of required elements that must be included in the report. (Rule XXVI, paragraph 7(a)(3); Rule XXVI, paragraph 10(c)). In their rules for the 111th Congress, no Senate committee uses either Monday or Friday for its regular meeting day, and the committees are relatively evenly spread over the remaining three days: six committees chose Tuesdays, seven selected Wednesdays, and six picked Thursdays as their regular meeting days (see Table 1). The Armed Services Committee chose both Tuesday and Thursday. Two committees, Appropriations and Select Aging, meet at the call of the chair. Within those categories, some committees, including Armed Services, Foreign Relations, Indian Affairs, and Judiciary, provide for meeting at least once a week. The other committees set their meetings at once or twice a month. Committees must, according to Senate rules, provide one week's notice of their hearings and business meetings. The rule, however, allows shorter notice if "the committee determines there is good cause" to hold a hearing or meeting with less notice. When it comes to the determination of what "good cause" is, Senate committees allocate the task of making that decision differently (see Table 1). The rules of the Armed Services Committee, for example, say it is the decision of the committee as a whole.
Three committees (Agriculture, Nutrition, and Forestry; Banking, Housing, and Urban Affairs; and Finance) give the chair of the panel the authority to schedule a hearing or meeting with less than a week's notice. Six committees require some type of cooperation between the chair and ranking member of the committee to meet with less than a week's notice. Four of those committees (Budget; Environment and Public Works; Judiciary; and Special Aging) require the chair to obtain the agreement of the ranking member to make the decision to hold a hearing or meeting with less than usual notice. The Energy and Natural Resources Committee gives the responsibility to the chair and the committee together, while the Foreign Relations Committee chair must consult with the ranking minority member on the committee. Several committees go beyond Senate requirements in their rules regarding scheduling of witnesses, giving greater opportunity to the minority to include witnesses of their choosing during a hearing (see Table 1). The Finance Committee calls on its staff to ensure there is a "balance of views" early on in a hearing, and allows each member of the committee to designate individuals to testify. The Foreign Relations Committee minority may request an equal number of witnesses as the majority, and the Small Business and Entrepreneurship Committee allows for an equal number of witnesses for the majority and minority unless there is to be just one administration witness. Similarly, if the Senate is evenly divided, the Budget Committee provides for equal numbers of witnesses for the majority and minority, with the same exception for a single administration witness. The Ethics and Select Intelligence committees' rules have provisions according an opportunity for an individual to testify before the committee if that person believes his or her reputation is at issue or if his or her name came up in previous testimony. For receiving testimony at hearings, most Senate committees reduce their quorum requirements to one or sometimes two Senators. One panel, the Armed Services Committee, requires that a member of the minority be present, unless the full committee stipulates otherwise. The "conduct of business" at a committee meeting typically refers to actions, such as debating and voting on amendments, that allow the committee to proceed on measures up to the point of reporting the measure to the full Senate. For the conduct of business, the requirement that a member of the minority be present is a common feature of committee quorum rules. In order to report out a measure, Senate rules require that a majority of the committee be physically present. A dozen committees feature some kind of minority attendance requirement for the conduct of business during a committee business meeting (see Table 2). The Environment and Public Works Committee's business quorum requires two members of the minority and one-third of the committee in total. The Homeland Security and Governmental Affairs and Small Business and Entrepreneurship committees require the presence of one member of the minority, as do the Veterans' Affairs and Special Aging committees. The Veterans' Affairs Committee rules also contain a provision designed to make sure that the lack of a minority member cannot indefinitely delay action on a measure or matter. The Finance Committee requires one member from the majority and one member of the minority for its business quorum, as do the Agriculture, Nutrition, and Forestry; Foreign Relations; and Ethics committees.
The Health, Education, Labor, and Pensions Committee requires that any business quorum that is less than a majority of the committee include a member of the minority. The Armed Services Committee sets a business quorum at nine members, which must include a member of the minority party, but the committee may bypass the minority representation requirement if a simple majority of the committee is present. The Judiciary Committee also specifies a quorum of eight, with two members of the minority present. The Indian Affairs Committee has a rule stating that a quorum is presumed to be present unless the absence of a quorum is noted by a Member. Several committees require that Senators file any first degree amendments they may offer during a committee markup before the committee meets (see Table 2). This provision allows the chair and ranking member of the committee to see what kind of issues may come up at the markup, and also may allow them the opportunity to try to negotiate agreements with amendment sponsors before the formal markup session begins. It also provides an opportunity for Members to draft second degree amendments to possible first degree amendments before the markup begins. The Banking, Housing, and Urban Affairs and Small Business and Entrepreneurship committees call for submitting such amendments two business days before the markup, if sufficient notice of the markup has been given. The Appropriations; Environment and Public Works; Health, Education, Labor, and Pensions; Homeland Security and Governmental Affairs; and Veterans' Affairs committees require 24 hours' notice of first degree amendments. The Judiciary Committee requires that first degree amendments be filed with the committee by 5 p.m. of the day before the markup. All of these committees allow the full committee to waive this filing requirement and, in some cases, it is waived automatically if Senators were not given sufficient notice of the markup. All Senate committees except Special Aging permit some form of proxy voting, under which a Senator does not have to be physically present to record his or her position on a measure or matter before the committee (see Table 3). The Armed Services; Foreign Relations; Homeland Security and Governmental Affairs; Select Intelligence; Veterans' Affairs; and Ethics committees require that proxies be executed in writing. The Small Business and Entrepreneurship Committee requires that the responsibility for voting the proxy be assigned to a Senator or staffer who is present at the markup. The Commerce, Science, and Transportation; Environment and Public Works; Judiciary; and Small Business and Entrepreneurship committees allow several other methods of transmitting a Senator's proxy intentions, including telephone or personal instructions to another Member of the committee. Proxies cannot be used in any committee to count toward a quorum for reporting a measure or matter. The Budget Committee prohibits proxy voting during its annual markup of the budget resolution, and the Ethics Committee does not permit a Senator to vote by proxy on a motion to initiate an investigation. Polling is a method of taking a "vote" of the committee on a matter without the committee physically coming together. As such, it cannot be used to report out measures or matters (that would violate Senate rules, which require that a majority be physically present to report a measure or matter).
Polling can be used, however, for internal housekeeping matters before the committee, such as questions concerning staffing or perhaps how the committee ought to proceed on a measure or matter (see Table 3). Five committees have general provisions for polling in their rules: Agriculture, Nutrition, and Forestry; Budget; Health, Education, Labor, and Pensions; Homeland Security and Governmental Affairs; and Special Aging. Of those, all the committees except the Health, Education, Labor, and Pensions Committee allow a member to request that the matter being polled be formally voted on by the committee at the next business meeting. The Health, Education, Labor, and Pensions Committee only permits polling if there is unanimous consent from the committee to do so. Many committees set out timetables in their rules for action on presidential nominations, and most committees also contain provisions allowing the timetables to be waived (see Table 3). The Banking, Housing, and Urban Affairs; Health, Education, Labor, and Pensions; and Veterans' Affairs committees require a five-day layover between receipt of the nomination and committee action on it. The Foreign Relations Committee requires a six-day delay, the Armed Services Committee a seven-day delay, and the Intelligence Committee a fourteen-day waiting period before action on a nomination. In addition, the Intelligence panel's rules require that the committee not act until seven days after the committee receives background and financial information on the nominee. The Agriculture, Nutrition, and Forestry; Banking, Housing, and Urban Affairs; Budget; Homeland Security and Governmental Affairs; and Small Business committees require that nominees testify before their committees under oath. The Energy and Natural Resources; Indian Affairs; and Veterans' Affairs committees have provisions requiring the nominee and, if requested, anyone testifying at a nomination hearing to testify under oath. The Finance Committee allows any member to request that the testimony from witnesses be taken under oath. Several committees require advance permission for staff or a Senator to launch an investigation (see Table 4). The Select Intelligence Committee, for example, prohibits an investigation unless five committee members request it. The Banking, Housing, and Urban Affairs Committee requires that either the full Senate, the full committee, or the chair and ranking member jointly authorize an investigation before it may begin. The Select Aging Committee authorizes its staff to initiate an investigation with the approval of the chair and ranking minority member and requires that all investigations be conducted on a bipartisan basis. The Energy and Natural Resources Committee requires that the full committee authorize any formal investigation. The Agriculture, Nutrition, and Forestry Committee requires full committee approval for any investigation involving subpoenas and depositions, and the Health, Education, Labor, and Pensions Committee requires majority approval for any investigation involving a subpoena. Five Senate committees do not have specific rules that set out how the panel will decide to issue subpoenas (see Table 4). The lack of a subpoena provision does not mean the committees cannot issue subpoenas, just that the process for doing so is not specified in the committee's written rules. Of the committees that do have rules on subpoenas, one, the Special Committee on Aging, grants the authority to issue a subpoena to the chair alone.
Nine other committees (Agriculture, Nutrition, and Forestry; Banking; Commerce; Energy and Natural Resources; Finance; Homeland Security and Governmental Affairs; Indian Affairs; Small Business and Entrepreneurship; and Veterans' Affairs) require that the chair seek the agreement, approval, concurrence, or consent of the ranking member before issuing a subpoena. In all instances, however, the chair also may gain approval for a subpoena from a majority of the committee. Three committees—Foreign Relations; Health, Education, Labor, and Pensions; and Select Intelligence—give the decision as to whether to issue a subpoena to the full committee as a whole, while Ethics allows the chair and ranking minority member acting jointly, or a majority of the committee, to approve a subpoena. It is not clear how the Members would communicate their support to the chair, either by polling or through a committee vote. Some committees have unique provisions that are not included in other committee rules. The Budget Committee's rules limit the size and number of charts a Senator can display during debate on a subject. The Commerce, Science, and Transportation Committee permits broadcasting of its proceedings only upon agreement by the chair and ranking member. The chair and ranking member of the Rules Committee are authorized to approve any rule or regulation that the committee must approve, and the Small Business and Entrepreneurship Committee allows any member to administer the oath to any witness testifying "as to fact." Both the Finance and the Judiciary committees allow the chair to call a vote on whether to end debate on a pending measure or matter. This ability to end debate on a measure or matter does not appear in any other committee's rules and may allow these committees to move controversial measures through their panels. The Foreign Relations Committee includes in its rules a provision stating that, as much as possible, the committee should not "resort" to formal parliamentary procedure. That would seem to suggest a committee where Senators attempt to resolve controversial issues before the committee markup, rather than relying on parliamentary tools to push legislation or nominations through. Both the Veterans' Affairs and the Environment and Public Works committees are charged with naming certain federal facilities, so their rules provide guidance on how those names may be chosen. The rules of the Banking, Housing, and Urban Affairs Committee require that any measure seeking to award the Congressional Gold Medal have 67 cosponsors to be considered. The Select Intelligence Committee gives direction to its staff director to ensure that covert programs are reviewed at least once per quarter. The Appropriations Committee rules empower any member of the committee who is managing an appropriations bill on the floor to make points of order against amendments being offered that would seem to violate Senate rules. The Armed Services Committee's rules reach out to the executive branch and call on the committee to obtain an executive branch response to any measure referred to the committee. The Homeland Security and Governmental Affairs Committee requires that any report on a measure also include an evaluation of the regulatory impact of the measure. The Select Committee on Aging requires that investigative reports containing findings or recommendations be published only with the approval of a majority of committee members.
The Indian Affairs Committee urges its Members to disclose their finances in the same way it requires of nominees to presidentially appointed positions. The Energy and Natural Resources Committee appears to allow any Member to place a measure or matter on the committee's agenda if the Member does so at least one week in advance of the business meeting at which it will be considered. The Judiciary Committee allows any member to delay consideration of any item on its agenda for one week. The Select Committee on Ethics also allows any member of the committee to postpone discussion of a pending matter until a majority of the committee is present. | Senate Rule XXVI spells out specific requirements for Senate committee procedures. In addition, each Senate committee is required to adopt rules that govern its organization and operation. Those committee rules then elaborate, within Senate rules, how the committee will handle its business. Rules adopted by a committee may "not be inconsistent with the Rules of the Senate" (Senate Rule XXVI, paragraph 2). Committees may add to the basic rules, but they may not add anything that is in conflict with Senate rules. This report first provides a brief overview of Senate rules as they pertain to committees. The report then compares the different approaches Senate committees have taken when adopting their rules. A committee's rules can be extensive and detailed or general and short. The tables that conclude this report compare selected, key features of the rules by committee. The tables, however, represent only a portion of each committee's rules. Provisions of the rules that are substantially similar to, or that are essentially restatements of, the Senate's standing rules are not included. This report will review the requirements contained in Senate rules pertaining to committees; it will then explore how each Senate committee addresses 11 specific issues: meeting day, hearing and meeting notice requirements, scheduling of witnesses, hearing quorum, business quorum, amendment filing requirements, proxy voting, polling, nominations, investigations, and subpoenas. In addition, the report looks at the unique provisions some committees have included in their rules in the miscellaneous category. This report will be updated during the first session of each Congress after all Senate committees have printed their rules in the Congressional Record. |
Even in my most religious moments, I have never been able to take the idea of hell seriously. Prevailing Christian theology asks us to believe that an all-powerful, all-knowing being would do what no human parent could ever do: create tens of billions of flawed and fragile creatures, pluck out a few favourites to shower in transcendent love, and send the rest to an eternity of unrelenting torment. That story has always seemed like an intellectual relic to me, a holdover from barbarism, or worse, a myth meant to coerce belief. But stripped of the religious particulars, I can see the appeal of hell as an instrument of justice, a way of righting wrongs beyond the grave. Especially in unusual circumstances.
Take the case of Adolf Hitler. On the afternoon of 29 April 1945, Hitler was stashed deep in his Berlin bunker, watching his Third Reich collapse, when he received word that Benito Mussolini was dead. Hitler was aghast at the news, not because he’d lost yet another ally, but because of the way Mussolini had died. The Italian dictator had been trying to slink into Switzerland when he was caught, shot, and dragged to a public square in Milan, where a furious mob kicked and spat on his body, before hanging it upside down on a meat hook.
Worried that he might meet a similar fate, Hitler decided to test the strength of his cyanide capsules by feeding a few of them to his dog, Blondi. By midafternoon on the following day, 30 April, the Red Army was rampaging through Berlin, and the Führer's empire had shrunk to a small island of land in the city centre. Rather than fight to the end and risk capture, Hitler bit into one of his cyanide pills and fired a bullet into his head for good measure. When the Soviets reached the bunker two days later, his body had been burned and his ashes buried in a shallow bomb crater just above ground.
It is hard to avoid the conclusion that Hitler got off easy, given the scope and viciousness of his crimes. We might have moved beyond the Code of Hammurabi and ‘an eye for an eye’, but most of us still feel that a killer of millions deserves something sterner than a quick and painless suicide. But does anyone ever deserve hell?
That used to be a question for theologians, but in the age of human enhancement, a new set of thinkers is taking it up. As biotech companies pour billions into life extension technologies, some have suggested that our cruelest criminals could be kept alive indefinitely, to serve sentences spanning millennia or longer. Even without life extension, private prison firms could one day develop drugs that make time pass more slowly, so that an inmate's 10-year sentence feels like an eternity. One way or another, humans could soon be in a position to create an artificial hell.
At the University of Oxford, a team of scholars led by the philosopher Rebecca Roache has begun thinking about the ways futuristic technologies might transform punishment. In January, I spoke with Roache and her colleagues Anders Sandberg and Hannah Maslen about emotional enhancement, ‘supercrimes’, and the ethics of eternal damnation. What follows is a condensed and edited transcript of our conversation.
Suppose we develop the ability to radically expand the human lifespan, so that people are regularly living for more than 500 years. Would that allow judges to fit punishments to crimes more precisely?
Roache: When I began researching this topic, I was thinking a lot about Daniel Pelka, a four-year-old boy who was starved and beaten to death [in 2012] by his mother and stepfather here in the UK. I had wondered whether the best way to achieve justice in cases like that was to delay the offenders' death for as long as possible. Some crimes are so bad they require a really long period of punishment, and a lot of people seem to get out of that punishment by dying. And so I thought, why not make prison sentences for particularly odious criminals worse by extending their lives?
But I soon realised it’s not that simple. In the US, for instance, the vast majority of people on death row appeal to have their sentences reduced to life imprisonment. That suggests that a quick stint in prison followed by death is seen as a worse fate than a long prison sentence. And so, if you extend the life of a prisoner to give them a longer sentence, you might end up giving them a more lenient punishment.
The life-extension scenario may sound futuristic, but if you look closely you can already see it in action, as people begin to live longer lives than before. If you look at the enormous prison population in the US, you find an astronomical number of elderly prisoners, including quite a few with pacemakers. When I went digging around in medical journals, I found all these interesting papers about the treatment of pacemaker patients in prison.
Suppose prisons become more humane in the future, so that they resemble Norwegian prisons instead of those you see in America or North Korea. Is it possible that correctional facilities could become truly correctional in the age of long lifespans, by taking a more sustained approach to rehabilitation?
Roache: If people could live for centuries or millennia, you would obviously have more time to reform them, but you would also run into a tricky philosophical issue having to do with personal identity. A lot of philosophers who have written about personal identity wonder whether identity can be sustained over an extremely long lifespan. Even if your body makes it to 1,000 years, the thinking goes, that body is actually inhabited by a succession of persons over time rather than a single continuous person. And so, if you put someone in prison for a crime they committed at 40, they might, strictly speaking, be an entirely different person at 940. And that means you are effectively punishing one person for a crime committed by someone else. Most of us would think that unjust.
Let’s say that life expansion therapies become a normal part of the human condition, so that it’s not just elites who have access to them, it’s everyone. At what point would it become unethical to withhold these therapies from prisoners?
Roache: In that situation it would probably be inappropriate to view them as an enhancement, or something extra. If these therapies were truly universal, it’s more likely that people would come to think of them as life-saving technologies. And if you withheld them from prisoners in that scenario, you would effectively be denying them medical treatment, and today we consider that inhumane. My personal suspicion is that once life extension becomes more or less universal, people will begin to see it as a positive right, like health care in most industrialised nations today. Indeed, it’s interesting to note that in the US, prisoners sometimes receive better health care than uninsured people. You have to wonder about the incentives a system like that creates.
Where is that threshold of universality, where access to something becomes a positive right? Do we have an empirical example of it?
Roache: One interesting case might be internet access. In Finland, for instance, access to communication technology is considered a human right and handwritten letters are not sufficient to satisfy it. Finnish prisons are required to give inmates access to computers, although their internet activity is closely monitored. This is an interesting development because, for years, limiting access to computers was a common condition of probation in hacking cases – and that meant all kinds of computers, including ATMs [cash points]. In the 1980s, that lifestyle might have been possible, and you could also see pulling it off in the ’90s, though it would have been very difficult. But today computers are ubiquitous, and a normal life seems impossible without them; you can’t even access the subway without interacting with a computer of some sort.
In the late 1990s, an American hacker named Kevin Mitnick was denied all access to communication technology after law enforcement officials [in California] claimed he could ‘start a nuclear war by whistling into a pay phone’. But in the end, he got the ruling overturned by arguing that it prevented him from living a normal life.
What about life expansion that meddles with a person’s perception of time? Take someone convicted of a heinous crime, like the torture and murder of a child. Would it be unethical to tinker with the brain so that this person experiences a 1,000-year jail sentence in his or her mind?
Roache: There are a number of psychoactive drugs that distort people’s sense of time, so you could imagine developing a pill or a liquid that made someone feel like they were serving a 1,000-year sentence. Of course, there is a widely held view that any amount of tinkering with a person’s brain is unacceptably invasive. But you might not need to interfere with the brain directly. There is a long history of using the prison environment itself to affect prisoners’ subjective experience. During the Spanish Civil War [in the 1930s] there was actually a prison where modern art was used to make the environment aesthetically unpleasant. Also, prison cells themselves have been designed to make them more claustrophobic, and some prison beds are specifically made to be uncomfortable.
I haven’t found any specific cases of time dilation being used in prisons, but time distortion is a technique that is sometimes used in interrogation, where people are exposed to constant light, or unusual light fluctuations, so that they can’t tell what time of day it is. But in that case it’s not being used as a punishment, per se, it’s being used to break people’s sense of reality so that they become more dependent on the interrogator, and more pliable as a result. In that sense, a time-slowing pill would be a pretty radical innovation in the history of penal technology.
I want to ask you a question that has some crossover with theological debates about hell. Suppose we eventually learn to put off death indefinitely, and that we extend this treatment to prisoners. Is there any crime that would justify eternal imprisonment? Take Hitler as a test case. Say the Soviets had gotten to the bunker before he killed himself, and say capital punishment was out of the question – would we have put him behind bars forever?
Roache: It’s tough to say. If you start out with the premise that a punishment should be proportional to the crime, it’s difficult to think of a crime that could justify eternal imprisonment. You could imagine giving Hitler one term of life imprisonment for every person killed in the Second World War. That would make for quite a long sentence, but it would still be finite. The endangerment of mankind as a whole might qualify as a sufficiently serious crime to warrant it. As you know, a great deal of the research we do here at the Oxford Martin School concerns existential risk. Suppose there was some physics experiment that stood a decent chance of generating a black hole that could destroy the planet and all future generations. If someone deliberately set up an experiment like that, I could see that being the kind of supercrime that would justify an eternal sentence.
In your forthcoming paper on this subject, you mention the possibility that convicts with a neurologically stunted capacity for empathy might one day be ‘emotionally enhanced’, and that the remorse felt by these newly empathetic criminals could be the toughest form of punishment around. Do you think a full moral reckoning with an awful crime is the most potent form of suffering an individual can endure?
Roache: I’m not sure. Obviously, it’s an empirical question as to which feels worse, genuine remorse or time in prison. There is certainly reason to take the claim seriously. For instance, in literature and folk wisdom, you often hear people saying things like, ‘The worst thing is I’ll have to live with myself.’ My own intuition is that for very serious crimes, genuine remorse could be subjectively worse than a prison sentence. But I doubt that’s the case for less serious crimes, where remorse isn’t even necessarily appropriate – like if you are wailing and beating yourself up for stealing a candy bar or something like that.
I remember watching a movie in school, about a teen who killed another teen in a drunk-driving accident. As one of the conditions of his probation, the judge in the case required him to mail a daily cheque for 25 cents to the parents of the teen he’d killed for a period of 10 years. Two years in, the teen was begging the judge to throw him in jail, just to avoid the daily reminder.
Roache: That’s an interesting case where prison is actually an escape from remorse, which is strange because one of the justifications for prison is that it’s supposed to focus your mind on what you have done wrong. Presumably, every day you wake up in prison, you ask yourself why you are there, right?
What if these emotional enhancements proved too effective? Suppose they are so powerful, they turn psychopaths into Zen masters who live in a constant state of deep, reflective contentment. Should that trouble us? Is mental suffering a necessary component of imprisonment?
Roache: There is a long-standing philosophical question as to how bad the prison experience should be. Retributivists, those who think the point of prisons is to punish, tend to think that it should be quite unpleasant, whereas consequentialists tend to be more concerned with a prison’s reformative effects, and its larger social costs. There are a number of prisons that offer prisoners constructive activities to participate in, including sports leagues, art classes, and even yoga. That practice seems to reflect the view that confinement, or the deprivation of liberty, is itself enough of a punishment. Of course, even for consequentialists, there has to be some level of suffering involved in punishment, because consequentialists are very concerned about deterrence.
I wanted to close by moving beyond imprisonment, to ask you about the future of punishment more broadly. Are there any alternative punishments that technology might enable, and that you can see on the horizon now? What surprising things might we see down the line?
Roache: We have been thinking a lot about surveillance and punishment lately. Already, we see governments using ankle bracelets to track people in various ways, and many of them are fairly elaborate. For instance, some of these devices allow you to commute to work, but they also give you a curfew and keep a close eye on your location. You can imagine this being refined further, so that your ankle bracelet bans you from entering establishments that sell alcohol. This could be used to punish people who happen to like going to pubs, or it could be used to reform severe alcoholics. Either way, technologies of this sort seem to be edging up to a level of behaviour control that makes some people uneasy, due to questions about personal autonomy.
It’s one thing to lose your personal liberty as a result of being confined in a prison, but you are still allowed to believe whatever you want while you are in there. In the UK, for instance, you cannot withhold religious manuscripts from a prisoner unless you have a very good reason. These concerns about autonomy become particularly potent when you start talking about brain implants that could potentially control behaviour directly. The classic example is Robert G Heath [a psychiatrist at Tulane University in New Orleans], who did this famously creepy experiment [in the 1950s] using electrodes in the brain in an attempt to modify behaviour in people who were prone to violent psychosis. The electrodes were ostensibly being used to treat the patients, but he was also, rather gleefully, trying to move them in a socially approved direction. You can really see that in his infamous [1972] paper on ‘curing’ homosexuals. I think most Western societies would say ‘no thanks’ to that kind of punishment.
To me, these questions about technology are interesting because they force us to rethink the truisms we currently hold about punishment. When we ask ourselves whether it’s inhumane to inflict a certain technology on someone, we have to make sure it’s not just the unfamiliarity that spooks us. And more importantly, we have to ask ourselves whether punishments like imprisonment are only considered humane because they are familiar, because we’ve all grown up in a world where imprisonment is what happens to people who commit crimes. Is it really OK to lock someone up for the best part of the only life they will ever have, or might it be more humane to tinker with their brains and set them free? When we ask that question, the goal isn’t simply to imagine a bunch of futuristic punishments – the goal is to look at today’s punishments through the lens of the future. ||||| Whole brain emulation (WBE), mind upload or brain upload (sometimes called "mind copying" or "mind transfer") is the hypothetical futuristic process of scanning the mental state (including long-term memory and "self") of a particular brain substrate and copying it to a computer. The computer could then run a simulation model of the brain's information processing, such that it responds in essentially the same way as the original brain (i.e., indistinguishable from the brain for all relevant purposes) and experiences having a conscious mind.[1][2][3]
Mind uploading may potentially be accomplished by either of two methods: Copy-and-transfer or gradual replacement of neurons. In the case of the former method, mind uploading would be achieved by scanning and mapping the salient features of a biological brain, and then by copying, transferring, and storing that information state into a computer system or another computational device. The biological brain may not survive the copying process. The simulated mind could be within a virtual reality or simulated world, supported by an anatomic 3D body simulation model. Alternatively the simulated mind could reside in a computer that is inside (or connected to) a (not necessarily humanoid) robot or a biological body.[4]
Among some futurists and within the transhumanist movement, mind uploading is treated as an important proposed life extension technology. Some believe mind uploading is humanity's current best option for preserving the identity of the species, as opposed to cryonics. Another aim of mind uploading is to provide a permanent backup to our "mind-file", to enable interstellar space travel, and to give human culture a means of surviving a global disaster by making a functional copy of a human society in a Matrioshka brain, i.e. a computing device that consumes the entire energy output of a star. Whole brain emulation is discussed by some futurists as a "logical endpoint"[4] of the computational neuroscience and neuroinformatics fields, both of which concern brain simulation for medical research purposes. It is discussed in artificial intelligence research publications as an approach to strong AI. Computer-based intelligence such as an upload could think much faster than a biological human even if it were no more intelligent. A large-scale society of uploads might, according to futurists, give rise to a technological singularity, meaning a sudden decrease in the time constant of technology's exponential development.[5] Mind uploading is a central conceptual feature of numerous science fiction novels and films.
Substantial mainstream research in related areas is being conducted in animal brain mapping and simulation, development of faster supercomputers, virtual reality, brain–computer interfaces, connectomics and information extraction from dynamically functioning brains.[6] According to supporters, many of the tools and ideas needed to achieve mind uploading already exist or are currently under active development; they acknowledge, however, that others remain highly speculative, though still within the realm of engineering possibility.
Overview
Neuron anatomical model
The human brain contains, on average, about 86 billion nerve cells called neurons, each individually linked to other neurons by way of connectors called axons and dendrites. Signals at the junctures (synapses) of these connections are transmitted by the release and detection of chemicals known as neurotransmitters. The established neuroscientific consensus is that the human mind is largely an emergent property of the information processing of this neural network.
Neuroscientists have stated that important functions performed by the mind, such as learning, memory, and consciousness, are due to purely physical and electrochemical processes in the brain and are governed by applicable laws. For example, Christof Koch and Giulio Tononi wrote in IEEE Spectrum:
Consciousness is part of the natural world. It depends, we believe, only on mathematics and logic and on the imperfectly known laws of physics, chemistry, and biology; it does not arise from some magical or otherworldly quality.[7]
The concept of mind uploading is based on this mechanistic view of the mind, and denies the vitalist view of human life and consciousness.
Eminent computer scientists and neuroscientists have predicted that specially programmed computers will be capable of thought and even attain consciousness, including Koch and Tononi,[7] Douglas Hofstadter,[8] Jeff Hawkins,[8] Marvin Minsky,[9] Randal A. Koene, and Rodolfo Llinás.[10]
However, even though uploading is dependent upon such a general capability, it is conceptually distinct from general forms of AI in that it results from dynamic reanimation of information derived from a specific human mind so that the mind retains a sense of historical identity (other forms are possible but would compromise or eliminate the life-extension feature generally associated with uploading). The transferred and reanimated information would become a form of artificial intelligence, sometimes called an infomorph or "noömorph".
Many theorists have presented models of the brain and have established a range of estimates of the amount of computing power needed for partial and complete simulations.[4] Using these models, some have estimated that uploading may become possible within decades if trends such as Moore's law continue.[11]
Theoretical benefits and applications
"Immortality" or backup [ edit ]
In theory, if the information and processes of the mind can be disassociated from the biological body, they are no longer tied to the individual limits and lifespan of that body. Furthermore, information within a brain could be partly or wholly copied or transferred to one or more other substrates (including digital storage or another brain), thereby – from a purely mechanistic perspective – reducing or eliminating "mortality risk" of such information. This general proposal was discussed in 1971 by biogerontologist George M. Martin of the University of Washington.[12]
Space exploration
An “uploaded astronaut” could be used instead of a "live" astronaut in human spaceflight, avoiding the perils of zero gravity, the vacuum of space, and cosmic radiation to the human body. It would allow for the use of smaller spacecraft, such as the proposed StarChip, and it would enable virtually unlimited interstellar travel distances.[13]
Relevant technologies and techniques
The focus of mind uploading, in the case of copy-and-transfer, is on data acquisition rather than on data maintenance of the brain. A set of approaches known as loosely coupled off-loading (LCOL) may be used in the attempt to characterize and copy the mental contents of a brain.[14] The LCOL approach may take advantage of self-reports, life-logs and video recordings that can be analyzed by artificial intelligence. A bottom-up approach may instead focus on the specific resolution and morphology of neurons and on their spike times, the times at which neurons produce action-potential responses.
Computational complexity
Estimates of how much processing power is needed to emulate a human brain at various levels (from Ray Kurzweil), plotted alongside the fastest supercomputer from TOP500 by year; the chart uses a logarithmic scale and an exponential trendline, which assumes computational capacity doubles every 1.1 years.[15] Kurzweil believes that mind uploading will be possible at the level of neural simulation, while the Sandberg and Bostrom report is less certain about where consciousness arises.
Advocates of mind uploading point to Moore's law to support the notion that the necessary computing power is expected to become available within a few decades. However, the actual computational requirements for running an uploaded human mind are very difficult to quantify, potentially rendering such an argument specious.
Regardless of the techniques used to capture or recreate the function of a human mind, the processing demands are likely to be immense, due to the large number of neurons in the human brain along with the considerable complexity of each neuron.
In 2004, Henry Markram, lead researcher of the "Blue Brain Project", stated that "it is not [their] goal to build an intelligent neural network", based solely on the computational demands such a project would have.[16]
It will be very difficult because, in the brain, every molecule is a powerful computer and we would need to simulate the structure and function of trillions upon trillions of molecules as well as all the rules that govern how they interact. You would literally need computers that are trillions of times bigger and faster than anything existing today.[17]
Five years later, after successful simulation of part of a rat brain, Markram was much bolder and more optimistic. In 2009, as director of the Blue Brain Project, he claimed that “A detailed, functional artificial human brain can be built within the next 10 years.”[18]
Required computational capacity depends strongly on the chosen level of simulation model scale:[4]
Level | CPU demand (FLOPS) | Memory demand (TB) | $1 million supercomputer (earliest year)
Analog network population model | 10^15 | 10^2 | 2008
Spiking neural network | 10^18 | 10^4 | 2019
Electrophysiology | 10^22 | 10^4 | 2033
Metabolome | 10^25 | 10^6 | 2044
Proteome | 10^26 | 10^7 | 2048
States of protein complexes | 10^27 | 10^8 | 2052
Distribution of complexes | 10^30 | 10^9 | 2063
Stochastic behavior of single molecules | 10^43 | 10^14 | 2111
Estimates from Sandberg and Bostrom, 2008.
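As a rough cross-check of the "earliest year" column, the sketch below extrapolates from the trendline described in the caption above. It assumes the quoted ~1.1-year doubling time and anchors on the table's first row (10^15 FLOPS around 2008); both anchor values come from this article, and the extrapolation is illustrative rather than a prediction.

```python
import math

# Extrapolate when a $1 million machine might reach each level's FLOPS
# demand, assuming capacity doubles every 1.1 years (the trendline quoted
# above) starting from 10^15 FLOPS in 2008 (the table's first row).
DOUBLING_YEARS = 1.1
BASE_YEAR, BASE_FLOPS = 2008, 1e15

def year_reached(flops_demand):
    doublings = math.log2(flops_demand / BASE_FLOPS)
    return BASE_YEAR + doublings * DOUBLING_YEARS

for level, demand in [("Spiking neural network", 1e18),
                      ("Electrophysiology", 1e22),
                      ("Metabolome", 1e25)]:
    print(f"{level}: ~{year_reached(demand):.0f}")
# Prints ~2019, ~2034, ~2045 -- close to the table's 2019, 2033, 2044.
```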
Simulation model scale
A high-level cognitive AI model of the brain architecture is not required for brain emulation
Simple neuron model: Black-box dynamic non-linear signal processing system
Metabolism model: The movement of positively charged ions through the ion channels controls the membrane electrical action potential in an axon.
Since the function of the human mind, and how it might arise from the workings of the brain's neural network, are poorly understood, mind uploading relies on the idea of neural network emulation. Rather than having to understand the high-level psychological processes and large-scale structures of the brain and model them using classical artificial intelligence methods and cognitive psychology models, the low-level structure of the underlying neural network is captured, mapped, and emulated with a computer system. In computer science terms, rather than analyzing and reverse-engineering the behavior of the algorithms and data structures that reside in the brain, a blueprint of its source code is translated to another programming language. The human mind and personal identity would then, theoretically, be generated by the emulated neural network in the same fashion as by the biological one.
On the other hand, a molecule-scale simulation of the brain is not expected to be required, provided that the functioning of the neurons is not affected by quantum mechanical processes. The neural network emulation approach only requires that the functioning and interaction of neurons and synapses are understood. A black-box signal-processing model of how neurons respond to nerve impulses (electrical as well as chemical synaptic transmission) is expected to suffice.
A sufficiently complex and accurate model of the neurons is required. A traditional artificial neural network model, such as a multi-layer perceptron, is not considered sufficient. A dynamic spiking neural network model is required, reflecting the fact that a neuron fires only when its membrane potential reaches a certain level. The model will likely need to include delays, non-linear functions, and differential equations describing the relations between electrophysiological parameters such as electrical currents, voltages, and membrane states (ion channel states), as well as neuromodulators; a minimal sketch of a spiking model follows.
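To make the "dynamic spiking" requirement concrete, here is a minimal leaky integrate-and-fire neuron, the simplest member of that model family. This is a sketch only: the parameter values are illustrative rather than biologically fitted, and an emulation-grade model would add the delays, ion-channel states, and neuromodulator effects discussed above.

```python
import numpy as np

def simulate_lif(input_current, dt=1e-4, tau=0.02, v_rest=-0.065,
                 v_thresh=-0.050, v_reset=-0.065, resistance=1e7):
    """Leaky integrate-and-fire: integrate dV/dt = (-(V - v_rest) + R*I)/tau
    and emit a spike (then reset) whenever V crosses the threshold."""
    v = v_rest
    spike_times = []
    for step, current in enumerate(input_current):
        v += (-(v - v_rest) + resistance * current) * (dt / tau)
        if v >= v_thresh:              # threshold crossing: action potential
            spike_times.append(step * dt)
            v = v_reset                # membrane resets after the spike
    return spike_times

# A constant 2 nA drive for 100 ms yields a regular spike train.
times = simulate_lif(np.full(1000, 2e-9))
print(f"{len(times)} spikes, first at {times[0]:.4f} s")
```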
Since learning and long-term memory are believed to result from strengthening or weakening the synapses via a mechanism known as synaptic plasticity or synaptic adaptation, the model should include this mechanism. The response of sensory receptors to various stimuli must also be modelled.
Furthermore, the model may have to include metabolism, i.e. how the neurons are affected by hormones and other chemical substances that may cross the blood–brain barrier. It is considered likely that the model must include currently unknown neuromodulators, neurotransmitters and ion channels. It is considered unlikely that the simulation model has to include protein interaction, which would make it computationally complex.[4]
A digital computer simulation model of an analog system such as the brain is an approximation that introduces random quantization errors and distortion. However, the biological neurons also suffer from randomness and limited precision, for example due to background noise. The errors of the discrete model can be made smaller than the randomness of the biological brain by choosing a sufficiently high variable resolution and sample rate, and sufficiently accurate models of non-linearities. The computational power and computer memory must however be sufficient to run such large simulations, preferably in real time.
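A toy numerical check of that claim follows. The ~0.5 mV background-noise figure is an assumption chosen for illustration; the point is only that a rounding step can be picked whose error (roughly step/√12 for uniform quantization) sits well below the intrinsic noise.

```python
import numpy as np

rng = np.random.default_rng(0)
membrane_noise_std = 0.5e-3     # volts; assumed biological background noise
quant_step = 1e-5               # 10 uV grid for stored membrane voltages

signal = rng.normal(0.0, membrane_noise_std, 100_000)
quantized = np.round(signal / quant_step) * quant_step

print(f"noise std: {membrane_noise_std:.1e} V, "
      f"quantization error std: {(quantized - signal).std():.1e} V")
# Quantization error ~ quant_step / sqrt(12) ~ 2.9e-6 V, far below the noise.
```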
Scanning and mapping scale of an individual
When modelling and simulating the brain of a specific individual, a brain map or connectivity database showing the connections between the neurons must be extracted from an anatomic model of the brain. For whole brain simulation, this network map should show the connectivity of the whole nervous system, including the spinal cord, sensory receptors, and muscle cells. Destructive scanning of a small sample of tissue from a mouse brain including synaptic details is possible as of 2010.[19]
However, if short-term memory and working memory include prolonged or repeated firing of neurons, as well as intra-neural dynamic processes, the electrical and chemical signal state of the synapses and neurons may be hard to extract. The uploaded mind may then perceive a memory loss of the events and mental processes immediately before the time of brain scanning.[4]
A full brain map has been estimated to occupy less than 2 × 10^16 bytes (20,000 TB) and would store the addresses of the connected neurons, the synapse type and the synapse "weight" for each of the brain's 10^15 synapses.[4] However, the biological complexities of true brain function (e.g. the epigenetic states of neurons, protein components with multiple functional states, etc.) may preclude an accurate prediction of the volume of binary data required to faithfully represent a functioning human mind.
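The arithmetic behind the 2 × 10^16-byte estimate is easy to reproduce. The 20-byte per-synapse record below is an assumed layout for illustration, not the encoding used in the cited report:

```python
# One record per synapse: target-neuron address, synapse type, and weight.
SYNAPSES = 1e15        # the article's estimate of synapses in a human brain
ADDRESS_BYTES = 8      # enough to index ~10^11 neurons, with headroom
TYPE_BYTES = 4
WEIGHT_BYTES = 8

record_bytes = ADDRESS_BYTES + TYPE_BYTES + WEIGHT_BYTES
total_bytes = SYNAPSES * record_bytes
print(f"{total_bytes:.1e} bytes = {total_bytes / 1e12:,.0f} TB")
# 2.0e+16 bytes = 20,000 TB, matching the figure quoted above.
```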
Serial sectioning
Serial sectioning of a brain
A possible method for mind uploading is serial sectioning, in which the brain tissue and perhaps other parts of the nervous system are frozen and then scanned and analyzed layer by layer, which for frozen samples at nano-scale requires a cryo-ultramicrotome, thus capturing the structure of the neurons and their interconnections.[20] The exposed surface of frozen nerve tissue would be scanned and recorded, and then the surface layer of tissue removed. While this would be a very slow and labor-intensive process, research is currently underway to automate the collection and microscopy of serial sections.[21] The scans would then be analyzed, and a model of the neural net recreated in the system that the mind was being uploaded into.
There are uncertainties with this approach using current microscopy techniques. If it is possible to replicate neuron function from its visible structure alone, then the resolution afforded by a scanning electron microscope would suffice for such a technique.[21] However, as the function of brain tissue is partially determined by molecular events (particularly at synapses, but also at other places on the neuron's cell membrane), this may not suffice for capturing and simulating neuron functions. It may be possible to extend the techniques of serial sectioning and to capture the internal molecular makeup of neurons, through the use of sophisticated immunohistochemistry staining methods that could then be read via confocal laser scanning microscopy. However, as the physiological genesis of 'mind' is not currently known, this method may not be able to access all of the necessary biochemical information to recreate a human brain with sufficient fidelity.
Brain imaging
Process from MRI acquisition to whole-brain structural network[22]
It may be possible to create functional 3D maps of the brain activity, using advanced neuroimaging technology, such as functional MRI (fMRI, for mapping change in blood flow), magnetoencephalography (MEG, for mapping of electrical currents), or combinations of multiple methods, to build a detailed three-dimensional model of the brain using non-invasive and non-destructive methods. Today, fMRI is often combined with MEG for creating functional maps of human cortex during more complex cognitive tasks, as the methods complement each other. Even though current imaging technology lacks the spatial resolution needed to gather the information needed for such a scan, important recent and future developments are predicted to substantially improve both spatial and temporal resolutions of existing technologies.[23]
Brain simulation
There is ongoing work in the field of brain simulation, including partial and whole simulations of some animals. For example, the C. elegans roundworm, Drosophila fruit fly, and mouse have all been simulated to various degrees.
The Blue Brain Project by the Brain and Mind Institute of the École Polytechnique Fédérale de Lausanne, Switzerland is an attempt to create a synthetic brain by reverse-engineering mammalian brain circuitry.
Issues
Philosophical issues
Underlying the concept of "mind uploading" (more accurately "mind transferring") is the broad philosophy that consciousness lies within the brain's information processing and is in essence an emergent feature that arises from high-level patterns of organization in large neural networks, and that the same patterns of organization can be realized in other processing devices. Mind uploading also relies on the idea that the human mind (the "self" and the long-term memory), just like non-human minds, is represented by the current neural network paths and the weights of the brain synapses rather than by a dualistic and mystic soul and spirit. The mind or "soul" can be defined as the information state of the brain, and is immaterial only in the same sense as the information content of a data file or the state of software currently resident in a computer's working memory. Data specifying the information state of the neural network can be captured and copied as a "computer file" from the brain and re-implemented into a different physical form.[24] This is not to deny that minds are richly adapted to their substrates.[25] An analogy to the idea of mind uploading is to copy the temporary information state (the variable values) of a computer program from the computer memory to another computer and continue its execution. The other computer may perhaps have a different hardware architecture but emulate the hardware of the first computer.
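The program-state analogy in the paragraph above can be made literal in a few lines. This is purely an illustration of the "capture, copy, resume" idea applied to ordinary software state, not a claim about brains:

```python
import pickle

# "Scan" a running computation's information state into bytes...
state = {"counter": 41, "history": ["boot", "tick"]}
snapshot = pickle.dumps(state)

# ...then "re-implement" it elsewhere (possibly on different hardware)
# and continue execution from where it left off.
resumed = pickle.loads(snapshot)
resumed["counter"] += 1
resumed["history"].append("resumed")
print(resumed)   # {'counter': 42, 'history': ['boot', 'tick', 'resumed']}
```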
These issues have a long history. In 1775 Thomas Reid wrote:[26] “I would be glad to know... whether when my brain has lost its original structure, and when some hundred years after the same materials are fabricated so curiously as to become an intelligent being, whether, I say that being will be me; or, if, two or three such beings should be formed out of my brain; whether they will all be me, and consequently one and the same intelligent being.”
A considerable portion of transhumanists and singularitarians place great hope in the belief that they may become immortal by creating one or many non-biological functional copies of their brains, thereby leaving their "biological shell". However, the philosopher and transhumanist Susan Schneider claims that, at best, uploading would create a copy of the original person's mind.[27] Schneider agrees that consciousness has a computational basis, but argues that this does not mean we can upload and survive. On her view, "uploading" would probably result in the death of the original person's brain, while only outside observers can maintain the illusion of the original person still being alive. It is implausible, she argues, to think that one's consciousness would leave one's brain and travel to a remote location; ordinary physical objects do not behave this way. Ordinary objects (rocks, tables, and so on) are not simultaneously here and elsewhere. At best, a copy of the original mind is created.[27] Work on the neural correlates of consciousness, a sub-branch of neuroscience, suggests that consciousness may be thought of as a state-dependent property of some undefined complex, adaptive, and highly interconnected biological system.[28]
Others have argued against such conclusions. For example, Buddhist transhumanist James Hughes has pointed out that this consideration only goes so far: if one believes the self is an illusion, worries about survival are not reasons to avoid uploading,[29] and Keith Wiley has presented an argument wherein all resulting minds of an uploading procedure are granted equal primacy in their claim to the original identity, such that survival of the self is determined retroactively from a strictly subjective position.[30][31] Some have also asserted that consciousness is a part of an extra-biological system that is yet to be discovered and cannot be fully understood under the present constraints of neurobiology. Without the transference of consciousness, true mind-upload or perpetual immortality cannot be practically achieved.[32]
Another potential consequence of mind uploading is that the decision to "upload" may then create a mindless symbol manipulator instead of a conscious mind (see philosophical zombie).[33][34] Are we to assume that an upload is conscious if it displays behaviors that are highly indicative of consciousness? Are we to assume that an upload is conscious if it verbally insists that it is conscious?[35] Could there be an absolute upper limit in processing speed above which consciousness cannot be sustained? The mystery of consciousness precludes a definitive answer to this question.[36] Numerous scientists, including Kurzweil, strongly believe that determining whether a separate entity is conscious (with 100% confidence) is fundamentally unknowable, since consciousness is inherently subjective (see solipsism). Regardless, some scientists strongly believe consciousness is the consequence of computational processes which are substrate-neutral. On the contrary, numerous scientists believe consciousness may be the result of some form of quantum computation dependent on substrate (see quantum mind).[37][38][39]
In light of uncertainty on whether to regard uploads as conscious, Sandberg proposes a cautious approach:[40]
Principle of assuming the most (PAM): Assume that any emulated system could have the same mental properties as the original system and treat it correspondingly.
Verification issues
It is argued that if a computational copy of one's mind did exist, it would be impossible for one to verify this.[41] The argument for this stance is the following: for a computational mind to recognize an emulation of itself, it must be capable of deciding whether two Turing machines (namely, itself and the proposed emulation) are functionally equivalent. This task is uncomputable due to the undecidability of equivalence, thus there cannot exist a computational procedure in the mind that is capable of recognizing an emulation of itself.
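The standard reduction behind that undecidability claim can be sketched in a few lines. Here `equivalent` is the hypothetical equivalence oracle (which cannot exist), while `runs_for_at_most` stands for an ordinary step-bounded interpreter, which is computable but left as a stub; the whole fragment is schematic rather than working machinery:

```python
def runs_for_at_most(program, inp, steps):
    """Stub for a step-bounded interpreter: execute `program` on `inp`
    for at most `steps` steps and report whether it halted."""
    raise NotImplementedError

def halts(program, inp, equivalent):
    """If `equivalent(f, g)` decided functional equivalence, this would
    decide the halting problem -- a contradiction."""
    def zero(n):
        return 0

    def probe(n):
        # 1 iff `program` halts on `inp` within n interpreter steps.
        return 1 if runs_for_at_most(program, inp, n) else 0

    # zero and probe agree on every n exactly when `program` never halts
    # on `inp`, so the equivalence oracle would answer the halting question.
    return not equivalent(zero, probe)
```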
Ethical and legal implications
The process of developing emulation technology raises ethical issues related to animal welfare and artificial consciousness.[40] The neuroscience required to develop brain emulation would require animal experimentation, first on invertebrates and then on small mammals before moving on to humans. Sometimes the animals would just need to be euthanized in order to extract, slice, and scan their brains, but sometimes behavioral and in vivo measures would be required, which might cause pain to living animals.[40]
In addition, the resulting animal emulations themselves might suffer, depending on one's views about consciousness.[40] Bancroft argues for the plausibility of consciousness in brain simulations on the basis of the "fading qualia" thought experiment of David Chalmers. He then concludes:[42] “If, as I argue above, a sufficiently detailed computational simulation of the brain is potentially operationally equivalent to an organic brain, it follows that we must consider extending protections against suffering to simulations.”
It might help reduce emulation suffering to develop virtual equivalents of anaesthesia, as well as to omit processing related to pain and/or consciousness. However, some experiments might require a fully functioning and suffering animal emulation. Animals might also suffer by accident due to flaws and lack of insight into what parts of their brains are suffering.[40] Questions also arise regarding the moral status of partial brain emulations, as well as creating neuromorphic emulations that draw inspiration from biological brains but are built somewhat differently.[42]
Brain emulations could be erased by computer viruses or malware, without need to destroy the underlying hardware. This may make assassination easier than for physical humans. The attacker might take the computing power for its own use.[43]
Many questions arise regarding the legal personhood of emulations.[44] Would they be given the rights of biological humans? If a person makes an emulated copy of themselves and then dies, does the emulation inherit their property and official positions? Could the emulation ask to "pull the plug" when its biological version was terminally ill or in a coma? Would it help to treat emulations as adolescents for a few years so that the biological creator would maintain temporary control? Would criminal emulations receive the death penalty, or would they be given forced data modification as a form of "rehabilitation"? Could an upload have marriage and child-care rights?[44]
If simulated minds became a reality and were assigned rights of their own, it might be difficult to ensure the protection of "digital human rights". For example, social science researchers might be tempted to secretly expose simulated minds, or whole isolated societies of simulated minds, to controlled experiments in which many copies of the same minds are exposed (serially or simultaneously) to different test conditions.
Political and economic implications
Emulations could create a number of conditions that might increase risk of war, including inequality, changes of power dynamics, a possible technological arms race to build emulations first, first-strike advantages, strong loyalty and willingness to "die" among emulations, and triggers for racist, xenophobic, and religious prejudice.[43] If emulations run much faster than humans, there might not be enough time for human leaders to make wise decisions or negotiate. It is possible that humans would react violently against growing power of emulations, especially if they depress human wages. Emulations may not trust each other, and even well-intentioned defensive measures might be interpreted as offense.[43]
Emulation timelines and AI risk
There are very few feasible technologies that humans have refrained from developing. The neuroscience and computer-hardware technologies that may make brain emulation possible are widely desired for other reasons, and logically their development will continue into the future. Assuming that emulation technology will arrive, a question becomes whether we should accelerate or slow its advance.[43]
Arguments for speeding up brain-emulation research:
- If neuroscience is the bottleneck on brain emulation rather than computing power, emulation advances may be more erratic and unpredictable, based on when new scientific discoveries happen.[43][45][46]
- Limited computing power would mean the first emulations would run slower and so would be easier to adapt to, and there would be more time for the technology to transition through society.[46]
- Improvements in manufacturing, 3D printing, and nanotechnology may accelerate hardware production,[43] which could increase the "computing overhang"[47] from excess hardware relative to neuroscience.
- If one AI-development group had a lead in emulation technology, it would have more subjective time to win an arms race to build the first superhuman AI. Because it would be less rushed, it would have more freedom to consider AI risks.[48][49]
Arguments for slowing down brain-emulation research:
- Greater investment in brain emulation and associated cognitive science might enhance the ability of artificial intelligence (AI) researchers to create "neuromorphic" (brain-inspired) algorithms, such as neural networks, reinforcement learning, and hierarchical perception. This could accelerate risks from uncontrolled AI.[43][49]
- Participants at a 2011 AI workshop estimated an 85% probability that neuromorphic AI would arrive before brain emulation. This was based on the idea that brain emulation would require understanding some brain components, and that it would be easier to tinker with these than to reconstruct the entire brain in its original form. By a very narrow margin, the participants on balance leaned toward the view that accelerating brain emulation would increase expected AI risk.[48]
- Waiting might give society more time to think about the consequences of brain emulation and develop institutions to improve cooperation.[43][49]
- Emulation research would also speed up neuroscience as a whole, which might accelerate medical advances, cognitive enhancement, lie detectors, and capability for psychological manipulation.[49]
Emulations might be easier to control than de novo AI because:

- We understand human abilities, behavioral tendencies, and vulnerabilities better, so control measures might be more intuitive and easier to plan for.[48][49]
- Emulations could more easily inherit human motivations.[49]
- Emulations are harder to manipulate than de novo AI, because brains are messy and complicated; this could reduce the risk of a rapid takeoff.[43][49] Emulations may also be bulkier and require more hardware than AI, which would further slow a transition.[49]
- Unlike AI, an emulation wouldn't be able to rapidly expand beyond the size of a human brain.[49]
- Emulations running at digital speeds would have less of an intelligence differential vis-à-vis AI and so might more easily control AI.[49]
As counterpoint to these considerations, Bostrom notes some downsides:
- Even if we better understand human behavior, the evolution of emulation behavior under self-improvement might be much less predictable than the evolution of safe de novo AI under self-improvement.[49]
- Emulations may not inherit all human motivations. Perhaps they would inherit our darker motivations, or would behave abnormally in the unfamiliar environment of cyberspace.[49]
- Even if there's a slow takeoff toward emulations, there would still be a second transition to de novo AI later on. Two intelligence explosions may mean more total risk.[49]
Advocates
Ray Kurzweil, director of engineering at Google, has claimed that people will be able to "upload" their entire brains to computers and become "digitally immortal" by 2045. Kurzweil has made this claim for many years, for example during his 2013 speech at the Global Futures 2045 International Congress in New York, an event organized around a similar set of beliefs.[50] Mind uploading was also advocated by a number of researchers in neuroscience and artificial intelligence, such as the late Marvin Minsky. In 1993, Joe Strout created a small website called the Mind Uploading Home Page and began advocating the idea in cryonics circles and elsewhere on the net. That site has not been actively updated in recent years, but it has spawned other sites, including MindUploading.org, run by Randal A. Koene, who also moderates a mailing list on the topic. These advocates see mind uploading as a medical procedure that could eventually save countless lives.
Many transhumanists look forward to the development and deployment of mind uploading technology, with transhumanists such as Nick Bostrom predicting that it will become possible within the 21st century due to technological trends such as Moore's law.[4]
Michio Kaku, in collaboration with the Science Channel, hosted a documentary, Sci Fi Science: Physics of the Impossible, based on his book Physics of the Impossible. Episode four, titled "How to Teleport", mentions that mind uploading via techniques such as quantum entanglement and whole brain emulation using an advanced MRI machine may enable people to be transported across vast distances at near light-speed.
The book Beyond Humanity: CyberEvolution and Future Minds by Gregory S. Paul & Earl D. Cox, is about the eventual (and, to the authors, almost inevitable) evolution of computers into sentient beings, but also deals with human mind transfer. Richard Doyle's Wetwares: Experiments in PostVital Living deals extensively with uploading from the perspective of distributed embodiment, arguing for example that humans are currently part of the "artificial life phenotype". Doyle's vision reverses the polarity on uploading, with artificial life forms such as uploads actively seeking out biological embodiment as part of their reproductive strategy.
Skeptics
Kenneth D. Miller, a professor of neuroscience at Columbia and a co-director of the Center for Theoretical Neuroscience, has raised doubts about the practicality of mind uploading. His major argument is that reconstructing neurons and their connections is in itself a formidable task, but far from sufficient. Operation of the brain depends on the dynamics of electrical and biochemical signal exchange between neurons; therefore, capturing them in a single "frozen" state may prove insufficient. In addition, the nature of these signals may require modeling down to the molecular level and beyond. Therefore, while not rejecting the idea in principle, Miller believes that the complexity of "absolute" duplication of an individual mind will remain insurmountable for the next several hundred years.[51]
||||| by Rebecca Roache
Edit 26th March 2014: It’s been pointed out to me by various people that this blog post does not make adequately clear that I don’t advocate the punishment methods described here. For a clarification of my views on the subject, please go here. For a Q&A, see here.
Today, the mother and stepfather of Daniel Pelka each received a life sentence for his murder. Daniel was four when he died in March last year. In the last few months of his short life, he was beaten, starved, held under water until he lost consciousness so that his mother could enjoy some ‘quiet time’, denied medical treatment, locked in a tiny room containing only a mattress on which he was expected both to sleep and defecate, humiliated and denied affection, and subjected to grotesquely creative abuse such as being force-fed salt when he asked for a drink of water. His young sibling, who secretly tried to feed and comfort Daniel, was forced to witness much of this; and neighbours reported hearing Daniel’s screams at night.
Daniel’s mother, Magdalena Luczak, and stepfather, Mariusz Krezolek, will each serve a minimum of thirty years in prison. This is the most severe punishment available in the current UK legal system. Even so, in a case like this, it seems almost laughably inadequate. The conditions in which Luczak and Krezolek will spend the next thirty years must, by law, meet certain standards. They will, for example, be fed and watered, housed in clean cells, allowed access to a toilet and washing facilities, allowed out of their cells for exercise and recreation, allowed access to medical treatment, and allowed access to a complaints procedure through which they can seek justice if those responsible for their care treat them cruelly or sadistically or fail to meet the basic needs to which they are entitled. All of these things were denied to Daniel. Further, after thirty years—when Luczak is 57 and Krezolek 64—they will have their freedom returned to them. Compared to the brutality they inflicted on vulnerable and defenceless Daniel, this all seems like a walk in the park. What can be done about this? How can we ensure that those who commit crimes of this magnitude are sufficiently punished?
In cases like this, people sometimes express the opinion that the death penalty should be reintroduced—indeed, some have responded to Daniel’s case with this suggestion. I am not sympathetic to this idea, and I will not discuss it here; the arguments against it are well-rehearsed in many other places. Alternatively, some argue that retributive punishment (reactionary punishment, such as imprisonment) should be replaced where possible with a forward-looking approach such as restorative justice. I imagine, however, that even opponents of retributive justice would shrink from suggesting that Daniel’s mother and stepfather should escape unpunished. Therefore, I assume—in line with the mainstream view of punishment in the UK legal system and in every other culture I can think of—that retributive punishment is appropriate in this case.
We might turn to technology for ways to increase the severity of Luczak and Krezolek’s punishment without making drastic changes to the current UK legal system. Here are some possibilities.
Lifespan enhancement: Within the transhumanist movement, the belief that science will soon be able to halt the ageing process and enable humans to remain healthy indefinitely is widespread. Dr Aubrey de Grey, co-founder of the anti-ageing SENS Research Foundation, believes that the first person to live to 1,000 years has already been born. The benefits of such radical lifespan enhancement are obvious—but it could also be harnessed to increase the severity of punishments. In cases where a thirty-year life sentence is judged too lenient, convicted criminals could be sentenced to receive a life sentence in conjunction with lifespan enhancement. As a result, life imprisonment could mean several hundred years rather than a few decades. It would, of course, be more expensive for society to support such sentences. However, if lifespan enhancement were widely available, this cost could be offset by the increased contributions of a longer-lived workforce.
Mind uploading: As the technology required to scan and map human brain processes improves, some believe it will one day be possible to upload human minds on to computers. With sufficient computer power, it would be possible to speed up the rate at which an uploaded mind runs. Professor Nick Bostrom, head of Oxford’s Future of Humanity Institute, calls a vastly faster version of human-level intelligence ‘speed superintelligence’. He observes that a speed superintelligence operating at ten thousand times that of a biological brain ‘would be able to read a book in a few seconds and write a PhD thesis in an afternoon. If the speed‑up were instead a factor of a million, a millennium of thinking would be accomplished in eight and a half hours’.1 Similarly, uploading the mind of a convicted criminal and running it a million times faster than normal would enable the uploaded criminal to serve a 1,000 year sentence in eight-and-a-half hours. This would, obviously, be much cheaper for the taxpayer than extending criminals’ lifespans to enable them to serve 1,000 years in real time. Further, the eight-and-a-half hour 1,000-year sentence could be followed by a few hours (or, from the point of view of the criminal, several hundred years) of treatment and rehabilitation. Between sunrise and sunset, then, the vilest criminals could serve a millennium of hard labour and return fully rehabilitated either to the real world (if technology facilitates transferring them back to a biological substrate) or, perhaps, to exile in a computer simulated world. For this to be a realistic punishment option, however, some important issues in the philosophy of mind and personal identity would need to be addressed. We would need to be sure, for example, that scanning a person’s brain and simulating its functions on a computer would be equivalent to literally transferring that person from his or her body onto a computer—as opposed to it being equivalent to killing him or her (if destroying the brain is necessary for the scanning process), or just copying his or her brain activity. Personally, I have serious doubts that such theoretical issues are ever likely to be resolved to the extent where mind uploading could be practicable as a form of punishment.
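Bostrom's arithmetic, and the eight-and-a-half-hour figure quoted above, is easy to verify:

```python
# Wall-clock time for a 1,000-year subjective sentence at a millionfold
# speed-up, per the Bostrom quotation above.
HOURS_PER_YEAR = 24 * 365.25
speedup = 1_000_000
subjective_years = 1_000

wall_clock_hours = subjective_years * HOURS_PER_YEAR / speedup
print(f"{wall_clock_hours:.2f} hours")   # ~8.77 -- "eight and a half hours"
```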
Altering perception of duration: Various factors can cause people to perceive time as passing more slowly. Science can explain some of these factors, and we can expect understanding in this area to continue to progress. Our emotional state can influence our perception of how quickly time passes: one recent study revealed that time seems to pass more slowly when people are experiencing fear than when experiencing sadness or a neutral state.2 Another study demonstrated that our perception of other people’s emotions—read through their facial expressions—affects our experience of duration: time seems to pass more slowly when faced with a person expressing anger, fear, joy, or sadness.3 In addition, our experience of duration changes throughout life, with time seeming to pass more slowly for children than for adults. Exactly why is not fully understood, but some believe that it relates to attention and the way in which information is processed.4 Time also appears to pass more slowly for people taking psychoactive drugs,5 engaging in mindfulness meditation,6 and when the body temperature is lowered.7 This research on subjective experience of duration could inform the design and management of prisons, with the worst criminals being sent to special institutions designed to ensure their sentences pass as slowly and monotonously as possible.
Robot prison officers: The extent to which prison can be made unpleasant for prisoners is limited by considerations of the welfare of the prison staff who must deal with prisoners on a day-to-day basis. It is in the interests of these staff to keep prisoners relatively content to ensure that they can be managed safely and calmly. If human staff could one day be replaced by robots, this limiting factor would be removed. Robotics technology has already produced self-driving cars and various other impressive machines, which places robot prison officers within the bounds of possibility.
Technology, then, offers (or will one day offer) untapped possibilities to make punishment for the worst criminals more severe without resorting to inhumane methods or substantially overhauling the current UK legal system. What constitutes humane treatment in the context of the suggestions I have made is, of course, debatable. But I believe it is an issue worth debating.
References
1 Bostrom, N. 2010: ‘Intelligence explosion: groundwork for a strategic analysis’. Unpublished manuscript.
2 Droit-Volet, S., Fayolle, S.L. and Gil, S. 2011: ‘Emotion and time perception: effects of film-induced mood’, Frontiers in Integrative Neuroscience 5: 33.
3 Gil, S. and Droit-Volet, S. 2011: ‘How do emotional facial expressions influence our perception of time?’, in Masmoudi, S., Yan Dai, D. and Naceur, A. (eds.) Attention, Representation, and Human Performance: Integration of Cognition, Emotion and Motivation (London: Psychology Press, Taylor & Francis).
4 Gruber, R.P., Wagner, L.F. and Block, R.A. 2000: ‘Subjective time versus proper (clock) time’, in Saniga, M., Buccheri, R. and Di Gesù, V. (eds.) Studies on the Structure of Time: From Physics to Psycho(patho)logy (New York: Kluwer Academic/Plenum Publishers).
5 Wittmann, M., Carter, O., Hasler, F., Cahn, B.R., Grimberg, U., Spring, P., Hell, D., Flohr, H. and Vollenweider, F.X. 2007: ‘Effects of psilocybin on time perception and temporal control of behaviour in humans’, Journal of Psychopharmacology 21/1: 50–64.
6 Kramer, R.S., Weger, U.W. and Sharma, D. 2013: ‘The effect of mindfulness meditation on time perception’, Consciousness and Cognition 22/3: 846–52.
7 Wearden, J.H. and Penton-Voak, I.S. 1995: ‘Feeling the heat: body temperature and the rate of subjective time, revisited’, The Quarterly Journal of Experimental Psychology Section B: Comparative and Physiological Psychology 48/2: 129–41. | We could someday see prison sentences radically altered—in the prisoner's own mind, the Telegraph reports. An Oxford philosophers are considering how future technology could, for instance, make a jail sentence feel as though it lasted 1,000 years, they tell Aeon magazine. After all, there are already "a number of psychoactive drugs that distort people’s sense of time," and existing interrogation scenarios tinker with lighting to prevent subjects from knowing the time, says Rebecca Roache. In a blog post, Roache advances an even more mind-bending idea: Some point to a future in which brain scans allow us to upload the human brain onto a computer (there's even a Wikipedia page about it). If that happens, we could perhaps "speed up" a prisoner's mind. "Uploading the mind of a convicted criminal and running it a million times faster than normal would enable the uploaded criminal to serve a 1,000 year sentence in eight-and-a-half hours," Roache writes. That would, she notes, "be much cheaper for the taxpayer." But, she points out, "the goal isn’t simply to imagine a bunch of futuristic punishments—the goal is to look at today’s punishments through the lens of the future." |
“There has been unbelievable, unprecedented obstruction,” Mr. Reid said as he set in motion the steps for the vote on Thursday. “The Senate is a living thing, and to survive it must change as it has over the history of this great country. To the average American, adapting the rules to make the Senate work again is just common sense.”
Republicans accused Democrats of irreparably damaging the character of an institution that in many ways still operates as it did in the 19th century, and of disregarding the constitutional prerogative of the Senate as a body of “advice and consent” on presidential nominations.
“You think this is in the best interest of the United States Senate and the American people?” asked the Republican leader, Senator Mitch McConnell, sounding incredulous.
“I say to my friends on the other side of the aisle, you’ll regret this. And you may regret it a lot sooner than you think,” he added.
Mr. Obama applauded the Senate’s move. “Today’s pattern of obstruction, it just isn’t normal,” he told reporters at the White House. “It’s not what our founders envisioned. A deliberate and determined effort to obstruct everything, no matter what the merits, just to refight the results of an election is not normal, and for the sake of future generations we can’t let it become normal.”
Only three Democrats voted against the measure.
The changes will apply to all 1,183 executive branch nominations that require Senate confirmation — not just cabinet positions but hundreds of high- and midlevel federal agency jobs and government board seats.
This fight was a climax to the bitter debate between the parties over electoral mandates and the consequences of presidential elections. Republicans, through their frequent use of the various roadblocks that congressional procedure affords them, have routinely thwarted Democrats. Democrats, in turn, have accused Republicans of effectively trying to nullify the results of a presidential election they lost, whether by trying to dismantle Mr. Obama’s health care law or keep him from filling his cabinet.
Republicans saw their battle as fighting an overzealous president who, left to his own devices, would stack a powerful and underworked court with judges sympathetic to his vision of big-government liberalism, pushing its conservative tilt sharply left. The court is of immense political importance to both parties because it often decides questions involving White House and federal agency policy.
Republicans proposed eliminating three of its 11 full-time seats. When Democrats balked, the Republicans refused to confirm any more judges, saying they were exercising their constitutional check against the executive.
Senator Pat Roberts, Republican of Kansas, said Democrats had undercut the minority party’s rights forever. “We have weakened this body permanently, undermined it for the sake of an incompetent administration,” he said. “What a tragedy.”
With the filibuster rules now rewritten — the most significant change since the Senate lowered its threshold to break a filibuster from two-thirds of the body to three-fifths, or 60 votes, in 1975 — the Senate can proceed with approving a backlog of presidential nominations.
There are now 59 nominees to executive branch positions and 17 nominees to the federal judiciary awaiting confirmation votes. The Senate acted immediately on Thursday, voting 55 in favor of moving forward on the nomination of Patricia A. Millett, a Washington lawyer nominated to the federal appeals court in Washington. Two other nominees to that court, Cornelia T. L. Pillard and Robert L. Wilkins, are expected to be confirmed when the Senate returns from its Thanksgiving recess next month.
The filibuster or threats to use it have frustrated presidents and majority parties since the early days of the republic. Over the years, and almost always after the minority had made excessive use of it, the Senate has adjusted the rules. Until 1917, the year Woodrow Wilson derided his antiwar antagonists as “a little group of willful men” who had rendered the government helpless through blocking everything in front of it, there was no rule to end debate. From 1917 to 1975, the bar for cutting off debate was set at two-thirds of the Senate.
Some would go even further than Thursday’s action. Senator Jeff Merkley, Democrat of Oregon, said he would like the next fight over the filibuster to be about requiring senators to actually stand on the floor and talk if they wanted to stall legislation.
The gravity of the situation was reflected in an unusual scene on the Senate floor: Nearly all 100 senators were in their seats, rapt, as their two leaders debated.
As the two men went back and forth, Mr. McConnell appeared to realize there was no way to persuade Mr. Reid to change his mind. Many Democrats wore large grins, while Republicans looked dour as Mr. McConnell’s futile, last-ditch parliamentary attempt to overrule the majority vote failed.
When Mr. McConnell left the chamber, he said, “I think it’s a time to be sad about what’s been done to the United States Senate.” ||||| The Senate voted Thursday to change its rules to prevent the minority party from filibustering any nominations other than nods to the Supreme Court.
The change was approved after Senate Majority Leader Harry Reid (D-Nev.) triggered the “nuclear option,” which allows a change to Senate rules by majority vote.
The 52-48 vote dramatically changes the rules of the Senate and limits the minority party's ability to prevent confirmation of presidential nominees. Sens. Carl Levin (Mich.), Mark Pryor (Ark.) and Joe Manchin (W.Va.) were the only Democrats to vote against Reid's rules change.
It will allow all three of President Obama's nominees to the D.C. Circuit Court of Appeals to go forward, as well as his nomination of Rep. Mel Watt (D-N.C.) to lead a housing regulatory agency.
Obama praised the action.
“The gears of government have to work and the step that a majority of senators took today I think will help make those gears work just a little bit better,” he said in a statement from the White House briefing room.
Reid said the change was necessary to get the Senate working again.
“It’s time to change the Senate before this institution becomes obsolete,” Reid said on the Senate floor.
“The American people believe Congress is broken. The American people believe the Senate is broken. And I agree.”
The procedural motion is known as the nuclear option because critics warn it would obliterate bipartisan relations in the Senate. Senate Minority Leader Mitch McConnell (R-Ky.) ripped Reid for triggering it.
McConnell accused Democrats of picking a “fake fight over judges” to try and “distract the public” from the problems of ObamaCare.
“It only reinforces the narrative of a party willing to do or say just about anything to get its way,” said McConnell. “Once again, Democrats are threatening to break the rules of the Senate ... in order to change the rules of the Senate,” he said.
“And over what? Over a court that doesn’t have enough work to do.”
After the vote, McConnell declined to comment on the prospect of Republican retaliation.
“I don’t think this is the time to be talking about reprisals. I think it’s a time to be sad about what’s been done to the United States Senate,” he said.
The specific procedural vote to change the Senate's rules was to sustain the ruling of the chair that nominees need 60 votes to advance to final passage.
Democrats voted against sustaining the ruling of the chair and in favor of changing the Senate's rules. The final vote was 48-52.
In his floor comments, Reid said the filibuster had rendered the Senate’s basic duty of confirming presidential nominees “completely unworkable.”
“The need for change is so, so very obvious,” he said.
“These nominees deserve at least an up-or-down vote, but Republican filibusters deny them a fair vote, any vote, and deny the president his team.”
The two parties have effectively changed sides on the nuclear option since Democrats gained control of the upper chamber in the 2006 election.
Republicans accused Democrats of hypocrisy for embracing a controversial tactic they criticized in 2005, when Republicans threatened to go nuclear to move then-President George W. Bush’s stalled nominees.
“To change the rules in the Senate can't be done by a simple majority. It can only be done if there is extended debate by 67 votes,” Reid said in May of 2005.
“They are talking about doing something illegal. They are talking about breaking the rules to change the rules, and that is not appropriate. That is not fair, and it is not right,” he said in April of that year.
But Democrats countered that McConnell was ready to vote for it when then-Majority Leader Bill Frist (R-Tenn.) wanted to strip the minority of the power to filibuster eight years ago.
The dispute that triggered the rules change was over three of President Obama’s nominees to the D.C. Circuit Court of Appeals, the second-most-powerful court in the nation.
Republicans have blocked the nominations of Patricia Millett, an appellate litigator; Cornelia Pillard, a Georgetown Law School professor; and Robert Wilkins, a judge on the District Court for the District of Columbia.
Reid said Republicans floated a last-minute deal by offering to confirm one of the D.C. Circuit nominees to avoid the rules change.
Immediately after the rules change, the Senate voted 55 to 43 to end a filibuster of Millett.
Reid proposed scheduling a final vote on her nomination after the Thanksgiving recess if Republicans agreed to speed up consideration of the defense authorization bill.
Reid had come under growing pressure from his conference to use the nuclear option to fill the court's vacancies.
Sen. Jeff Merkley (D-Ore.), an outspoken proponent of rules reform, circulated a memo to the media Thursday morning defending the tactic.
He noted that then-Senate Majority Leader Robert Byrd (D-W.Va.) used it on March 5, 1980, when he eliminated filibusters on motions to proceed to nominations.
He argued the Senate has changed its procedures by a simple majority vote at least 18 times since 1977.
“The notion that changing Senate procedure with a simple majority vote is ‘changing the rules by breaking the rules’ is false,” he wrote.
|||||
The Democratic majority in the Senate on Thursday pushed through a major rules change, one that curbs the power of the Republican minority to block President Barack Obama's nominations for high-level judgeships and cabinet and agency officials. The move was certain to only deepen the partisan divide that has crippled passage of legislation.
Infuriated over repeated Republican blocks of Obama candidates for critical judgeships, Senate Majority Leader Harry Reid took the dramatic step, calling it "simple fairness" because the change would work in Republicans' favor whenever they regain the White House and a Senate majority.
Current Senate rules allowed any one member of the chamber, using a tactic called a filibuster, to block a president's nominations unless 60 of the 100 Senators vote to move forward with the nomination. The 60-vote threshold has proven difficult for Democrats to assemble given they hold only a 55-45 edge over the Republicans in a hyper-partisan political climate and stalemated Congress.
"The gears of government have to work," Obama said shortly after the Senate vote. In a brief White House appearance to congratulate his fellow Democrats, Obama complained that the old filibuster rule allowed opposition senators to avoid voting their conscience on legislation on which a yes vote could put them under attack from the far rightwing of the party.
Known as the "nuclear option," Reid said the rules change would help break partisan gridlock that has sent voter approval of Congress to record lows. He forced a vote on requiring only 51 votes to end a filibuster. The change would not end the 60-vote threshold for overcoming blocking action for Supreme Court nominees or legislation.
The change is the most far-reaching to filibuster rules since 1975, when a two-thirds requirement for cutting off filibusters against legislation and all nominations was eased to today's three-fifths, or 60-vote, level. It would make it harder for the opposition party to block presidential appointments.
The latest battle is over Obama's choices to fill three vacancies at the U.S. Court of Appeals for the District of Columbia Circuit. Since Oct. 31, Republican filibusters derailed the president's nominations of District Judge Robert L. Wilkins, law professor Cornelia Pillard and attorney Patricia Millett for those lifetime appointments. The D.C. Circuit Court is viewed as second only to the Supreme Court in power because it rules on disputes over White House and federal agency actions. The circuit's eight sitting judges are divided evenly between Democratic and Republican presidential appointees. Three seats are vacant.
"They have decided that their base demands a permanent campaign against the president and maximum use of every tool available," said Democrat Sen. Jeff Merkley, a leading advocate of revamping filibuster rules that have been used a record number of times during the Obama presidency. He said Republican tactics are "trumping the appropriate exercise of advice and consent."
Republicans said they are weary of repeated Democratic threats to rewrite the rules. They say Democrats similarly obstructed some of President George W. Bush's nominees and argue that the D.C. Circuit's caseload is too low, which Democrats reject.
"I suspect the reason they may be doing it is hoping Republicans overreact, and it's the only thing that they could think of that would change the conversation about Obamacare," said Republican Sen. Lamar Alexander, using the nickname for Obama's troubled health care law. "But we're not that dumb."
Reid's use of the nuclear option procedural move allowed him to change the filibuster rule with just 51 votes, meaning Democrats could push it through without Republican support. Senate rules are more commonly changed with 67 votes.
Nomination fights are not new in the Senate, but as the hostility has grown the two sides have been edging toward a collision for much of this year.
___
Associated Press writer Alan Fram contributed to this report. ||||| The Senate approved a historic rules change on Thursday by eliminating the use of the filibuster on all presidential nominees except those to the U.S. Supreme Court.
Invoking the long-threatened “nuclear option” means that most of President Barack Obama’s judicial and executive branch nominees no longer need to clear a 60-vote threshold to reach the Senate floor and get an up-or-down vote.
Speaking at the White House, Obama praised the Senate action, accusing Republicans of attempting to block his nominees based on politics alone, not on the merits of the nominee.
“This isn’t obstruction on substance, on qualifications. It’s just to gum up the works,” he said.
Senate Majority Leader Harry Reid (D-Nev.) used the nuclear option Thursday morning, meaning he called for a vote to change the Senate rules by a simple majority vote. It passed, 52 to 48. Three Democrats voted against changing the rules — Sen. Carl Levin of Michigan, Joe Manchin of West Virginia and Mark Pryor of Arkansas.
“It’s time to change the Senate before this institution becomes obsolete,” Reid said in a lengthy floor speech on Thursday morning.
A furious Senate Minority Leader Mitch McConnell (R-Ky.), who tried to recess the Senate for the day before the rules change could get a vote, said after the minority’s power was limited by Democrats: “I don’t think this is a time to be talking about reprisal. I think it’s a time to be sad about what has been done to the United States Senate.”
But McConnell quickly noted that Republicans could fix the problem in the upcoming midterm elections if they regain the majority: “The solution to this problem is an election. The solution to this problem is at the ballot box. We look forward to having a great election in 2014.”
The debate over the filibuster — and specifically its use on D.C. Circuit nominees — has been raging for nearly a decade, stretching back to when George W. Bush was president and Democrats were in the minority. But changing the Senate rules has always been avoided through a piecemeal deal, a gentleman’s agreement or a specific solution, not a historic change to the very fabric of the Senate.
But since Obama’s reelection, the “nuclear option” has reared its head three times in less than a year — each time getting closer to the edge. Many in the Senate privately expected that this go-round would be yet another example of saber rattling, but Reid said pressure was increasing within his own party to change the rules.
The blockade of three consecutive nominees to a powerful appellate court was too much for Democrats to handle — and Reid felt compelled to pull the trigger, explaining that “this is the way it has to be.”
It didn’t take long for Republicans to begin circulating both Reid’s and Obama’s past statements opposing a rules change. But the majority leader said that things escalated to a level that even he had not thought possible in 2005, when a “Gang of 14” banded together to stop a rules change.
“They have done everything they can to deny the fact that Obama has been elected and then reelected,” he said. “I have a right to change how I feel about things.”
Senate Democrats were quick to use their newfound powers, voting in the early afternoon to end the filibuster on Patricia Millett’s nomination to the D.C. Circuit Court of Appeals. The vote was 55-43, with two senators voting present. Before the change earlier Thursday, Millett would have needed 60 votes to clear the procedural hurdle and move on to a confirmation vote. But now, she needed just 51 to advance.
In his speech, Obama noted that in the few decades before he took office, about 20 nominees were filibustered. Since he took office, close to 30 judicial and political nominees have had their nominations blocked.
“It’s no secret that the American people have probably never been more frustrated with Washington, and one of the reasons why that is, is that over the past five years, we’ve seen an unprecedented pattern of obstruction in Congress that’s prevented too much of the American people’s business from getting done,” Obama said. “Today’s pattern of obstruction just isn’t normal. We can’t allow it to become normal.”
Obama also cited the filibuster of a gun control bill earlier this year, although Thursday’s rule change would preserve the filibuster for Supreme Court picks and legislation. | The so-called "nuclear option" just happened: The Senate voted to weaken filibusters and make it all but impossible for Republicans to block confirmation of the president's nominees for judges and other top posts, reports the AP. While the filibuster can't be used on those nominations, it's still fair game on legislation and for Supreme Court nominees. The mostly party-line vote on the historic move was 52-48, and this line from the New York Times fits the general tone of the coverage: "The change is the most fundamental shift in the way the Senate functions in more than a generation." Party leaders traded shots like these before the vote: “It’s time to change the Senate before this institution becomes obsolete," said Harry Reid, as per Politico. Minority Leader Mitch McConnell countered, "You’ll regret this, and you might regret it even sooner than you might think." For the record, the "nuclear option" refers to Reid changing the Senate rules via a majority vote, explains the Hill. In the short term, it means that three of President Obama's stalled appointments to the DC Circuit Court of Appeals will be able to move forward. |
Hillary Clinton Holds 'Tough, Candid' Meeting With Black Lives Matter Activists
NPR's Kelly McEvers speaks with DeRay McKesson of the group, "We The Protesters," about the meeting with presidential candidate Hillary Clinton in Washington, D.C., Friday.
KELLY MCEVERS, HOST:
Today, activists with the Black Lives Matter movement met privately with presidential candidate Hillary Clinton. The activists have been calling attention to issues of racism and police brutality and marching in the streets of places like Ferguson, New York and Baltimore. DeRay McKesson is one of the best-known names of this movement, and after he met with Clinton, he streamed this video on Twitter.
(SOUNDBITE OF ARCHIVED RECORDING)
DERAY MCKESSON: Hey. Thank you so much for the meeting.
HILLARY CLINTON: Thank you.
MCKESSON: You're on Periscope, so can you just say hi to the Internet.
CLINTON: Hi. Hi, everybody.
(LAUGHTER)
CLINTON: You are the social media emperor.
MCEVERS: DeRay McKesson joins me now from Washington. Thanks for being with us.
MCKESSON: Thank you, Kelly. I'm humbled to be here.
MCEVERS: Well, first off, can you just tell us, how did this meeting with Hillary Clinton come about?
MCKESSON: You know, I tweeted to her and said, I would love to find time to talk about a set of issues before you release the platform. And - in her campaign - she responded on Twitter, and we subsequently worked to schedule the meeting. So we met for about 90 minutes - about 10 or 11 protesters from around the country.
MCEVERS: And what did you talk about?
MCKESSON: We talked about a range of issues spanning from private prisons to mental health services for young kids to the role of the federal government ensuring equity in local and state governments in a community. So it was a tough conversation, candid conversation. I'm hopeful that it will lead to an informed platform that she eventually rolls out. You know, we didn't agree about everything, namely the role of the police in communities, you know? We had a lot of conversation about that.
MCEVERS: You said that the - some of the interactions with Clinton were tough - tough how?
MCKESSON: Yeah, so we just didn't agree, right? So there were pushes from protestors that are saying people don't believe that the police are always these beacons of safety in communities. And she, you know, at the beginning, felt strongly that police presence was necessary. She listened and heard people sort of talk about how safety is more expensive than police. And we worked through that, but it was a tough exchange.
And I think around some other issues - around the private prisons - right? - it was like, you know, will you end private prisons? And she was adamant about ending private prisons. There was a question about, will she stop taking money from lobbyists who lobby for private prisons? And it was unclear where she landed, but that exchange was - we had, like, tough conversation around it.
MCEVERS: When you talk about alternatives to police, what are you talking about?
MCKESSON: So, sort of highlighting this question, what does it mean that we have a police-first response to everything? With kids in schools - right? - like, do we need the police to be the people that help schools be safe places? So just trying to push on that, you know, and thinking about, like - there are some models around the country where we've seen that when you employ people, that crime decreases in some places. And that's, like, an alternative to this idea that police - that we can arrest our way out of the issue of crime.
MCEVERS: I mean, this activist movement got its start in the streets. Just over a year ago, you were protesting in the streets of Ferguson after the killing of Michael Brown. And now you're in Washington having this private meeting with a presidential candidate. I mean, what do you make of this?
MCKESSON: Yeah. So the protest highlighted this need to focus on black America in a way that structures had not before. Hillary knows, just as Bernie and O'Malley, that they cannot win without the black vote. And what we're seeing is, like, a new generation get mobilized around their own understanding of power and our understanding of, like, what the systemic response should be.
MCEVERS: And that's candidate Martin O'Malley. I mean, do you have any plans to meet with Republican candidates?
MCKESSON: Yes. I formally requested a meeting with Marco Rubio. And they replied saying that someone else would reach out to me, and I've not heard a reply yet. And I will likely reach out to Ben Carson's team, and I'm also trying to get a meeting on the books with RNC.
MCEVERS: That's DeRay McKesson. He's with the group We The Protesters. Earlier today, he and other activists met privately with Hillary Clinton in Washington. DeRay, thank you so much.
MCKESSON: Thank you.
MCEVERS: And one more note here. After today's meeting, Hillary Clinton tweeted, quote, "racism is America's original sin. To those I met with today, thank you for sharing your ideas."
||||| In a surprise move, civil rights activist DeRay Mckesson jumped into the already crowded contest for Baltimore mayor Wednesday night, shaking up the Democratic field minutes before the deadline to file.
"Baltimore is a city of promise and possibility," the Black Lives Matter member told The Baltimore Sun. "We can't rely on traditional pathways to politics and the traditional politicians who walk those paths if we want transformational change."
He said he planned to release a platform within a week. He said it would include a call for internal school system audits to be made public.
Mckesson was the 13th and final candidate to jump into the primary race. In deep-blue Baltimore, the Democratic primary has long determined the winner of the general election.
Mayor Stephanie Rawlings-Blake has declined to run for re-election. Leading candidates include former Mayor Sheila Dixon, state Sen. Catherine E. Pugh, City Councilmen Carl Stokes and Nick J. Mosby, lawyer Elizabeth Embry and businessman David L. Warnock.
Mckesson, 30, a Baltimore native and former public school administrator here and in Minnesota, is part of a team called Campaign Zero, which seeks to end police killings in America. The group wants to end "broken windows" policing, increase community oversight of police and limit use of force, among other goals.
Mckesson has gained widespread attention in the protest movement that began in Ferguson, Mo., and came to Baltimore last year to demonstrate against police brutality after the death of Freddie Gray. He has nearly 300,000 followers on Twitter.
He has met with top White House officials and presidential candidates in recent months to discuss civil rights. Former Secretary of State Hillary Clinton has called him a "social media emperor."
Several candidates have filed to run for mayor in 2016 or have announced their intentions to run, including politicians, a bar owner and a crime victim. Others have said they are considering entering the race.
In recent months, Mckesson has been living in North Baltimore.
Dixon, the front-runner in the Democratic primary, said Wednesday she had not heard of Mckesson. She noted there are less than three months to go until the election, and said she wouldn't be distracted.
"We all want the best for Baltimore," she said. "There are 84 days left. I'm staying focused."
Recent polls showed Dixon leading the Democratic primary, followed by Pugh and Stokes.
Mosby, who has been doing well among younger voters, welcomed Mckesson to the race. "I welcome anyone to the race and look forward to the discussion about building a better Baltimore," he said. "I have seen the best and the worst of Baltimore and so far I am the only candidate for mayor to offer a comprehensive plan to tackle Baltimore's toughest challenges."
The crowded Democratic field means a candidate could win the April 26 primary with a small fraction of the vote.
Sean Yoes, the host of the "First Edition" radio show at Morgan State University's WEAA station, said Mckesson's candidacy would "represent a departure from business as usual."
But he added that Mckesson is likely not well-known among the older African-American women who have long decided Baltimore's elections.
"If the electorate consisted of celebrities who were politically conscious, then maybe he would have a chance," Yoes said. "I suspect the vast majority of the most prolific voting bloc in Baltimore City do not know who he is. That's going to be problematic for him."
Mckesson said he would have to catch up to candidates who have been running for months.
"We can build a Baltimore where more and more people want to live and work and where everyone can thrive," he said.
Later, on the blogging site Medium.com, he contrasted his background with those of the leading candidates. "It is true that I am a non-traditional candidate — I am not a former Mayor, City Councilman, state legislator, philanthropist or the son of a well-connected family. I am an activist, organizer, former teacher, and district administrator that intimately understands how interwoven our challenges and our solutions are," Mckesson wrote.
Johnetta Elzie, another well-known activist with the Black Lives Matter movement, said she was moving to Baltimore to work on Mckesson's campaign.
"I'm definitely going to be on the campaign," she said. "I'm excited."
Elzie, who met Mckesson during the Ferguson protests, said she believes in his "passion and the love he has for the city he's from.
"He talks about poverty and race and how those are intertwined," Elzie said. "DeRay's path is to be a truth-teller while improving the city he grew up in."
Wednesday was also the deadline to file for Congress. Candidates to succeed retiring Sen. Barbara A. Mikulski include Democratic Reps. Chris Van Hollen and Donna F. Edwards and Republicans Kathy Szeliga, Richard J. Douglas, Chrys Kefalas and Anthony Seda.
In the Baltimore mayor's race, other Democrats running include engineer Calvin Allen Young III, former bank operations manager Patrick Gutierrez, Baltimore police Sgt. Gersham Cupid, author Mack Clifton, former UPS manager Cindy Walsh and nurse Wilton Wilson.
Republican candidates are Armand F. Girard, a retired math teacher; Chancellor Torbit, the brother of a slain police officer; Brian Charles Vaeth, a former city firefighter; Alan Walden, a former WBAL radio anchor; and Larry O. Wardlow Jr., who filed late Wednesday.
The Green Party will hold a primary election between community activist Joshua Harris, Army veteran Emanuel McCray and U.S. Marine David Marriott.
Candidates have until Friday to withdraw from the ballot. Citizens have until Feb. 12 to challenge the residency of any of the candidates. The general election is Nov. 8.
There are 369,000 registered voters in Baltimore, including 288,000 Democrats, 47,000 unaffiliated voters and 30,000 Republicans. There are about 1,200 Libertarians and 1,100 Greens.
Statewide, there are about 3.8 million registered voters, including 2 million Democrats, 972,000 Republicans, and 694,000 unaffiliated voters.
||||| I Am Running for Mayor of Baltimore
I love Baltimore. This city has made me the man that I am.
Like an old friend, I’ve seen it at its best and its most challenged. From Ms. Rainey’s second grade class at Rosemont Elementary School to the mixes of K Swift & Miss Tony on 92Q, to the nights at Afram, Shake N Bake and the Inner Harbor, I was raised in the joy and charm of this city.
Like many others, I know this city’s pain. As the child of two now-recovered addicts, I have lived through the impact of addiction. I too have received the call letting me know that another life has fallen victim to the violence of our city. Like so many other residents, I have watched our city deal what seems like an endless series of challenges and setbacks.
Perhaps because I have seen both the impact of addiction and the power of recovery, I hold tight to the notion that our history is not our destiny. That we are, and always will be, more than our pain. What we choose to do today and tomorrow will shape our future and build our reality. It is why I believe so strongly that Baltimore is, and has always been, a city of promise and possibility.
I have come to realize that the traditional pathway to politics, and the traditional politicians who follow these well-worn paths, will not lead us to the transformational change our city needs. Many have accepted that our current political reality is fixed and irreversible — that we must resign ourselves to accept the way that City Hall functions, or the role of money and connections in dictating who runs and wins elections. They have bought into the notion that there is only one road that leads to serving as an elected leader.
It is easy to accept this, because those of us from Baltimore live and experience the failures of traditional politics and pathways to leadership. Too often, the elected individuals we put our public trust in disappoint us. We have lived through lofty promises and vague plans. We have come to expect little and accept less. When we rely on this traditional model of politics we are rewarded with consistent, disappointing results.
In order to achieve the promise of our city and become the Baltimore we know we can become, we must challenge the practices that have not and will not lead to transformation. We must demand more from our leaders and local government.
At its core, being the Mayor is about having a vision for the city that is both aspirational and grounded in reality. It is about demonstrating the ability to turn intentions into reality and maintaining the fortitude to see our ambitions met with strong implementation.
I am running to be the 50th Mayor of Baltimore in order to usher our city into an era where the government is accountable to its people and is aggressively innovative in how it identifies and solves its problems. We can build a Baltimore where more and more people want to live and work, and where everyone can thrive.
It is true that I am a non-traditional candidate — I am not a former Mayor, City Councilman, state legislator, philanthropist or the son of a well-connected family. I am an activist, organizer, former teacher, and district administrator that intimately understands how interwoven our challenges and our solutions are.
I am a son of Baltimore.
I understand that issues of safety are more expansive than policing, and that to make the city as safe as we want it to be, we will have to address issues related to job development, job access, grade-level reading, transportation, and college readiness, amongst others.
I also understand that transparency is a core pillar of government integrity. We deserve to know where our city services — from housing and sanitation, to schools and police — are doing well and falling short. To this end, we must invest in a broad range of systems and structures of accountability and transparency, including the release of the internal audits of the Baltimore City Public School System along with annual and timely audits of all city agencies.
I am not the silver bullet for the challenges of our city — no one individual is. But together, with the right ideas, the right passion, the right people, we can take this city in a new direction.
In the weeks ahead, I will release a policy platform and a plan for Baltimore that is rooted in these beliefs.
I look forward to earning your vote and working with you to change the trajectory of Baltimore.
Together, we will win. | A civil rights activist who has gained prominence in the Black Lives Matter movement is now throwing his hat in the ring for the mayorship of Baltimore. DeRay Mckesson, 30, the 13th and last candidate in the Democratic primary, filed right before the deadline in what the Baltimore Sun labels "a surprise move." The Baltimore native—whom Hillary Clinton has called a "social media emperor"—has been a public school administrator in the city, as well as in Minnesota, and has lobbied to end police killings and boost community policing. In a Medium post Wednesday night, Mckesson painted himself as "a non-traditional candidate," someone who's not a political insider or linked to a "well-connected family." "I am an activist, organizer, former teacher, and district administrator that intimately understands how interwoven our challenges and our solutions are," he wrote. "I am a son of Baltimore." The Democratic primary is slated for April 26, and the Sun points out the winner has historically taken the general election, too. Former mayor Sheila Dixon is currently the front-runner. |
The search for survivors continued into the night after a deadly gas explosion Wednesday killed six people and collapsed two buildings where residents had complained of smelling gas for weeks.
Dozens of people were injured and at least a dozen more are still missing as authorities searched the wreckage with cadaver dogs, sources said.
One of the deceased was identified as Griselde Camacho, 48, a public safety officer at Hunter College.
A second victim was later identified as 67-year-old dental assistant Carmen Tanco, who was an active church member and traveled to third world countries to provide dental services.
Sources said that a third victim was also identified as Rosaura Hernandez, 21.
The fourth person was found in the rubble at around 12:30 a.m. Thursday, FDNY sources confirmed.
Two more bodies were pulled from the rubble early Thursday morning, bringing the total to six – four women and two men.
“This is a tragedy of the worst kind,” Mayor Bill de Blasio said at a press conference Wednesday. “There was a major explosion that destroyed two buildings, the explosion was based on a gas leak.”
Con Edison was called at 9:13 a.m. about a possible gas leak at the building and dispatched a team to check it out.
Before the team could get there, the gas ignited a thunderous explosion that caused the building to collapse and sent massive amounts of debris tumbling into the street, burying cars and bystanders.
The explosion brought down two five-story buildings on Park Avenue near the corner of East 116th Street around 9:30 a.m., reducing them to rubble.
“It certainly has all the marks of such an incident [a gas line explosion]. We are waiting to inspect the lines to determine the cause,” said Alfonso Quiroz, a spokesman for Con Edison.
The mayor said there was no warning in advance of the blast and that several people are still unaccounted for.
“The only indication of danger came about 15 minutes earlier when a gas leak was reported to Con Edison,” de Blasio said. “There will be a search through the rubble of the building after the fire has been put out,” he added.
Con Edison is in the process of shutting off the gas mains leading to the building.
But despite the mayor’s claim about there being no prior warning, people who live nearby said the area around the destroyed buildings reeked of gas for years.
“It smelled like gas, it always smells of gas around here,” said Julie Mark, a local resident.
“There were people everywhere crying.”
Keema Thomas, 23, lives next to one of the collapsed buildings and said she too regularly smells gas.
“For the past two years we’ve been smelling gas in the building,” Thomas said. “And police from the 25th Precinct would come and evacuate it sometimes.”
“But then they’d let us back in 10, 15 minutes later,” she added. “And they’d say it’s fine. But we’ve definitely been smelling gas for a while now.”
The National Transportation Safety Board said it will conduct a lengthy investigation that will probe the call logs of Con Edison to find out when people started complaining about the smell of gas.
“We will be looking at all reports,” said Robert Sumwalt, an NTSB board member. “We will be looking at Con Ed’s call logs to see when the first calls started coming in. That will definitely be part of our investigation.”
Around 50 units and some 250 firefighters battled the flames and searched the wreckage for victims.
Hundreds of people who live in buildings surrounding the area where the collapse occurred were evacuated, and it was unclear when they would be allowed to return to their homes. All of the evacuated buildings were deemed structurally sound.
“It’s a shame that the cries of these people were ignored,” said Marsha Daniels, 61. “People turned their heads and it cost people their lives.”
Daniels said the gas problem should have been evident because people in the area complained about it regularly.
“Look how many people are out here that are affected by this. Something needs to be done,” she said. “This here is a terrible tragedy. It didn’t have to happen.”
Witnesses said the blast shook them to the core as they were going about their business Wednesday morning.
“I was on my way to the deli when I heard a big noise,” said Maria Velez, 55. “I thought it was the train above me, but then I looked and saw pieces of a building falling on top of cars.”
“People were inside the cars and chunks of building landed on them,” she added. “Then the whole building came down, 1, 2, 3, like that, and all I could see was black smoke all over the place.”
The collapsed buildings, 1644 and 1646 Park Ave., each had a storefront on the first floor and apartments above. One housed a Spanish Christian Church and the other had a piano shop called Absolute Piano.
Department of Buildings records show that 120 feet of new gas piping was to be installed at 1644 Park last June to supply a stove in a fifth-floor apartment.
Both buildings have received numerous violations and complaints over the years, according to the DOB’s website. In 2008, the agency issued a violation at 1646 Park because the building’s rear wall had several cracks which could have endangered the structure’s stability.
William Greaves, 24, was asleep in his apartment at 110th Street and Park after working a night shift as a freight operator when the blast woke him up.
“I heard a BOOM! It sounded like a freaking missile,” he said. “It was so loud, I thought the explosion happened in my building — and I’m six blocks away.”
Another witness, Carlos Perez, said he was just opening up his flower shop when the explosion rattled him.
“I heard a huge explosion and all the windows shattered. I thought a train had fallen off the track and onto the street — that’s how loud it was,” said Perez, 55.
“When I looked, I saw everybody was running and there was smoke everywhere,” he continued.
“Then the building collapsed. Cars that were parked and even cars that were driving were shattered. The building — there was nothing there. Where there was supposed to be a building, there was nothing.”
Dennis Osorio, 40, lives on 116th between Park and Lexington avenues and was on his computer when he heard the devastating explosion and ran outside.
“It seriously felt like a crane just toppled over outside my door,” he said. “I was really scared. I stuck my head out and my super told me a bomb went off.”
“Outside was chaos,” he continued. “There were pillars of smoke. It was so hard to see. There were tons of people and it looked like they were trying to dig people out from the rubble and dust. I saw a woman strapped to a board taken away in an ambulance. She was sobbing. Storefront windows were blown out. This is shocking. It’s shocking.”
In addition to the six who were killed, dozens of others were taken to area hospitals with varying degrees of injury.
Officials at Mount Sinai Hospital confirmed that they were treating at least 21 people, including two children, while Harlem Hospital confirmed treating at least 14 people, one of whom was a child with serious injuries.
Other patients were being treated at Saint Luke’s and Metropolitan hospitals, FDNY officials said.
Friends and relatives of possible victims were frantically searching for their missing loved ones at area hospitals, but with little success.
Montserrat Acevedo, 24, arrived at Harlem Hospital looking for her brother-in-law, who also lived in one of the buildings.
“He was in the building at the time,” Acevedo said, referring to Jordy Salas. “He’s not answering his phone.”
“So far we haven’t found him,” she continued. “It’s very stressful because we don’t know anything since 9 a.m.” ||||| Mr. Louire said he was told not to turn on a cellphone or anything else, and to leave the apartment and the building, at 1652 Park Avenue. He was in the lobby when he heard the explosion.
Elizabeth Matthews, a spokeswoman for the utility, confirmed that a customer at 1652 Park Avenue called to report a heavy gas odor at 9:13 a.m. Two minutes later, two Con Ed crews were dispatched, and they arrived just after the explosion.
The Fire Department said it received the first report of a fire at 9:31 a.m., and discovered on arriving two minutes later that the buildings had collapsed. There were a total of 15 apartments in the two buildings; one had a church on the ground floor, and the other had a piano store.
The buildings were five stories and about 55 feet tall, according to Buildings Department records.
The injured were taken to several area hospitals; most were treated and released. Officials said 13 people went to Harlem Hospital Center, including a 15-year-old boy in critical condition; 22 to Mount Sinai Hospital, including a woman in critical condition with head trauma; and 18 to Metropolitan Hospital Center, all with minor injuries.
City officials urged families trying to find loved ones to call 311. Many congregated at a center set up by the Red Cross at a nearby school, even as they made appeals on social media for information about the missing.
About 250 firefighters from 44 units responded to the explosion. Heavy equipment was used to clear destroyed vehicles outside the buildings as firefighters started the painstaking task of searching the rubble brick by brick. By late afternoon, they could be seen sifting through the wreckage, passing buckets of debris hand to hand to clear it from the site. As night fell, there were still hot spots in the debris that limited rescue efforts. ||||| Rescuers using spotlights and cadaver dogs searched for survivors into Thursday morning after a thunderous blast in East Harlem killed six people and leveled two apartment buildings.
At least four women and two men died in the Wednesday morning explosion and collapse of 1644 and 1646 Park Ave.
More than 50 people were injured during the uptown horror that erupted into flames, clouds of dust and smoke, and a desperate search among the rubble of the five-story buildings. Those who weren’t injured were traumatized nonetheless.
“We saw people flying out of the windows,” said Ashley Rivera, 21, as she held back tears. “Those are my neighbors.”
Hospital officials reported at least eight children were among the injured in the 9:31 a.m. gas explosion. Two of them — one each at Harlem Hospital and Mount Sinai Medical Center — were in critical condition, including 15-year-old Oscar Hernandez, who suffered burns in the explosion.
MySpace Wednesday's explosion at 1644 Park Ave. in Harlem claimed the life of Griselde Camacho, a City University of New York public safety officer.
Nine people who lived in apartments above the Spanish Christian Church are still listed as missing, authorities said.
“It felt like the world shook,” said witness Mustafa Shohataa, 27, who was standing near the buildings on Park Ave. at 116th St.
Authorities identified the women who died in the blast as Carmen Tanco, 67, a dental hygienist, Griselde Camacho, 44, a sergeant with Hunter College’s school safety patrol and Rosaura Hernandez-Barrios, 21. The name and age of the man, who was found early Thursday, were not released.
Camacho, who also was a former police officer in Puerto Rico, was home with her mother when their apartment above the church suddenly collapsed.
“She was one of the few security guards who’d say goodnight,” student Molly Ryan, 28, said of Camacho. “She was always ready with a smile and would say, ‘Get home safe.’ ”
Diana Cortez’s long day of searching for her cousin, Tanco, ended in sorrow when police confirmed she had died.
At one point, Cortez, 56, rushed to Harlem Hospital in hopes of finding Tanco.
“I’m just in total disbelief,” Cortez said Wednesday night, after learning of her cousin’s fate. “I just spoke to her on Saturday.”
Isabel Villaverde, 51, a friend of Tanco’s, described her as “a beautiful person.”
“I was in shock. No one believes this could happen to someone you love,” Villaverde said. “It’s unexpected and a terrible pain. But at least she’s in a better place now.”
Carmen Vargas-Rosa, 65, the administrator of the church, said she was still shaken by the explosion Wednesday night.
“I’ve been involved in the church for 60 years and never seen anything like this,” said Vargas-Rosa. “The only thing that’s keeping me standing right now is the Lord’s help.”
“We ask everyone for their prayers,” she said.
The buildings came down with such force that windows shattered a block away and the collapse registered on seismographs at Columbia University, which monitor earthquake activity in and around New York City.
Mayor de Blasio blamed a gas leak for the blast, but a source said the explosion was the result of a water main collapsing onto a gas line under the street.
Asked about that account, Con Edison spokesman Michael Clendenin said it wasn’t clear if the water main break came before or after the explosion. Officials did confirm there was a report of a gas leak 18 minutes before the blast.
The city’s gas mains run by Con Edison in Manhattan, the Bronx, and Queens are, on average, 53 years old, and 60% of those lines are composed of unprotected steel and cast iron, the most leak-prone materials, according to a study released earlier this week by the Center for an Urban Future, an infrastructure think tank.
The calamity churned up horrible memories for many New Yorkers of the Sept. 11 attacks. One scene seemed to be a sad replay from that terrible day — the sight of terrified people scouring city hospitals for missing loved ones.
“It’s very stressful because we don’t know anything,” said 24-year-old Montserrat Acevedo, who went to four hospitals in search of her brother-in-law, Jordy Salas. “We’re trying to reach him.”
Salas, 23, lived on the second floor of 1644 Park Ave. If he was at home at the time of the blast, he didn’t have a chance to escape, said de Blasio.
“There was no warning in advance,” de Blasio said. “It’s a tragedy of the worst kind because there was no indication.”
Flanked by Police Commissioner Bill Bratton and City Council Speaker Melissa Mark-Viverito, whose district office is less than a block from the blast, de Blasio said the search will continue until everybody is found. De Blasio also vowed to investigate reports from residents that they had been smelling gas for days and made repeated calls to 311.
Among other things, the city will check the work of a contractor who nine months ago installed a gas line from the basement to the fifth floor of one of the wrecked buildings. He was allowed to sign off on his own work under a common practice known as “self-certification,” records show.
City officials said a sinkhole has developed due to a water main break related to the explosion. But Carlos Carabajo, who lived at 1644 Park Ave. for 25 years, said that sinkhole has been there for three or four years.
“It was like the street was buckling,” said Carabajo, 54.
Witnesses told the Daily News they saw flames inside the Absolute Piano store at 1646 Park Ave. just before the explosion.
Fire Commissioner Salvatore Cassano said it will be a while before that can be confirmed.
Jessica Ortega watched firefighters roll in powerful lights as the search went into the night. Salas is her brother-in-law, too, and she feared the worst. “We are very worried,” she said. “We don’t know what happened to him.”
Tulio Gomez said his friend Andrea Pogapoulos also lived in one of the blown-up buildings.
“He’s not answering his phone,” Gomez said of Pogapoulos, 41. “We don’t know anything.”
Sarah Borrero, who lived at 1646 Park Ave., said everything she owned is now “rubble.”
“I remember this building that was standing up,” she said. “I can still see it in my mind. And now it’s not standing.” Borrero said she is just thankful her 16-year-old daughter, Kimberly, was at school when the building exploded.
The blast also damaged the Park 95 Deli three doors down, owned by Qusai Hezimeh.
“All the windows were blown out,” said Hezimeh, 32. “Everything was turned upside-down in the store.”
Hundreds of people who live near the site were evacuated, and it was not clear when they would be allowed back into their homes.
Most of the injured were taken to Harlem Hospital, St. Luke’s Hospital, Mount Sinai Medical Center and the Metropolitan Hospital.
Among the injured were two FBI agents who happened to be in the area, FBI spokesman Chris Sinos said. A source said they were driving by when chunks of the buildings fell on their car.
In Washington, Rep. Charles Rangel (D-Harlem) called the calamity “our community’s 9/11.”
“What 9/11 was to the world, this is to me,” he said. “It’s my congressional district.”
The White House said President Obama had been briefed on the situation.
The first sign that something was amiss came at 9:13 a.m., when a woman called Con Ed and reported a gas smell, officials said. Eighteen minutes later, the earth shook.
Justine Rodriguez was sleeping next door when the blast blew out her windows.
“I woke up and there was debris and a white cloud and chunks of the building falling around me,” she said. “I grabbed my cat and ran down the stairs.”
Aisha Watts, who also lives next door to the wrecked buildings, said she was in the bathroom when she heard the blast and suddenly there was sunshine spilling into the apartment. “We have no windows, no walls,” she said.
Rubble from the destroyed buildings sent up a cloud of dust that coated much of the surrounding neighborhood.
Metro-North service to and from Grand Central Terminal was suspended in both directions. Some of the tracks were later reopened and limited service resumed.
The MTA said the 4, 5, and 6 trains were slowed to a crawl due to concerns about “vibrations” from the collapse.
“They’re fine,” said Terri Ciaramello. “It’s the people who left pets behind that are the main concern right now.”
With Jennifer H. Cunningham, Michael J. Feeney, Thomas Tracy, Annie Karni, Denis Slattery, Joe Kemp, Rocco Parascandola, Edgar Sandoval
| Rep. Charlie Rangel called it "our community's 9/11": Dozens of police officers and firefighters spent the day digging through rubble and dousing flames after a gas leak blew up two buildings in Harlem today, the Daily News reports. The latest tally, per the New York Times: three dead, more than 50 wounded, and about 10 still missing. Many of the injured have minor wounds, while others had broken bones and one woman rescued from the rubble was in critical but stable condition with head injuries. "There was no warning in advance," said Mayor Bill de Blasio. "It’s a tragedy of the worst kind because there was no indication." He promised to investigate claims by residents that they had smelled gas for days and called 311 many times; one neighbor tells the New York Post that gas smells persisted for years. The city says it will look into a contractor's installation of a new gas line in one of the buildings nine months ago. What's more, one of the buildings had "several vertical cracks" deemed dangerous in 2008, according to city records, but there's no record of them ever being fixed.
Police investigating Saturday night's terror attack in London say they know the identity of the three attackers who killed seven people and injured 48.
The Met Police said their names would be released "as soon as operationally possible" as officers work to establish if they were part of a wider network.
PM Theresa May said the victims included a "number of nationalities" and called it "an attack on the free world".
It comes as police have searched more addresses in east London.
Police said a "number of people" had been detained following the raids, in Newham and Barking.
Eleven people are being held after police raids in Barking on Sunday. One of the properties is believed to be the home of one of the attackers.
The three attackers were shot dead by police after driving into pedestrians on London Bridge and stabbing people in Borough Market.
The Independent Police Complaints Commission said 46 shots were fired by eight police officers - three from the City of London force and five from the Met.
NHS England said 36 people remained in hospital, with 18 in a critical condition.
Bank card on body
Canadian national Chrissy Archibald, 30, was the first victim to be named.
Her family said she had worked in a homeless shelter until she moved to Europe to be with her fiancé.
The sister of 32-year-old James McMullan, from Hackney, east London, said he was also believed to be among those who died after his bank card was found on a body at the scene.
A French national was also killed in the attack, according to foreign minister Jean-Yves Le Drian.
Security sources in Dublin have said one of the attackers was carrying an identification card issued in the Republic of Ireland when he was shot dead, the Press Association has reported.
Meanwhile, Met Police Commissioner Cressida Dick said a "huge amount" of forensic material and evidence had been seized from the van - as well as from the police raids.
She told BBC Breakfast the investigation was moving very quickly and the priority now was to establish if anybody else was involved in the plot.
The so-called Islamic State group has claimed responsibility for the attack.
A message from Muslim faith leaders was read outside New Scotland Yard by Met Commander Mak Chishty urging the community to "root out the scourge of terrorism which hides amongst their own people and masquerades as Islam".
Two people have claimed they had warned the police about the behaviour of one of the attackers.
Speaking to the BBC's Asian Network, an unnamed man said one of the attackers had become more extreme over the past two years.
"We spoke about a particular attack that happened and, like most radicals, he had a justification for anything - everything and anything.
"And that day I realised that I need to contact the authorities," he said.
He said no action was taken.
"I did my bit... but the authorities didn't do their bit".
London Bridge rail and Tube stations both reopened early on Monday morning. The bridge and surrounding roads have also reopened.
At the scene
By BBC's Katie Wright
While the usual hordes of commuters stream across London Bridge, the large police presence and multitude of global TV crews show it's not a typical Monday morning.
James Hartley, an HR manager in the city, crosses London Bridge every morning and would "normally be marching across". Today, he took a moment to stop and reflect.
"I'm just taking in the city and thinking about what happened over the weekend. It feels really emotional."
The overriding message to Londoners has been to carry on as normal, but for some of those who work near where the attack happened, that wasn't possible logistically.
Abigail Barclay and her colleagues were gathered on the street near Borough Market unable to get into their office.
She said: "We're just working out our plan at the moment. There are lots of people out here trying to figure out what they're going to do."
Speaking on Sunday, Metropolitan Police Assistant Commissioner Mark Rowley said four police officers were among those injured, two of them seriously.
The Met said one officer received stitches to a head injury and another received injuries to his arm.
An off-duty officer, who was one of the first on the scene, remains in a serious condition.
Australian Prime Minister Malcolm Turnbull has confirmed four of the country's nationals had been caught up in the attack.
Two, Candice Hedge from Brisbane and Andrew Morrison from Darwin, are among the injured after both were stabbed.
Seven French nationals were injured in the attack, including four with serious injuries, while one French national is still missing.
Among other developments:
Prime Minister Theresa May chaired a meeting of the government's emergency committee Cobra
A vigil is being held at 18:00 BST at Potters Field Park near London Bridge to remember the victims
There will also be a minute's silence on Tuesday at 11:00 BST in memory of those who lost their lives and all those affected by the attacks
Barriers have been installed on Westminster, Lambeth and Waterloo bridges following the attack to stop vehicles from mounting the pavement
The Metropolitan Police has set up a casualty bureau on 0800 096 1233 and 020 7158 0197 for people concerned about friends or relatives
Police say the attack began at 21:58 BST on Saturday, when a white Renault van hired by one of the attackers drove onto London Bridge from the north side.
Eyewitnesses described it travelling at high speed, hitting pedestrians, before crashing close to the Barrowboy and Banker pub.
It is the third terror attack in the UK in three months, following the car and knife attack on Westminster Bridge in March, in which five people were killed, and the Manchester bombing less than two weeks ago, in which 22 people were killed.
A new normal?
Dominic Casciani, BBC home affairs correspondent
With three attacks in three months, acts of terror against soft targets are beginning to feel, to some people, like the new normal.
The brutal reality is that this kind of threat is absolutely typical of what jihadists sought to achieve in all their attacks across Europe.
Since 2013 security services in the UK have foiled 18 plots. A large proportion of those have involved suspects who set out to commit acts of violence similar to the attacks on Westminster Bridge and London Bridge.
Plans to use bombs, such as at Manchester Arena, are rarer because plotters need to have the technical skills for such an appalling attack - but attacking people with cars and knives is far easier and has long been encouraged by so-called Islamic State and other jihadists.
The aim of the three attackers last night is abundantly clear - not only did they want to kill, but they wanted to lose their own lives.
They would have known full well that attacking people in the street would draw armed police in their direction and the fake bomb belts they were wearing would, in their own warped minds, hasten their demise.
||||| Top anti-terrorism officer says Met is urgently investigating whether there was ‘assistance or support’ for attack that left seven dead and 21 in critical condition
Investigators are racing to find out how Britain’s counter-terrorism defences were breached for the third time in 10 weeks, as police mounted multiple raids and arrested a dozen people following the London Bridge attack.
Britain’s top anti-terrorism officer, Metropolitan police assistant commissioner Mark Rowley, said detectives were urgently investigating to discover if the three men who killed seven people and left 21 in a critical condition on Saturday night by running over pedestrians in a rented van and stabbing people in Borough Market were “assisted or supported”. The attack was stopped when eight armed officers killed them in a hail of about 50 bullets, fearing it was a “life or death situation”.
Armed police raided homes in Barking, east London, on Sunday, including the home of one of the suspected attackers who neighbours described as a married father of two young children who regularly attended two local mosques.
Five people – four men and a woman – were taken by armed police from the apartment block where he was believed to have lived. Three women were led away from the same flats and police raided a flat above a bookmakers on Barking Road.
“Work is ongoing to understand more about [the three attackers], about their connections and about whether they were assisted or supported by anyone else,” said Rowley.
He would not comment on whether or not the attackers were known to the police or intelligence services, citing ongoing efforts to confirm their identities.
However, a woman who lives in the block that was raided told the Guardian she had expressed concerns to Barking police about the man’s extremist opinions. Erica Gasparri said she had gone to the police two years ago after she feared the man was radicalising children in a local park.
“I took four photographs of him and gave them to the [local] police,” Gasparri said. “They rang Scotland Yard when I was there and said the information had been passed on. They were very concerned.
“They told me to delete the photos for my own safety, which I did, but then I heard nothing. That was two years ago. No one came to me. If they did, this could have been prevented and lives could have been saved.”
Police believe all the attackers were killed in what Rowley described as a “critical” confrontation given that the terrorists were wearing what appeared to be suicide belts, but turned out to be hoaxes.
“I am not surprised that faced with what they must have feared were three suicide bombers, the firearms officers fired an unprecedented number of rounds to be completely confident they had neutralised those threats,” he said. “I am humbled by the bravery of an officer who will rush towards a potential suicide bomber thinking only of protecting others.”
One official confirmed that police and MI5 had been reviewing a large pool of 20,000 former terrorism suspects to see if they needed to be reassessed.
Rowley said additional police, armed and unarmed, would be placed on patrol in the capital, policing plans for forthcoming events would be reviewed and “increased physical measures” would be used in order to keep the public safe on London’s bridges.
On Sunday night, 36 people remained in hospital after the attack. One of the dead was a Canadian, another was French, while the injured included several other French nationals, two Australians and two New Zealanders.
The Canadian victim was named as Christine Archibald, who worked in a homeless shelter until she moved to Europe to be with her fiance. Her family said in a statement: “We grieve the loss of our beautiful, loving daughter and sister. She had room in her heart for everyone and believed strongly that every person was to be valued and respected. She would have had no understanding of the callous cruelty that caused her death.”
Two police officers were also injured. One was an on-duty member of the British transport police, armed only with a baton, who was stabbed in the face and head when he confronted an attacker; the other was an off-duty member of the Metropolitan police. One member of the public was hit by a stray bullet and was hospitalised, but their injuries are not life-threatening.
Witnesses told the BBC that the attackers shouted “this is for Allah” before launching assaults with knives and blades said to be 10 inches long.
Elizabeth O’Neill, the mother of one of the victims, 23-year-old Daniel O’Neill, who was being treated in King’s College hospital, said: “A man ran up to him and said: ‘This is for my family, this is for Islam,’ and stuck a knife straight in him. He’s got a seven-inch scar going from his belly round to his back.”
Gabrielle Sciotto, a documentary maker who was on the scene, said the terrorists were fleeing a policeman who was chasing them out of Borough Market when they were shot.
“They ran towards me because the police officer was trying to chase them,” he said. “Suddenly lots and lots of police came from the other direction. There was a lot of shouting: ‘Stop, stop, get on the floor.’ Then the police shot them.”
A picture that has been circulating showing a man on the ground, apparently with canisters strapped to his body. Photograph: Gabriele Sciotto
Before that, members of the public had fought back. Gerard Vowls, 47, said he saw a woman being stabbed by three men in their 30s and threw chairs, glasses and bottles in an attempt to stop them. “They kept coming to try to stab me,” he said. “They were stabbing everyone. Evil, evil people.”
Sciotto took photographs of the dead terrorists, which allowed neighbours of one suspected attacker at a modern apartment complex in Barking to identify him.
“He lived here for about three years. I used to see him with his wife in a burka,” said one woman who asked not to be named. “She was pregnant but recently had a baby and they had a small boy of about two. There was a succession of unusual cars coming and going here in the last while. There was a foreign-registered BMW which was just stacking up [parking] tickets. Then in the last week the activity stopped.”
“He used to play with the children in the park and playground,” said Jamal Bafadhal, 63, a neighbour. “From the outside to look at him you thought he was a good guy. But he was a little bit removed from us.”
Salahudee Jayabdeen, 40, said the man had been forcibly removed from a local mosque, Jabir bin Zayd.
“He started questioning what the imam was saying,” he said. “He was asked to leave. He didn’t want to and was forcibly taken out.”
A mosque spokesman confirmed the incident had happened, but said the man involved was not a regular and was not known to them.
The three attacks in Westminster, Manchester, and now London Bridge have claimed 33 lives in total, and constitute the worst wave of atrocities to hit the UK since the London suicide bombings on 7 July 2005. Five other plots believed to be at an advanced stage have been disrupted since the Westminster Bridge attack on 22 March – four in London and one in Birmingham.
Following a Cobra emergency committee meeting on Sunday, attended by police and security chiefs, the prime minister, Theresa May, said that while terrorists involved in the three attacks were “not connected by common networks”, a new threat was emerging of attackers who “are inspired to attack not only on the basis of carefully constructed plots after years of planning and training … but by copying one another”.
May announced a review of counter-terrorism legislation including the possibility of longer sentences for terror offences and a campaign against the “evil ideology of Islamist extremism”.
She said “enough is enough” and reiterated her call for international agreements to “regulate cyberspace”, accusing internet companies of creating a safe space in which extremist ideology could breed. Facebook, Google and Twitter all defended their record and said they wanted to work with government to prevent terrorists using their platforms.
“We want Facebook to be a hostile environment for terrorists,” said Simon Milner, director of policy at Facebook UK, Middle East and Africa.
The mayor of London, Sadiq Khan, announced that a vigil would be held next to City Hall, by Tower Bridge, on Monday evening from 6pm “to show the world that we stand united in the face of those who seek to harm us and our way of life”.
The Joint Terrorism Analysis Centre has left the UK terror threat level at severe, meaning an attack remains “highly likely”. It was raised to critical – the highest level – in the wake of the Manchester arena bombing and was returned to severe last Saturday. ||||| CAIRO (Reuters) - Islamic State claimed responsibility for Saturday night's attack in London which killed seven people and wounded dozens, the militant group's agency Amaq said on Sunday.
"A detachment of Islamic State fighters executed yesterday's London attack," a statement posted on Amaq's media page, monitored in Cairo, said.
Three attackers rammed a van into pedestrians on London Bridge and stabbed others nearby on Saturday night before police shot them dead.
It was the third militant attack in Britain in less than three months.
Islamic State, losing territory in Syria and Iraq to an offensive backed by a U.S.-led coalition, had sent out a call on messaging service Telegram early on Saturday urging its followers to carry out attacks with trucks, knives and guns against "Crusaders" during the Muslim holy month of Ramadan.
Islamist militants, or people claiming allegiance to the group, have carried out scores of deadly attacks in Europe, the Middle East, Africa, Asia and the United States over the past two years. | ISIS has claimed responsibility for Saturday night's terrorist attack in London, though authorities say the claim has not been verified. The militant group's media agency Amaq announced late Sunday that a "detachment of Islamic State fighters" carried out the atrocity, in which seven people were killed and dozens more injured before police shot dead three attackers, reports Reuters. Police say they know the identities of the attackers who drove into pedestrians in London Bridge before running amok with knives, and they will release them "as soon as operationally possible," the BBC reports. At least 12 people were arrested in connection with the attack Sunday and two London addresses were searched early Monday. Police say they are trying to establish whether the attackers, who were wearing what turned out to be fake suicide belts, were part of a wider network. In Barking, east London, neighbors of a suspected attacker tell the Guardian that he was a married father of two children who attended a local mosque. One neighbor says she took photos of him and passed them to police because she feared he was radicalizing children in a local park. "They told me to delete the photos for my own safety, which I did, but then I heard nothing," she says. "That was two years ago. No one came to me. If they did, this could have been prevented and lives could have been saved." (Londoners fought back with bottles and chairs during the attack.) |
Since the advent of modern warfare, the presence of mines and minefields has hampered the freedom of movement of military forces. The origins of mine warfare may be traced back to crude explosive devices used during the Civil War. Since that time, the use of land mines has increased to the point where there are now over 750 types of land mines, ranging from simple pressure-triggered explosives to more sophisticated devices that use advanced sensors. It is estimated that there are about 127 million land mines buried in 55 countries. Land mines are considered to be a valuable military asset since, by slowing, channeling, and possibly killing opponents, they multiply the combat impact of defending forces. Their attractiveness to smaller military and paramilitary organizations, such as those in the Third World, is further enhanced because they do not require complex logistics support and are readily available and inexpensive. Virtually every combatant can make effective mines, and they will continue to be a viable weapon for the future. U.S. forces must be prepared to operate in a mined environment across the spectrum of military operations, from peacetime activities to large-scale combat operations.

Detection is a key component of countermine efforts. In combat operations, the countermine mission revolves around speed and mobility. Mines hinder maneuver commanders’ ability to accomplish their missions because unit commanders need to know where mines are located so they can avoid or neutralize them. In peacekeeping operations, mines are used against U.S. forces to slow or stop daily operations. This gives insurgents a way to control the traffic flow of defense forces and to affect the morale of both the military and the civilian population.

Since World War II, the U.S. military’s primary land mine detection tool has been the hand-held metal detector used in conjunction with a manual probe. This method is slow, labor intensive, and dangerous because the operator is in close proximity to the explosive. The Army has also recently acquired a small number of vehicle-based metal detectors from South Africa to be used in route clearing operations and to be issued to units, as needed, on a contingency basis. Metal detectors are also sensitive to trace metal elements and debris, which are found in most soils. This limitation leads to a high level of false alarms since operators often cannot distinguish between a metal fragment and a mine. False alarms translate into increased workload and time because each detection must be treated as if it were an explosive. The wide use of mines with little to no metal content also presents a significant problem for metal detectors. For example, according to DOD intelligence reports, about 75 percent of the land mines in Bosnia are low-metallic, and some former Yugoslav mines were known to have been manufactured with no metal content at all. In fact, the Army has stated that the inability to effectively detect low-metal and nonmetallic mines remains a major operational deficiency for U.S. forces.

Given the limitations of the metal detector, DOD has been conducting research and development since World War II to improve its land mine detection capability. For example, during the 1940s the United States began research to develop a detector capable of finding nonmetallic mines. Since then, DOD has embarked on a number of unsuccessful efforts to develop a nonmetallic detector and to field a vehicle-based land mine detector.
DOD now has new programs to develop a vehicle-based detector and an improved hand-held detector. DOD expects to field these new systems, both with nonmetallic capability, within the next 3 years. Airborne detectors are also being developed by both the Army and the Marine Corps for reconnaissance missions to locate minefields.

Countermine research and development, which includes land mine detection, is funded by a number of DOD organizations and coordinated through a newly established Unexploded Ordnance Center of Excellence. The Army is designated as the lead agency for DOD’s countermine research, with most of its detection research funding being managed by the Night Vision and Electronic Sensors Directorate (NVESD) and the Project Manager for Mines, Countermine and Demolitions. The Marine Corps and the Navy are also supporting a limited number of land mine detection research efforts. Additionally, the Defense Advanced Research Projects Agency (DARPA) has been involved with a number of land mine detection programs throughout the years.

In fiscal years 1998 through 2000, DOD funded over $360 million in countermine-related research and development projects, of which approximately $160 million was aimed specifically toward land mine detection. DOD sponsored an additional $47 million in research during this period for unexploded ordnance detection (which includes land mines) in support of other DOD missions such as humanitarian demining and environmental cleanup. Because of the basic nature of detection, these other efforts indirectly supported the countermine mission.

Overall, DOD funding levels for countermine research have been sporadic over the years. Major countermine research initiatives and fieldings of new detectors have coincided with U.S. military actions, such as the Korean War, the Vietnam War, Operation Desert Storm, and the recent peacekeeping operations in the Balkans. Following each influx of countermine research funding has been a corresponding lull in activity. A countermine program assessment conducted for the Army in 1993 concluded that whereas mine developments have benefited from the infusion of leap-ahead technologies, countermine tools have been essentially product-improved counterparts of World War II ideas. However, according to DOD, countermine development is a slow process because of the technological challenges inherent to land mine detection. Not only must a detector be able to find mines quickly and safely through a large variety of soils and at varying depths in battlefield conditions with clutter and even countermeasures, but it must also be able to discriminate between mines (which vary considerably in size, shape, and component materials) and other buried objects.

DOD’s ability to develop meaningful land mine detection solutions is limited by the absence of an effective strategy to guide its research and development program. DOD maintains frequent contact with the external research community to constantly learn about new detection approaches and technologies. However, it has not developed a comprehensive set of mission needs to guide its research programs and does not systematically evaluate the broad range of potential technologies that could address those mission needs. In addition, its resources for conducting critical basic research for addressing fundamental science-based questions are threatened.
Lastly, because DOD’s testing plans do not require adequate testing of land mine detectors in development, the extent of performance limitations in the variety of operating conditions under which they are expected to be used will not be fully understood.

DOD has not developed a comprehensive and specific set of mission-based criteria that reflect the needs of U.S. forces, upon which to base its investments in new technologies in land mine detection. Although DOD’s overall acquisition process sets out a needs-based framework to conduct research and development, DOD has not developed a complete statement of needs at the early stages of research when technologies are first investigated and selected. The process calls for an evolutionary definition of needs, meaning that statements of needs start in very general terms and become increasingly specific as programs mature. Early stages of research are generated from and guided by general statements of needs supplemented through collaboration between the combat users and the research communities. In the case of land mine detection, the Army stated a general need of having its forces be able to operate freely in a mined environment. This need has received a broad definition, as “capabilities for rapid, remote or standoff surveillance, reconnaissance, detection, and neutralization of mines.” Further specification of the need is left to representatives of the user community and researchers to determine. It is only with respect to specific systems at later stages of the acquisition cycle that more formalized and specific requirements were established to guide decisions about further funding.

Although we found that a comprehensive set of specific measurable criteria representing mission needs had not been developed, we did find some specific criteria in use to guide research efforts, such as rates of advance and standoff distances. However, a number of these criteria were established by DOD to reflect incremental improvements over the current capabilities of technologies rather than to reflect the optimal needs of combat engineers. For example, the Army was using performance goals to guide its forward-looking mine detection sensors program. The objective of this program was to investigate and develop mine detection technologies to increase standoff and speed for route clearance missions beyond current capabilities. Performance goals included developing a system with a standoff of greater than 20 meters with a rate of advance of 20 kilometers per hour. However, these goals were primarily driven by the capabilities and limitations of the systems being considered. According to an Army researcher, they were based on what existing technologies could achieve in a limited time period (3 years) and not on what the combat engineers would ultimately need. During our assessment of technologies, which is described in the next section of this report, we found that the standoff desired by combat engineers was almost 50 meters for route clearance missions with a rate of advance of 40 kilometers per hour.

One barrier to DOD’s developing a comprehensive set of mission needs is the existence of large gaps in information about target signature characteristics and environmental conditions. For example, significant information gaps exist about the rate at which land mines leak explosive vapors and the environmental pathways that the vapors take once they are released.
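The mismatch between capability-driven goals and mission-driven needs, illustrated by the route clearance example above, can be expressed as a simple check. The sketch below is hypothetical: the standoff and rate-of-advance figures reuse the route clearance numbers cited above, but the data structures and names are illustrative assumptions, not anything drawn from DOD documents.

```python
# Hypothetical sketch: checking a capability-driven program goal against a
# mission-driven user need. The figures are the route clearance numbers cited
# above; everything else is illustrative.

ROUTE_CLEARANCE_NEED = {"standoff_m": 50, "rate_of_advance_kmh": 40}  # combat engineer need
PROGRAM_GOAL = {"standoff_m": 20, "rate_of_advance_kmh": 20}          # technology-driven goal

def shortfalls(goal, need):
    """Return each criterion where the goal falls short of the mission need."""
    return {k: {"goal": goal[k], "need": v} for k, v in need.items() if goal[k] < v}

print(shortfalls(PROGRAM_GOAL, ROUTE_CLEARANCE_NEED))
# -> both criteria fall short: standoff (20 vs 50) and rate of advance (20 vs 40)
```

Even a comparison this simple makes visible how a program can meet its own goals while still leaving the mission need unmet.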
Knowledge gaps about soil characteristics in future battlefields similarly limit DOD’s ability to fully specify mission needs and knowledgeably select among competing technologies. They also reduce the pace of technological innovation by making it difficult for researchers to predict how their devices will function. DOD is currently funding research to answer several important questions in these areas. But, as discussed below, continued DOD funding is threatened.

Just as DOD has failed to adequately specify countermine mission needs for assessing promising technologies, we found that it had not systematically assessed the strengths and the limitations of underlying technologies to meet mission needs. DOD employs a number of mechanisms to obtain ideas for promising land mine detection solutions. These include attending and sponsoring technical conferences, arranging informal system demonstrations, convening workshops, and publishing formal solicitations for research proposals. However, DOD does not systematically evaluate the merits of the wide variety of underlying technologies against a comprehensive set of mission needs to identify the most promising candidates for a focused and sustained research program. Instead, it generally evaluates the merits of specific systems proposed by developers against time-driven requirements of its research programs.

One way DOD identifies land mine detection ideas is through sponsoring and attending international technical conferences on land mine detection technologies. For example, it sponsors an annual conference on unexploded ordnance detection and clearance, at which countermine-related detection is a major focus. Additionally, DOD research officials have chaired mine detection conferences within annual sensing technology symposia of the International Society for Optical Engineering (SPIE) since 1995. The most recent SPIE conference on mine detection, held in April 2000, included over 130 technical presentations by researchers from DOD and other organizations worldwide. SPIE provides DOD land mine research officials an opportunity to network with researchers working in different areas of sensing technologies. DOD also identifies new technologies through reviewing researchers’ ideas outside of the formal solicitation process by occasionally allowing researchers to demonstrate their ideas at DOD facilities.

Technical workshops are another mechanism used by DOD to identify new ideas. For example, DOD’s Unexploded Ordnance Center of Excellence held a workshop, in part, to identify new land mine detection technologies in 1998. This workshop, largely attended by DOD staff and contractors, explored technological approaches that were not receiving a lot of attention. The report of the workshop pointed out several potential paths for future investment for land mine detection.

Of all the mechanisms DOD uses to identify new technologies, issuing announcements in the Commerce Business Daily is its principal means for communicating its research needs to the outside research community and receiving ideas and approaches to improve land mine detection capabilities. In our interviews with non-government researchers, we found that they use DOD’s announcements as their principal means for familiarizing themselves with DOD’s needs. In connection with our efforts to identify candidate technologies for land mine detection, we searched databases, such as the Commerce Business Daily, containing DOD announcements.
We found that the Army placed 20 of the 25 announcements we identified from 1997 through 2000. NVESD accounted for 17 of the solicitations. The announcements reflected a concentration on a narrow set of approaches: “Countermine research and development detection funding is concentrated on four primary technologies…There has been increasing emphasis on radar and active electromagnetics as the technologies showing the greatest short term promise for the reliable detection of land mines” (emphasis added).

At NVESD, which has the largest share of countermine detection research, programs are generally time-limited. As a result, evaluations of proposals are largely based on the maturity of the idea. An example is the Future Combat Systems (FCS) Mine Detection and Neutralization program, which is funded at about $21 million over 3 years. This program is designed to have a system ready for testing by fiscal year 2002, only 3 years after the program started. This pace is necessary to meet the Army’s overall goals for fielding FCS. NVESD officials told us that this time constraint means they are more apt to fund the more mature ideas. This time constraint could therefore result in not selecting potentially promising technologies that might involve more risk. Although NVESD officials stated that they are receptive to less developed ideas that show promise, the requirements of the program may make this difficult to do.

We found that DOD did not supplement its frequent announcements with periodic reviews of the underlying technologies that the responses were based on. Such a review would evaluate their future prospects and could suggest a long-term sustained research program in a technological area that required several thrusts, whereas the individual project proposals might appear to have doubtful value in themselves. In a similar vein, in 1998 a Defense Science Board task force that evaluated DOD’s efforts in a closely related area of research and development also recommended a two-track approach for research and development. The Board found that “there has been too little attention given to some techniques which may provide capabilities important for particular sites” and recommended that DOD institute a program parallel to the “baseline” program that “would involve an aggressive research and development effort … to explore some avenues which have received too little attention in the past.”

Numerous questions about the physics-based capabilities of the various detection technologies make it difficult, if not impossible, to evaluate them against mission needs at the present time. Although DOD has invested funds in basic research to address some of its questions, its efforts are expected to end after fiscal year 2001. In addition to providing support to technology evaluations, a sustained basic research program is needed to support DOD’s ongoing efforts to develop better systems. Independent evaluations, as well as our assessment of candidate land mine detection technologies, which is presented in the next section of this report, have revealed many uncertainties about the strengths and limitations of each of the applicable technologies with respect to addressing countermine mission needs. In addition, DOD has noted a number of fundamental science-based questions regarding detection technologies. For example, 3 years ago the Center of Excellence, through a series of workshops, identified 81 broad research needs critical to improving detection capabilities.
Examples of research needs included an improved understanding of the impact of environmental conditions on many of the technologies examined and better characterization of clutter, which contributes to the problem of false alarms currently plaguing a number of technologies. Some of the needs have been addressed since the workshops. For example, the Center sponsored follow-on workshops and independent studies of radar and metal detectors to address research questions specific to these technologies. However, DOD officials told us that the broad set of needs has not been systematically addressed and that many questions still remain.

Also, over the past 3 years, DOD has invested about $4 million annually in basic research directed at answering fundamental science-based questions supporting land mine detection. This work has been managed by the Army Research Office, with funding provided by both the Army and DOD through its Multidisciplinary University Research Initiative. However, this research program is expected to end after fiscal year 2001. According to DOD, this basic research has been valuable to its land mine detection program. For example, the 1999 Center of Excellence annual report states that the basic research program has improved physics-based modeling so that it is now possible to examine realistic problems that include soil interactions with buried targets. The results of this modeling have yielded insights into limitations of sensor performance in various environments. The report concludes that this modeling work needs to be continued and expanded to systematically study soil effects. In fact, the report recommends continued investment in basic research to increase understanding of phenomenology associated with detection technologies, stating that the greatest value of basic research comes from a sustained effort.

DOD’s policy is that systems be tested under those realistic conditions that most stress them. According to DOD, this testing is to demonstrate that all technical risk areas have been identified and reduced. However, because of questions about the physics-based strengths and weaknesses of land mine detection technologies, there is uncertainty about how well the detectors currently in development will function in the various environmental conditions expected in countermine operations. Some of these questions could be answered through thorough developmental testing. However, DOD’s testing plans do not adequately subject its detectors to the multitude of conditions necessary to address these performance uncertainties.

We reviewed the Army’s testing plans for two land mine detection systems currently in development to determine whether the test protocols were designed on a framework of identifying and minimizing technical risks stemming from the uncertainties detailed above. These are the Handheld Stand-off Mine Detection System (HSTAMIDS) hand-held detector and the Ground Stand-off Mine Detection System (GSTAMIDS) vehicle-based detector. We found that the testing plans were not designed around the breadth of environmental conditions expected for those systems or around anticipated limitations and uncertainties. Rather, testing is to be conducted at only a limited number of locations and under ambient climatic conditions. As such, knowledge about the performance of these detectors in the variety of soil types and weather conditions expected in worldwide military operations is likely to be limited.
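A testing plan designed around expected conditions would enumerate stressing combinations explicitly rather than relying on whatever weather occurs during the test window. The sketch below is a hypothetical illustration: the condition values are examples drawn from conditions discussed in this report, and nothing here is taken from an actual DOD test plan.

```python
# Hypothetical sketch: enumerating an environmental test matrix so that
# stressing combinations are scheduled deliberately instead of occurring only
# by chance (e.g., if it happens to rain just before an ambient-condition test).
from itertools import product

soil_types = ["sand", "clay", "iron-rich laterite"]
soil_moisture = ["dry", "moist", "saturated"]
surface_cover = ["bare", "vegetated", "snow"]

test_matrix = list(product(soil_types, soil_moisture, surface_cover))
print(f"{len(test_matrix)} soil/moisture/surface combinations")  # 27

# Flag the combinations expected to stress a ground penetrating radar sensor.
gpr_stress_cases = [case for case in test_matrix if case[1] == "saturated"]
for case in gpr_stress_cases:
    print("schedule stress test:", case)
```

Even a small matrix like this makes explicit which stressing cases a plan covers, instead of leaving them to ambient conditions on the day of the test.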
For example, the performance of ground penetrating radar, a primary sensor in both the HSTAMIDS and the GSTAMIDS, is questionable in saturated soils, such as might occur after a heavy rain. However, neither the HSTAMIDS nor the GSTAMIDS testing plans specifically call for testing in wet conditions. The only way this condition would be tested is if there is heavy rain on or just before the days that testing is to occur.

Incomplete knowledge of the properties of candidate land mine detection technologies makes it difficult to assess whether DOD is investing in the most promising technologies to address countermine detection missions. Because DOD had not performed a systematic assessment of potentially applicable technologies against military countermine mission needs, we performed our own evaluation. Through a broad and systematic review of technological candidates, we identified nine technologies with potential applicability, five of which DOD is currently exploring. However, insufficient information about these nine technologies prevented us from definitively concluding that any could address any of the missions. Additionally, because of these uncertainties, we could not conclude whether a “sensor fusion” approach involving a combination of two or more of the technologies would yield an adequate solution.

We conducted a broad search for potential technological candidates for solutions to the countermine problem, and then evaluated the candidates against a set of mission-based criteria to determine which candidates were promising for further research. A more detailed description of our methodology is presented in appendix I. For criteria, we identified operational needs for each of five different types of critical countermine missions: (1) breaching, (2) route clearance, (3) area clearance, (4) tactical reconnaissance, and (5) reconnaissance supporting stability and support operations during peacetime. A more detailed description of these missions is presented in appendix II. We then developed a set of technical criteria to specifically define detection requirements for each mission. The criteria we developed were based on target parameters, operational parameters, and environmental parameters.

Target parameters describe the physical characteristics of land mines and the methods by which they are emplaced. These include such characteristics as land mine sizes and shapes, metallic content, explosive content, burial depths, and the length of time mines have been buried. Operational parameters describe the operational needs of the military as they relate to countermine operations involving mine detection. These factors include speed of advance, detection distance from the mine (called stand-off), and the level of precision in identifying the exact position of hidden mines. Target and operational parameters can vary among the five types of missions. Environmental parameters, unlike target and operational parameters, do not vary based on the type of mission. Rather, environmental parameters are site-specific. They are natural and man-made conditions in and around the battlefield that affect mine detection. These parameters cover a wide array of atmospheric, surface, and sub-surface environmental conditions, such as air temperature, dust or fog obscuration, surface snow, varying soil types, and post-blast explosive residue.
A more detailed description of the criteria used in our evaluation is presented in appendix II. Our search yielded 19 technological candidates, which span a wide variety of different physical principles and are shown in figure 1. As shown in figure 1, the majority (15) of the technologies use energy from the electromagnetic (EM) spectrum, either to detect emissions from the mine or to project energy at the mine and detect a reflection. The energies used in these technologies span the entire EM spectrum, from radio waves (characterized by long wavelengths/low frequencies) to gamma rays (short wavelengths/high frequencies). Of the remaining four technologies not directly utilizing EM energy, two (biosensors and trace vapor detectors) operate by using a chemical or biological reaction to detect explosive vapor that is emitted from mines into the surrounding soil or the air directly above the ground. Another is based on sending neutrons toward the target. The last technology works by sending acoustic or seismic energy toward a target and receiving an acoustic or seismic reflection. A more detailed discussion of these 19 technologies is included in appendix III.

When we evaluated the 19 technologies against the operational parameters, we found that 10 had one or more physics-based limitations that would prevent them from achieving any of the five countermine missions by themselves (see table 1). As can be seen from table 1, standoff and speed are the most challenging attributes of a detection system that would meet DOD’s countermine mission needs. Nine technologies failed to meet the standoff criterion, and four failed to meet the speed criterion for any of the five missions. We judged that the remaining nine technologies were “potentially promising” because we did not conclusively identify any definitive operational limitations to preclude their use in one or more countermine missions. For all nine of these technologies, however, our ability to determine their operational capabilities was reduced by significant uncertainty about their true limits. Some, such as ground penetrating radar and acoustic technologies, have been studied for many years. Yet continuing improvements to the sensors and to the critical mathematical equations that interpret the raw data coming from the sensors made it difficult for us to predict the absolute limits of their capabilities. Our inability to draw a conclusion about these technologies is supported by reports from the Institute for Defense Analyses and other organizations that have found similar uncertainty about their prospects. The critical issue for radar is whether it will ever be capable of doing a good enough job discriminating between targets and natural clutter to allow an acceptable rate of advance. The issue of clutter is the fundamental problem for many sensor approaches.

Our uncertainty about three technologies (terahertz imaging, x-ray fluorescence, and electromagnetic radiography) was different because their capabilities were not as well-studied. As a result, there was not enough information for us to determine whether they could meet mission-based criteria. In addition, DOD officials told us that they believe that two of them (terahertz imaging and x-ray fluorescence) have fundamental limitations that rule them out for countermine missions. They claimed that terahertz energy is unable to penetrate deep enough through the soil and that x-ray fluorescence has inadequate standoff. However, we were not able to resolve these issues.
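The kind of operational screening described above can be illustrated with a short sketch. The route clearance figures reuse numbers cited earlier in this report; the candidate entries and their capability ceilings are invented placeholders (a real assessment would draw them from measured physics-based limits), and unknown ceilings are treated the way the uncertainty above suggests: they cannot rule a candidate out.

```python
# Hypothetical sketch of screening candidate technologies against per-mission
# operational criteria: any known physics-based ceiling below the mission need
# rules a candidate out, while candidates with no known disqualifier remain
# "potentially promising". All capability numbers are invented placeholders.

MISSION_NEEDS = {
    "route clearance": {"standoff_m": 50, "rate_kmh": 40},
    "area clearance":  {"standoff_m": 0.5, "rate_kmh": 5},
}

CANDIDATES = {  # None = upper limit unknown (a source of the uncertainty noted above)
    "metal detector":           {"standoff_m": 0.5, "rate_kmh": 5},
    "ground penetrating radar": {"standoff_m": None, "rate_kmh": None},
}

def screen(ceilings, needs):
    """Rule a candidate out only on a known ceiling below the mission need."""
    for criterion, need in needs.items():
        ceiling = ceilings.get(criterion)
        if ceiling is not None and ceiling < need:
            return f"ruled out ({criterion}: {ceiling} < {need})"
    return "potentially promising"

for tech, ceilings in CANDIDATES.items():
    for mission, needs in MISSION_NEEDS.items():
        print(f"{tech} / {mission}: {screen(ceilings, needs)}")
```

Note how the sketch mirrors the report's finding: a technology can fail one mission (the hypothetical metal detector for route clearance) while remaining viable for another, and a technology with unknown limits cannot be confidently ruled in or out.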
We believe that the lack of consensus about the capabilities of most of the nine technologies is due, in part, to a basic lack of knowledge about the upper limits of their capabilities. The only way to determine whether these technologies can be employed in a detector that meets countermine mission needs is through a systematic research program. DOD is currently investing in five of the nine technologies (see table 2), and it recently stopped funding a project in one of them (passive millimeter wave).

In our review of the nine technologies’ ability to operate in different environmental conditions, we could not, with certainty, identify absolute limitations for four of them in expected environmental conditions. However, all nine have uncertainties about the range of environmental conditions in which they can adequately perform. The most significant uncertainties relate to performance in various surface and subsurface conditions, such as water-saturated soil and differing soil types. In most cases, these uncertainties have not been adequately studied. Examples of environmental limitations and uncertainties for the nine technologies are presented in table 3.

The uncertainties about the various detection technologies also prevented us from determining if the technologies could be combined to meet mission needs. While most of the 19 technologies cannot meet operational and environmental mission needs, in theory a combination of different sensors might solve the countermine problem. This type of arrangement, known as sensor fusion, combines different approaches to compensate for their individual limitations. Canada and the Army are developing systems that use some form of sensor fusion. Canada’s Defense Research Establishment in Suffield, Alberta, has produced a multisensor land mine detector that employs thermal neutron activation (TNA), a type of neutron activation analysis, as a confirmation detector in a system that also employs a metal detector, infrared (IR), and ground penetrating radar to scan for mines. The TNA sensor is used to confirm or reject suspect targets that the three scanning sensors detect. The Army is developing a detector (HSTAMIDS) that uses sensor fusion to take advantage of the strengths of both metal detector and radar approaches. In this configuration, the radar is used to improve the metal detector’s performance with mines that employ small amounts of metal. However, neither of these systems (Canada’s and the Army’s) will meet the countermine mission needs stated previously because their component sensors are limited. Any detection system utilizing sensor fusion would somehow need to overcome limitations, such as standoff and speed, in the underlying technologies. As pointed out previously, the capability of the identified technologies to meet mission needs is uncertain. Another consideration in developing a sensor fusion solution is that it would require significant advances in signal processing.

It is unclear whether DOD’s research investments are in those technologies that, either individually or in combination, have the greatest chance of leading to solutions that address the U.S. military’s countermine mission needs, given the lack of knowledge about the strengths and the limitations of the various detection technologies. DOD’s strategy of pursuing incremental improvements over current detectors may produce systems that are better than those fielded today.
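The confirmation-detector arrangement described above for the Canadian system can be reduced to a toy decision rule. This is a sketch of the logic only: the sensor names, threshold, and readings are invented, and a fielded system would fuse raw sensor data with far more elaborate signal processing.

```python
# Toy sketch of confirmation-style sensor fusion: fast scanning sensors
# nominate suspect locations, and a slower confirmation sensor (a TNA-like
# explosive-signature detector in the system described above) adjudicates
# each one. All names and values are invented for illustration.

def fused_decision(scan_alarms, confirmation_score, threshold=0.8):
    """Declare a mine only when a scanning sensor alarms AND the
    confirmation sensor's explosive-signature score clears the threshold."""
    if not any(scan_alarms.values()):
        return "no suspect target"
    if confirmation_score >= threshold:
        return "mine confirmed"
    return "suspect rejected (likely clutter)"  # fewer false alarms

# Metal fragment: the metal detector alarms, but the explosive signature is weak.
print(fused_decision({"metal": True, "radar": False, "ir": False}, 0.12))
# Buried mine: two scanning sensors alarm and the explosive signature is strong.
print(fused_decision({"metal": True, "radar": True, "ir": False}, 0.93))
```

As the toy rule suggests, this kind of fusion can suppress false alarms, but it cannot supply standoff or speed that the component sensors themselves lack.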
However, without a systematic and comprehensive evaluation of potential technologies based on a complete set of mission-based needs, DOD does not know if it has invested its funds wisely to address the needs of the military.

DOD's testing plans for its land mine detection systems in development do not provide assurance that these systems will perform adequately under most expected conditions. Demarcating the acceptable operating conditions of a system is a critical part of research and development. This is important not only for determining whether developmental systems will meet mission needs but also for defining the operational limitations so that users can make informed decisions about their use. Therefore, systems should be tested under those conditions that most stress them. Given the numerous environmental and climatic conditions that can be expected to affect the performance of any land mine detector, a robust program of developmental testing is essential to fully understand the strengths and limitations in performance under realistic conditions. Failing to test under a plan specifically designed around the expected environmental and climatic conditions of use, as well as the anticipated limitations of the technologies, increases the risk of fielding a system that performs poorly.

To improve the Department's ability to identify and pursue the most promising technologies for land mine detection, we recommend that the Secretary of Defense (1) direct the establishment of a long-range research program to periodically evaluate all applicable land mine detection technologies against a complete set of mission-based criteria and (2) provide a sustained level of basic research to sufficiently address scientific uncertainties. Mission-based criteria could include target signatures, operational requirements, and expected environmental conditions. We also recommend that the Secretary of Defense require the services to provide testing conditions for land mine detection systems in development that better reflect the environments in which they will likely have to operate.

DOD provided written comments on a draft of this report (see app. IV). DOD concurred with each of our three recommendations and augmented its concurrence with additional comments. DOD's comments describe and illustrate the lack of a focused and systematic approach underlying its research programs for land mine detectors, and it is not clear from DOD's response what measures, if any, it plans to take to implement our recommendations. In responding to our first recommendation, DOD states that the Army pursues a systematic research, development, and acquisition program to address land mine detection needs. However, we found that its approach lacked elements critical to the success of this program, such as the use of a comprehensive set of mission-based criteria and a systematic evaluation of the capability of competing alternative technologies against those criteria. In fact, the Army Science Board study cited by DOD in its comments also recommended that "operational needs and priorities need to be clearly thought through and quantified." Nothing in DOD's comments is directed toward bridging these gaps. Therefore, we continue to believe that the changes we have recommended are required. Regarding our second recommendation, DOD describes the benefits provided by its current basic research program but does not commit to continuing funding for basic research for land mine detection after this fiscal year.
As we discuss in this report, we believe it is extremely important for DOD to continue a sustained program of basic research to support its land mine detection program, given the extent of the uncertainties surrounding the various technologies. This point was also made by the Army Science Board panel. In response to our third recommendation, DOD states that the testing plans we reviewed were not detailed enough to allow us to reach our conclusions, and it describes certain activities it is engaged in to incorporate realistic environmental conditions into its testing programs for HSTAMIDS and GSTAMIDS. However, we believe that the described activities further illustrate the lack of a systematic strategy to guide testing during product development. DOD acknowledged the threat to the performance of metal detectors from soils that are rich in iron oxide and pointed out that it is seeking to identify a "suitable site to test the HSTAMIDS system in unique soil environments such as laterite." We feel that this is an important step in the development of this system. But we believe that this step, along with tests in saturated soils and snowy conditions, should have been taken much earlier, before a large commitment had been made to this system. Testing programs should also be driven by a systematic mission-based evaluation framework. Such an approach should delineate, at the earliest stages of development, the expected environmental operating conditions based on mission needs. An analysis should then identify for testing those conditions that pose substantial challenges or uncertainties for detector performance. Without such a framework, there is a risk that uncertainties about the performance of these systems will remain after they have been fielded and that significant testing will effectively be conducted by users rather than by testers.

We are sending a copy of this report to the Honorable Mitchell E. Daniels, Jr., Director, Office of Management and Budget; the Honorable Donald H. Rumsfeld, Secretary of Defense; the Honorable Joseph W. Westphal, Acting Secretary of the Army; the Honorable Robert B. Pirie, Jr., Acting Secretary of the Navy; General James L. Jones, Commandant of the Marine Corps; and other interested congressional committees and parties. We will also make copies available to others upon request. Please contact me at (202) 512-2700 if you or your staff have any questions concerning this report. Major contributors to this report were Kwai-Cheung Chan, Dan Engelberg, Cary Russell, and John Oppenheim.

To determine whether the Department of Defense (DOD) employs an effective strategy for identifying the most promising land mine detection technologies, we reviewed literature related to research program design and met with experts in this area. We interviewed officials from the Army, the Navy, the Marine Corps, and the Defense Advanced Research Projects Agency (DARPA) responsible for running land mine detection research programs. We also reviewed DOD policy and doctrine related to this area, including the Defense Technology Area Plan, the Army Science and Technology Master Plan, and Countermine Modernization Plans. To determine whether DOD is investing in the most promising technologies to fully address mission needs, we evaluated the set of potential land mine detection technologies identified through a systematic search against a set of criteria derived from mission needs. We first designed a framework for evaluating potential technologies.
This framework assisted in identifying the most promising technologies and research gaps for further investigation. Through our discussions with DOD, we learned that such a framework had not previously been created. Because our framework was mission directed, we identified a set of critical countermine missions that involve detecting land mines by systematically interviewing Army and Marine Corps combat engineers to determine how countermine activities fit into a variety of combat scenarios and by reviewing Army and Marine Corps doctrine that discusses mine threats to U.S. forces and corresponding countermine tactics. Next, through a review of documents and discussions with Army and Marine Corps combat engineers, we identified technical criteria that define detection requirements for each mission. Officials representing the two organizations responsible for combat engineer requirements, the Army Engineer School and the Marine Corps Combat Development Command, reviewed and agreed with the set of criteria we developed. The critical missions and the set of criteria are discussed in appendix II.

We then identified conventional and alternative technologies that could have value in performing these land mine detection missions. We distinguished between technologies and systems: "technologies are approaches by which principles of physics are exploited to achieve tasks," while systems are implementations of technologies. By developing a methodology based on identifying and characterizing technologies, rather than systems, we sought to go beyond the strengths and limitations of current devices and thereby provide information on which to base a future-oriented research program. We identified candidate technologies in three ways. First, we reviewed literature on land mine detection and interviewed researchers and other experts in the land mine detection field. Second, we interviewed experts in related fields, such as geophysics and civil engineering, that involve similar activities (i.e., looking for hidden subsurface objects); our goal there was to find out whether those fields use any tools that DOD has not explored. Third, we reviewed proposals that had been submitted to DOD in response to recent solicitations for funding. The technologies we identified are presented in appendix III.

We evaluated each of the identified technologies against the set of mission criteria to determine which were promising for land mine detection. We identified "potentially promising" technologies by eliminating those with limitations that would preclude their meeting mission goals. In performing this evaluation, we attended conferences and workshops, reviewed published and unpublished technical literature, interviewed developers of land mine detection systems, and contracted with an expert in the field of land mine detection technologies to review our conclusions. We also obtained comments from technical experts from the Army. Finally, we determined which of the "potentially promising" technologies DOD was exploring by reviewing agency documents and interviewing DOD officials. We performed our work from November 1999 to February 2001 in accordance with generally accepted government auditing standards. Using our methodology, we identified land mine detection requirements.
The five critical countermine missions that involve land mine detection are (1) breaching, (2) route clearance, (3) area clearance, (4) tactical reconnaissance, and (5) stability and support operations (SASO) reconnaissance. Breaching is the rapid creation of safe paths through a minefield to project combat forces to the other side; it is usually conducted while the force is under enemy fire. Route clearance is the detection and removal of mines along pre-existing roads and trails to allow for the passage of logistics and support forces. Area clearance is the detection and removal of mines in a designated area of operations to permit use by military forces. Tactical reconnaissance is performed to identify mine threats just prior to and throughout combat operations. SASO reconnaissance is used to assist in making decisions about where to locate forces and in planning area clearance operations. A principal difference between tactical and SASO reconnaissance is the time required for performing the mission: because SASO reconnaissance involves peacetime operations, the speed at which it is conducted is not as critical as that for tactical reconnaissance.

We developed a set of technical criteria to specifically define detection requirements for each mission and grouped the criteria into target parameters, operational parameters, and environmental parameters. Target parameters describe the physical characteristics of land mines and the way they are emplaced. Given that there are over 750 types of land mines available worldwide, the target characteristics vary considerably. The parameters we identified are presented in table 4. Operational parameters describe the operational needs of the military as they relate to countermine operations involving mine detection; our set of operational parameters is also presented in table 4.

One critical operational criterion for a mine detector is speed of advance. For time-critical missions, like breaching and route clearance, a detector needs to function effectively at the military forces' operational speeds. The ability of a detector to keep up with the required rate of advance depends on two factors: its scanning speed (the time to search a given area for mines) and its false alarm rate, the number of times a detector indicates the presence of a mine where one does not exist. False alarms reduce the rate of advance because combat forces must stop to confirm whether an alarm is actually a mine. Another key operational parameter is standoff, the distance a mine detector (and its operator) can be from a mine and still be able to detect it. The minimum standoff required is the lethal radius of a mine, which is about 35 meters for an antitank-sized mine. This distance requirement increases as speed increases, to allow for reaction time once an alarm is sounded. In cases of minefield reconnaissance performed by airborne detectors, the standoff required is the minimum altitude necessary to keep the aircraft safe from enemy ground fire. One final operational parameter is the ability of a detector to accurately locate the position of a buried mine; this is important for reducing both the time necessary to remove or otherwise neutralize the mine and the safety risk associated with manually probing the ground to find the exact mine position. (A simple numeric sketch of these speed and standoff relationships follows below.) The environmental parameters we identified are presented in table 5.
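The speed and standoff relationships just described lend themselves to simple arithmetic, sketched below in Python. Only the 35-meter lethal radius comes from this report; the scan rate, lane width, false alarm rate, and confirmation time are invented, illustrative values.

```python
def advance_rate_m_per_s(scan_rate_m2_s, lane_width_m,
                         false_alarms_per_m2, secs_per_alarm):
    """Metres of forward progress per second: the time to scan each
    metre of lane plus the expected time spent stopping to confirm
    false alarms in that metre."""
    scan_time = lane_width_m / scan_rate_m2_s                 # s per metre
    alarm_time = false_alarms_per_m2 * lane_width_m * secs_per_alarm
    return 1.0 / (scan_time + alarm_time)

def required_standoff_m(speed_m_per_s, reaction_s, lethal_radius_m=35):
    """Lethal radius (about 35 m for an antitank-sized mine, per the
    report) plus the distance covered during the operator's reaction."""
    return lethal_radius_m + speed_m_per_s * reaction_s

# Illustrative numbers: a 4 m lane scanned at 8 m^2/s, with 0.05 false
# alarms per m^2, each costing 60 s to confirm.
v = advance_rate_m_per_s(8, 4, 0.05, 60)
print(f"{v:.2f} m/s advance, {required_standoff_m(v, 5):.1f} m standoff")
# 0.08 m/s advance, 35.4 m standoff
```

Even with these made-up numbers, the confirmation time for false alarms dominates the raw scanning time, which is the report's point: false alarm rate, not sensor sweep speed, tends to set the achievable rate of advance.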
The environmental parameters are natural and man-made conditions in and around the battlefield that affect mine detection; we grouped them into atmospheric, surface, subsurface, and other environmental conditions. While the target and operational parameters can vary among the five mission types, the environmental parameters are not mission-specific. Rather, they are site-specific.

In this appendix, we briefly describe the land mine detection technologies and projects that we identified through our methodology. We grouped the individual projects and lines of effort on the basis of their underlying technological approach. Our grouping resulted in 19 distinct approaches. These technologies vary in their maturity. Some, such as metal detectors and radar, have been explored by many researchers for many years. Much less is known about others, such as electromagnetic radiography and microwave enhanced infrared. Still others, such as x-ray fluorescence, have been used in other applications but have received relatively little attention thus far in this one.

The technologies use different principles. Fifteen of the 19 technologies are based on receiving electromagnetic (EM) energy from the target. Eleven of the 15 EM technologies are based on sending energy into the ground. The remaining four EM technologies are "passive electromagnetic"; they are based on receiving energy that is emitted by the land mine. These four technologies are similar in principle; their relative strengths and limitations with respect to addressing countermine missions arise from the different types of energy that they receive. The final 4 of the 19 technologies are primarily not electromagnetic. Two capture and analyze the explosive that the mine releases into the ground or air, one is based on acoustic or seismic energy reflected off of the target, and one is based on sending neutrons toward the target.

The 11 active EM technologies operate under three different approaches (see fig. 2): four (radar, terahertz imaging, LIDAR, and x-ray backscatter) send EM energy into the ground and detect its reflection off the mine; five (electromagnetic radiography, gamma ray imaging, microwave enhanced infrared, quadrupole resonance, and x-ray fluorescence) send EM energy into the ground to create an effect on the explosive substance, with four of the five acting on the explosive within the mine casing and one relying on detecting released explosive molecules; and two (conductivity/resistivity and metal detectors) detect differences in the low-frequency electromagnetic field around the mine.

The four reflection-based technologies detect the presence of a mine or other buried object from differences between the electromagnetic properties of the target and those of the surrounding ground. Their relative strengths and limitations vary with their wavelengths. Managing the trade-off between depth of penetration and resolution is one of the central research concerns in this area: lower frequencies allow better ground penetration but suffer from poorer spatial resolution. Radar's relatively long wavelength (it operates in the microwave part of the electromagnetic spectrum) allows it to penetrate the ground deeply enough to reach buried mines.
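The penetration/resolution trade-off can be made roughly quantitative with a standard first-order model. The Python sketch below uses free-space wavelength as a proxy for achievable resolution and the classical skin-depth formula as a proxy for ground penetration; the soil conductivities are assumed round numbers, and real soils (whose response also depends on permittivity and moisture) will deviate from this simple picture.

```python
import math

MU0 = 4e-7 * math.pi   # permeability of free space (H/m)
C = 3.0e8              # speed of light (m/s)

def wavelength_m(f_hz):
    """Free-space wavelength: a rough proxy for spatial resolution."""
    return C / f_hz

def skin_depth_m(f_hz, sigma_s_per_m):
    """Classical skin depth, 1/sqrt(pi*f*mu0*sigma): a rough proxy for
    how deep the signal reaches before being attenuated."""
    return 1.0 / math.sqrt(math.pi * f_hz * MU0 * sigma_s_per_m)

# Assumed conductivities: ~1e-4 S/m for dry sand, ~1e-1 S/m for wet soil.
for f in (1e8, 1e9, 1e10):
    print(f"{f:.0e} Hz: wavelength {wavelength_m(f):.3f} m, "
          f"depth (dry) {skin_depth_m(f, 1e-4):.2f} m, "
          f"(wet) {skin_depth_m(f, 1e-1):.3f} m")
```

Under these assumptions, raising the frequency shrinks the wavelength (finer resolution) but cuts the penetration depth sharply, and the wet-soil depths collapse toward centimeters, which is consistent with the report's observation that radar cannot penetrate water-saturated soils.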
This penetrating ability, along with the fact that radar can detect plastic mines, has made it the focus of much research and development in the United States and other nations. For example, DOD has incorporated radar into its hand-held system, the Handheld Stand-off Mine Detection System (HSTAMIDS). However, whether a system based on radar will meet countermine mission needs remains in dispute. The largest obstacle is radar's poor spatial resolution, which makes it difficult at best to distinguish between buried mines and other objects of similar size and shape. Another issue is its inability to penetrate soils that are saturated with water. The other technologies have greater resolution but a corresponding loss of depth penetration. Because LIDAR has a shorter wavelength than radar, it has a limited ability to detect buried mines. X-ray backscatter can provide detailed images of shallowly buried mines due to the extremely short wavelength of the x-rays; it operates by detecting the difference in atomic number between the ground and the mine target. However, the applicability of this technology is limited because the x-rays penetrate only a short distance into the ground. In theory, terahertz imaging should have a similar limitation, although a researcher studying the feasibility of creating images of mines in the terahertz part of the spectrum told us that his system might be able to penetrate more deeply by increasing the transmitted power.

Another general approach involves projecting energy into the ground that reacts with the molecules of the explosive, which then emit a signal that the detector receives. Because it reacts with the explosive rather than the container, this approach has the advantage of more specifically targeting land mines and is less prone to the clutter problem that hinders other active electromagnetic approaches. However, technologies that adopt this approach tend to be more complex and expensive. We identified five distinct technologies that utilize this general approach. One of them, quadrupole resonance, is relatively mature in land mine applications, and systems have been built around it. Less is known about the other four (electromagnetic radiography, microwave enhanced infrared, x-ray fluorescence, and gamma ray imaging): how to apply them to detect land mines and what their capabilities are for addressing countermine missions. Therefore, our assessments are less complete for these than for the better-studied approaches.

Quadrupole resonance has been explored for identifying explosives for several years. Much of the basic research was conducted at the Naval Research Laboratory, and quadrupole resonance detectors are also being developed to screen for explosives at airports. In quadrupole resonance, a pulse of long-wavelength energy causes the nitrogen nuclei in the explosive to emit a pulse of energy that is characteristic of the molecule. For example, the nitrogen atoms in TNT emit a unique pulse that can be picked up by the detector. One limitation of quadrupole resonance with respect to countermine missions is that the detector head must be close to the target. The speed at which quadrupole resonance can operate is also in question; current systems are fairly slow. In addition, open research questions include how to overcome interference from other sources of energy and how to configure a quadrupole resonance detector to detect TNT.
Despite these limitations and questions, DOD is developing systems that use this technology. The Marine Corps is developing a hand-held device that uses quadrupole resonance, and the Army is developing a land mine detection vehicle that would use an array of quadrupole resonance detectors across its front to confirm targets presented by sensors that use either radar or a metal detector.

In conversations with individual systems developers, we identified four other examples of this land mine detection approach. The first two technologies are based on scanning the ground with long-wavelength microwaves; this energy excites the explosive molecules, which emit a signal that is detected. The other two send shorter-wavelength energy toward the target. Electromagnetic radiography operates by scanning the ground with long-wavelength microwaves. According to one developer, when the target is struck by this energy, it radiates back in a particular way, with molecules excited at the atomic level responding with spin effects that produce "a spectrographic signature of the target substance." As noted previously, very little is known at present about the limits of this technology with respect to the operational requirements and environmental conditions of countermine applications. Microwave enhanced infrared detection operates by sending long-wavelength microwaves into the ground and then detecting a "unique thermal signature and infrared spectra of chemical explosives." One limitation of this approach is that it cannot be used to detect metallic mines, because the microwave energy cannot penetrate metal. In addition, the speed at which it can operate and the standoff distance it can achieve are both highly uncertain. The third technology illuminates the ground with x-rays, causing a series of changes in the electron configuration of the target atoms that results in the release of an x-ray photon (x-ray fluorescence). Unlike the other technologies in this category, x-ray fluorescence detects molecules of explosive that are emitted from the mine, and the amount of fluorescence depends on the target molecule. A critical issue in dispute at present is whether x-ray fluorescence can work at the distances required to address countermine missions; the short wavelength of the x-rays entails a correspondingly high degree of scattering. Several experts we spoke to expressed reservations about standoff for this technology, although the system developer claims to have surmounted this limitation. The fourth technology is gamma ray imaging. The basis of this technique is an electron accelerator that produces gamma rays that "interact with the chemical elements in explosives to generate a unique signature." Because of the scattering of the short-wavelength energy, x-ray and gamma ray detectors operating on these principles must be in close proximity to the target; according to a developer, the detector must be within one foot of the target. Another obstacle is that the detector would require an extremely large source of energy to create the gamma rays.

We identified two technologies that are based on detecting an electromagnetic field. The first is electromagnetic induction. As discussed in the background section, metal detectors that utilize this approach are the principal means of detecting land mines at present. Metal detectors generate a magnetic field that reacts with the electric and/or magnetic properties of the target.
This reaction generates a second magnetic field, which is received by the detector. The restriction to metallic objects is a limitation, given the increasing development of mines with extremely small amounts of metal: increasing a metal detector's sensitivity enough to find the metal in these mines leads it to detect many other objects in the ground as well. Metal detectors are also limited by the need to be relatively close to the mine target in order to operate effectively. The second technology is conductivity/resistivity, which involves applying current to the ground through a set of electrodes and measuring the voltage developed between other electrodes. The voltage measured at the electrodes is affected by objects in the ground, including land mines. The conductivity technique was originally developed to locate minerals, oil deposits, and groundwater supplies. The need to place the electrodes in or on the ground is a concern for land mine detection applications of this technology.

We identified four proposed technologies that do not actively illuminate the target but instead detect energy emitted or reflected by the mine. Three detect the energy naturally released by objects. They are essentially cameras that operate in a fashion very similar to video cameras, although they view not red, green, and blue frequencies but other parts of the spectrum. Land mine detectors that use passive sensing principles spot either (1) a contrast between the energy emitted or reflected from the mine and that of the background or (2) the contrast between the (disturbed) soil immediately surrounding a buried mine and the top layer of soil (a toy sketch of this contrast test appears at the end of this discussion). They can be designed to pick up this energy difference in different wavelength bands, and passive detectors have been designed or proposed to operate in different parts of the EM spectrum; we identified technologies that operate using infrared, millimeter wave, and microwave principles. These techniques have different strengths and limitations. The trade-offs between scattering and resolution that exist for the active backscatter approaches (radar and LIDAR) also exist for passive EM technologies. For example, the longer wavelengths of microwaves and millimeter waves allow them to penetrate clouds, smoke, dust, dry leaves, and a thin layer of dry soil but provide more limited resolution of targets. These four technologies are capable of greater standoff than the others.

Several nations are developing systems that use IR detection to detect minefields (tactical reconnaissance). Systems are also being developed to gather information in several infrared wavelength bands at the same time ("multi-spectral infrared"); this approach increases the amount of information available to distinguish mine targets from the background, and the Marine Corps is conducting research in this area. One of the constraints on infrared detection systems is that a mine's signature against the background tends to be reduced at certain times of day. To overcome this limitation, researchers funded by DOD's Multidisciplinary University Research Initiative (MURI) recently investigated amplifying the infrared signal by heating the ground with microwave energy. Their early findings suggest that microwave heating enhances the infrared signature of objects buried under smooth surfaces. However, much work remains.
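As the toy sketch promised above, the passive contrast principle reduces to flagging readings that depart from the background. The scene values below are invented radiometric readings, not calibrated sensor data, and a real detector would estimate the background locally and contend with the clutter and time-of-day effects described in the text.

```python
def contrast_hits(scene, background, threshold):
    """Flag pixels whose reading departs from the estimated background
    by more than `threshold`; a stand-in for the mine-vs-background
    (or disturbed-soil) contrast a passive EM detector looks for."""
    return [(i, j)
            for i, row in enumerate(scene)
            for j, value in enumerate(row)
            if abs(value - background) > threshold]

scene = [[300, 301, 299],
         [300, 307, 300],   # the 307 mimics a warmer patch over a mine
         [299, 300, 301]]
print(contrast_hits(scene, background=300, threshold=5))  # [(1, 1)]
```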
Given continued funding, the MURI researchers plan to add increasing complexity to their experimentation by testing with rough surfaces, random shapes, and different mine and soil characteristics. They will also need to conduct additional research to determine whether the rate of heating is consistent with the speed required to meet most countermine missions. The fourth passive electromagnetic approach is based on detecting the energy produced by the circuitry of advanced mines that contain sophisticated fuses. DOD has recently funded work on this approach as part of the MURI initiative. Apart from the limited applicability of this technology, questions remain concerning how feasible it is and how easily a detector operating on these principles might be fooled by a decoy.

We identified four technologies that are not based on electromagnetic principles: acoustic/seismic, neutron activation, trace vapor, and biosensors. Sensors that utilize an acoustic/seismic approach operate by creating an acoustic or seismic wave in the ground that reflects off the mine. The energy can be delivered in a number of different ways, such as a loudspeaker, a seismic source coupled with the ground, or a laser striking the ground over the mine. There are also different ways of receiving the signal from the target: electromagnetically, through a Doppler radar or Doppler laser device, or acoustically, through a microphone. Numerous questions remain about whether an acoustic/seismic approach can meet the operational needs of countermine missions and about the environmental factors that would influence its employment. Although we identified no certain, absolute limitations to an acoustic/seismic approach meeting countermine missions, we did identify significant concerns. Acoustic waves are capable of imaging buried land mines, but clutter is a major concern: interference from rocks, vegetation, and other naturally occurring objects in the environment alters the waves as they travel through the ground. Additional work is needed to assess the limits of an acoustic/seismic approach for detecting land mines. An acoustic system is one of the technologies that the Army is currently exploring for the Ground Stand-off Mine Detection System (GSTAMIDS).

Neutron activation analysis techniques operate on the principle that mine explosives have a much higher concentration of certain elements, like nitrogen and hydrogen, than naturally occurring objects. There are several neutron-based techniques for detecting these explosive properties in bulk form. All systems are composed of at least a neutron source (continuous, or pulsed in bursts) to produce the neutrons directed into the ground, and a detector to characterize the outgoing radiation, usually gamma rays, resulting from the interaction of the neutrons with the soil and the substances it contains (e.g., the explosive). Neutron activation analysis cannot be used as a standoff detector; our review indicated that it must operate directly over the mine target. The limited speed of this technology is another restriction for most missions. In addition, unanswered questions concern its depth of penetration and whether it can be used to detect smaller anti-personnel mines. Because of these limitations and questions, neutron activation analysis is currently envisioned as a confirmation detector alongside faster sensors on systems that are remotely piloted.
For example, as described above, Canada's military has developed a vehicle that incorporates thermal neutron activation as a confirmation sensor; the vehicle needs to stop only when one of the scanning sensors indicates a possible mine target.

The other two technologies are trace vapor detection and biosensors. Trace vapor detectors sense molecules of the explosive that emanate from the buried mine and then analyze them; there are several different approaches for capturing and analyzing these molecules. In 1997, DARPA initiated a research program aimed at detecting land mines via their chemical signatures, referred to as the "electronic dog's nose" program. The program was established because DARPA believed that the technologies DOD was developing (metal detectors, radar, and infrared) were limited in that they were not seeking features unique to land mines and were susceptible to high false alarm rates from natural and man-made clutter. Through this program, DARPA hoped to change the overall philosophy of mine detection in DOD by detecting the explosive, a unique feature of land mines. This work has since been transitioned to the Army. However, the role of trace vapor detectors in most countermine missions is likely to remain limited because of the limited standoff that can be achieved. The central feature of the biosensor approach is a living organism; current examples are dogs, bees, and microbes that detect explosives. Many research questions remain with these approaches.

Andrews, Anne, et al. Research on Ground-Penetrating Radar for Detection of Mines and Unexploded Ordnance: Current Status and Research Strategy. Institute for Defense Analyses, 1999.
Bruschini, Claudio, and Bertrand Gros. A Survey of Current Sensor Technology Research for the Detection of Landmines. LAMI-DeTeC, Lausanne, Switzerland, 1997.
Bruschini, Claudio, and Bertrand Gros. "A Survey of Research on Sensor Technology for Landmine Detection." The Journal of Humanitarian Demining, Issue 2.1 (Feb. 1998).
Bruschini, Claudio, Karin De Bruyn, Hichem Sahli, and Jan Cornelis. Study on the State of the Art in the EU Related to Humanitarian Demining Technology, Products and Practice. École Polytechnique Fédérale de Lausanne and Vrije Universiteit Brussel, Brussels, Belgium, 1999.
Carruthers, Al. Scoping Study for Humanitarian Demining Technologies. Canadian Centre for Mine Action Technologies, Medicine Hat, Canada, 1999.
Craib, J.A. Survey of Mine Clearance Technology. Conducted for the United Nations University and the United Nations Department of Humanitarian Affairs, 1994.
Evaluation of Unexploded Ordnance Detection and Interrogation Technologies. Prepared for the Panama Canal Treaty Implementation Plan Agency. U.S. Army Environmental Center and Naval Explosive Ordnance Disposal Technology Division, 1997.
Garwin, Richard L., and Jo L. Husbands. Progress in Humanitarian Demining: Technical and Policy Challenges. Prepared for the Xth Annual Amaldi Conference, Paris, France, 1997.
Groot, J.S., and Y.H.L. Janssen. Remote Land Mine(Field) Detection, An Overview of Techniques. TNO Defence Research, The Hague, The Netherlands, 1994.
Gros, Bertrand, and Claudio Bruschini. Sensor Technologies for the Detection of Antipersonnel Mines, A Survey of Current Research and System Developments. EPFL-LAMI DeTeC, Lausanne, Switzerland, 1996.
Havlík, Stefan, and Peter Licko. "Humanitarian Demining: The Challenge for Robotic Research." The Journal of Humanitarian Demining, Issue 2.2 (May 1998).
Healey, A.J., and W.T. Webber. Sensors for the Detection of Land-based Munitions. Naval Postgraduate School, Monterey, CA, 1995.
Heberlein, David C. Progress in Metal-Detection Techniques for Detecting and Identifying Landmines and Unexploded Ordnance. Institute for Defense Analyses, 2000.
Horowitz, Paul, et al. New Technological Approaches to Humanitarian Demining. The MITRE Corporation, 1996.
Hussein, Esam M.A., and Edward J. Waller. Landmine Detection: The Problem and the Challenge. Laboratory for Threat Material Detection, Department of Mechanical Engineering, University of New Brunswick, Fredericton, NB, Canada, 1999.
Janzon, Bo. International Workshop of Technical Experts on Ordnance Recovery and Disposal in the Framework of International Demining Operations (report). National Defence Research Establishment, Stockholm, Sweden, 1994.
Johnson, B., et al. A Research and Development Strategy for Unexploded Ordnance Sensing. Massachusetts Institute of Technology, 1996.
Kerner, David, et al. Anti-Personnel Landmine (APL) Detection Technology Survey and Assessment. Prepared for the Defense Threat Reduction Agency. DynMeridian, Alexandria, VA, 1999.
McFee, John, et al. CRAD Countermine R&D Study: Final Report. Defense Research Establishment Suffield, 1994.
Mächler, Ph. Detection Technologies for Anti-Personnel Mines. LAMI-DeTeC, Lausanne, Switzerland, 1995.
Scroggins, Debra M. Technology Assessment for the Detection of Buried Metallic and Non-metallic Cased Ordnance. Naval Explosive Ordnance Disposal Technology Center, Indian Head, MD, 1993.
Sensor Technology Assessment for Ordnance and Explosive Waste Detection and Location. Prepared for the U.S. Army Corps of Engineers and Army Yuma Proving Ground. Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA, 1995.
Tsipis, Kosta. Report on the Landmine Brainstorming Workshop of August 25-30, 1996. Program in Science and Technology for International Security, Massachusetts Institute of Technology, Cambridge, MA, 1996.

| Recent U.S. military operations have shown that land mines continue to pose a significant threat to U.S. forces. U.S. land mine detection capabilities are limited and largely unchanged since the Second World War. Improving the Department of Defense's (DOD) land mine detection capability is a technological challenge. This report reviews DOD's strategy for identifying the most promising land mine detection technologies. GAO found that DOD's ability to substantially improve its land mine detection capabilities may be limited because DOD lacks an effective strategy for identifying and evaluating the most promising technologies. Although DOD maintains an extensive program of outreach to external researchers and other nations' military research organizations, it does not use an effective methodology to evaluate all technological options to guide its investment decisions. DOD is investing in several technologies to overcome the mine detection problem, but it is not clear that DOD has chosen the most promising technologies. Because DOD has not systematically assessed potential land mine detection technologies against mission needs, GAO did its own assessment. GAO found that the technologies DOD is exploring are limited in their ability to meet mission needs or are greatly uncertain in their potential. GAO identified other technologies that might address DOD's needs, but they are in immature states of development and it is unclear whether they are more promising than the approaches that DOD is exploring.
I have the urge to declare my sanity and justify my actions, but I assume I'll never be able to convince anyone that this was the right decision. Maybe it's true that anyone who does this is insane by definition, but I can at least explain my reasoning. I considered not writing any of this because of how personal it is, but I like tying up loose ends and don't want people to wonder why I did this. Since I've never spoken to anyone about what happened to me, people would likely draw the wrong conclusions.
My first memories as a child are of being raped, repeatedly. This has affected every aspect of my life. This darkness, which is the only way I can describe it, has followed me like a fog, but at times intensified and overwhelmed me, usually triggered by a distinct situation. In kindergarten I couldn't use the bathroom and would stand petrified whenever I needed to, which started a trend of awkward and unexplained social behavior. The damage that was done to my body still prevents me from using the bathroom normally, but now it's less of a physical impediment than a daily reminder of what was done to me.
This darkness followed me as I grew up. I remember spending hours playing with legos, having my world consist of me and a box of cold, plastic blocks. Just waiting for everything to end. It's the same thing I do now, but instead of legos it's surfing the web or reading or listening to a baseball game. Most of my life has been spent feeling dead inside, waiting for my body to catch up.
At times growing up I would feel inconsolable rage, but I never connected this to what happened until puberty. I was able to keep the darkness at bay for a few hours at a time by doing things that required intense concentration, but it would always come back. Programming appealed to me for this reason. I was never particularly fond of computers or mathematically inclined, but the temporary peace it would provide was like a drug. But the darkness always returned and built up something like a tolerance, because programming has become less and less of a refuge.
The darkness is with me nearly every time I wake up. I feel like a grime is covering me. I feel like I'm trapped in a contaminated body that no amount of washing will clean. Whenever I think about what happened I feel manic and itchy and can't concentrate on anything else. It manifests itself in hours of eating or staying up for days at a time or sleeping for sixteen hours straight or week-long programming binges or constantly going to the gym. I'm exhausted from feeling like this every hour of every day.
Three to four nights a week I have nightmares about what happened. It makes me avoid sleep and constantly tired, because sleeping with what feels like hours of nightmares is not restful. I wake up sweaty and furious. I'm reminded every morning of what was done to me and the control it has over my life.
I've never been able to stop thinking about what happened to me and this hampered my social interactions. I would be angry and lost in thought and then be interrupted by someone saying "Hi" or making small talk, unable to understand why I seemed cold and distant. I walked around, viewing the outside world from a distant portal behind my eyes, unable to perform normal human niceties. I wondered what it would be like to talk to other people without what happened constantly on my mind, and I wondered if other people had similar experiences that they were better able to mask.
Alcohol was also something that let me escape the darkness. It would always find me later, though, and it was always angry that I managed to escape and it made me pay. Many of the irresponsible things I did were the result of the darkness. Obviously I'm responsible for every decision and action, including this one, but there are reasons why things happen the way they do.
Alcohol and other drugs provided a way to ignore the realities of my situation. It was easy to spend the night drinking and forget that I had no future to look forward to. I never liked what alcohol did to me, but it was better than facing my existence honestly. I haven't touched alcohol or any other drug in over seven months (and no drugs or alcohol will be involved when I do this) and this has forced me to evaluate my life in an honest and clear way. There's no future here. The darkness will always be with me.
I used to think if I solved some problem or achieved some goal, maybe he would leave. It was comforting to identify tangible issues as the source of my problems instead of something that I'll never be able to change. I thought that if I got into a good college, or a good grad school, or lost weight, or went to the gym nearly every day for a year, or created programs that millions of people used, or spent a summer in California or New York, or published papers that I was proud of, then maybe I would feel some peace and not be constantly haunted and unhappy. But nothing I did made a dent in how depressed I was on a daily basis and nothing was in any way fulfilling. I'm not sure why I ever thought that would change anything.
I didn't realize how deep a hold he had on me and my life until my first relationship. I stupidly assumed that no matter how the darkness affected me personally, my romantic relationships would somehow be separated and protected. Growing up I viewed my future relationships as a possible escape from this thing that haunts me every day, but I began to realize how entangled it was with every aspect of my life and how it is never going to release me. Instead of being an escape, relationships and romantic contact with other people only intensified everything about him that I couldn't stand. I will never be able to have a relationship in which he is not the focus, affecting every aspect of my romantic interactions.
Relationships always started out fine and I'd be able to ignore him for a few weeks. But as we got closer emotionally the darkness would return and every night it'd be me, her and the darkness in a black and gruesome threesome. He would surround me and penetrate me and the more we did the more intense it became. It made me hate being touched, because as long as we were separated I could view her like an outsider viewing something good and kind and untainted. Once we touched, the darkness would envelop her too and take her over and the evil inside me would surround her. I always felt like I was infecting anyone I was with.
Relationships didn't work. No one I dated was the right match, and I thought that maybe if I found the right person it would overwhelm him. Part of me knew that finding the right person wouldn't help, so I became interested in girls who obviously had no interest in me. For a while I thought I was gay. I convinced myself that it wasn't the darkness at all, but rather my orientation, because this would give me control over why things didn't feel "right". The fact that the darkness affected sexual matters most intensely made this idea make some sense and I convinced myself of this for a number of years, starting in college after my first relationship ended. I told people I was gay (at Trinity, not at Princeton), even though I wasn't attracted to men and kept finding myself interested in girls. Because if being gay wasn't the answer, then what was? People thought I was avoiding my orientation, but I was actually avoiding the truth, which is that while I'm straight, I will never be content with anyone. I know now that the darkness will never leave.
Last spring I met someone who was unlike anyone else I'd ever met. Someone who showed me just how well two people could get along and how much I could care about another human being. Someone I know I could be with and love for the rest of my life, if I weren't so fucked up. Amazingly, she liked me. She liked the shell of the man the darkness had left behind. But it didn't matter because I couldn't be alone with her. It was never just the two of us, it was always the three of us: her, me and the darkness. The closer we got, the more intensely I'd feel the darkness, like some evil mirror of my emotions. All the closeness we had and I loved was complemented by agony that I couldn't stand, from him. I realized that I would never be able to give her, or anyone, all of me or only me. She could never have me without the darkness and evil inside me. I could never have just her, without the darkness being a part of all of our interactions. I will never be able to be at peace or content or in a healthy relationship. I realized the futility of the romantic part of my life. If I had never met her, I would have realized this as soon as I met someone else who I meshed similarly well with. It's likely that things wouldn't have worked out with her and we would have broken up (with our relationship ending, like the majority of relationships do) even if I didn't have this problem, since we only dated for a short time. But I will face exactly the same problems with the darkness with anyone else. Despite my hopes, love and compatibility is not enough. Nothing is enough. There's no way I can fix this or even push the darkness down far enough to make a relationship or any type of intimacy feasible.
So I watched as things fell apart between us. I had put an explicit time limit on our relationship, since I knew it couldn't last because of the darkness and didn't want to hold her back, and this caused a variety of problems. She was put in an unnatural situation that she never should have been a part of. It must have been very hard for her, not knowing what was actually going on with me, but this is not something I've ever been able to talk about with anyone. Losing her was very hard for me as well. Not because of her (I got over our relationship relatively quickly), but because of the realization that I would never have another relationship and because it signified the last true, exclusive personal connection I could ever have. This wasn't apparent to other people, because I could never talk about the real reasons for my sadness. I was very sad in the summer and fall, but it was not because of her, it was because I will never escape the darkness with anyone. She was so loving and kind to me and gave me everything I could have asked for under the circumstances. I'll never forget how much happiness she brought me in those brief moments when I could ignore the darkness. I had originally planned to kill myself last winter but never got around to it. (Parts of this letter were written over a year ago, other parts days before doing this.) It was wrong of me to involve myself in her life if this were a possibility and I should have just left her alone, even though we only dated for a few months and things ended a long time ago. She's just one more person in a long list of people I've hurt.
I could spend pages talking about the other relationships I've had that were ruined because of my problems and my confusion related to the darkness. I've hurt so many great people because of who I am and my inability to experience what needs to be experienced. All I can say is that I tried to be honest with people about what I thought was true.
I've spent my life hurting people. Today will be the last time.
I've told different people a lot of things, but I've never told anyone about what happened to me, ever, for obvious reasons. It took me a while to realize that no matter how close you are to someone or how much they claim to love you, people simply cannot keep secrets. I learned this a few years ago when I thought I was gay and told people. The more harmful the secret, the juicier the gossip and the more likely you are to be betrayed. People don't care about their word or what they've promised, they just do whatever the fuck they want and justify it later. It feels incredibly lonely to realize you can never share something with someone and have it be between just the two of you. I don't blame anyone in particular, I guess it's just how people are. Even if I felt like this is something I could have shared, I have no interest in being part of a friendship or relationship where the other person views me as the damaged and contaminated person that I am. So even if I were able to trust someone, I probably would not have told them about what happened to me. At this point I simply don't care who knows.
I feel an evil inside me. An evil that makes me want to end life. I need to stop this. I need to make sure I don't kill someone, which is not something that can be easily undone. I don't know if this is related to what happened to me or something different. I recognize the irony of killing myself to prevent myself from killing someone else, but this decision should indicate what I'm capable of.
So I've realized I will never escape the darkness or misery associated with it and I have a responsibility to stop myself from physically harming others.
I'm just a broken, miserable shell of a human being. Being molested has defined me as a person and shaped me as a human being and it has made me the monster I am and there's nothing I can do to escape it. I don't know any other existence. I don't know what life feels like where I'm apart from any of this. I actively despise the person I am. I just feel fundamentally broken, almost non-human. I feel like an animal that woke up one day in a human body, trying to make sense of a foreign world, living among creatures it doesn't understand and can't connect with.
I have accepted that the darkness will never allow me to be in a relationship. I will never go to sleep with someone in my arms, feeling the comfort of their hands around me. I will never know what uncontaminated intimacy is like. I will never have an exclusive bond with someone, someone who can be the recipient of all the love I have to give. I will never have children, and I wanted to be a father so badly. I think I would have made a good dad. And even if I had fought through the darkness and married and had children all while being unable to feel intimacy, I could have never done that if suicide were a possibility. I did try to minimize pain, although I know that this decision will hurt many of you. If this hurts you, I hope that you can at least forget about me quickly.
There's no point in identifying who molested me, so I'm just going to leave it at that. I doubt the word of a dead guy with no evidence about something that happened over twenty years ago would have much sway.
You may wonder why I didn't just talk to a professional about this. I've seen a number of doctors since I was a teenager to talk about other issues and I'm positive that another doctor would not have helped. I was never given one piece of actionable advice, ever. More than a few spent a large part of the session reading their notes to remember who I was. And I have no interest in talking about being raped as a child, both because I know it wouldn't help and because I have no confidence it would remain secret. I know the legal and practical limits of doctor/patient confidentiality, growing up in a house where we'd hear stories about the various mental illnesses of famous people, stories that were passed down through generations. All it takes is one doctor who thinks my story is interesting enough to share or a doctor who thinks it's her right or responsibility to contact the authorities and have me identify the molester (justifying her decision by telling herself that someone else might be in danger). All it takes is a single doctor who violates my trust, just like the "friends" who I told I was gay did, and everything would be made public and I'd be forced to live in a world where people would know how fucked up I am. And yes, I realize this indicates that I have severe trust issues, but they're based on a large number of experiences with people who have shown a profound disrespect for their word and the privacy of others.
People say suicide is selfish. I think it's selfish to ask people to continue living painful and miserable lives, just so you possibly won't feel sad for a week or two. Suicide may be a permanent solution to a temporary problem, but it's also a permanent solution to a ~23-year-old problem that grows more intense and overwhelming every day.
Some people are just dealt bad hands in this life. I know many people have it worse than I do, and maybe I'm just not a strong person, but I really did try to deal with this. I've tried to deal with this every day for the last 23 years and I just can't fucking take it anymore.
I often wonder what life must be like for other people. People who can feel the love from others and give it back unadulterated, people who can experience sex as an intimate and joyous experience, people who can experience the colors and happenings of this world without constant misery. I wonder who I'd be if things had been different or if I were a stronger person. It sounds pretty great.
I'm prepared for death. I'm prepared for the pain and I am ready to no longer exist. Thanks to the strictness of New Jersey gun laws this will probably be much more painful than it needs to be, but what can you do. My only fear at this point is messing something up and surviving.
---
I'd also like to address my family, if you can call them that. I despise everything they stand for and I truly hate them, in a non-emotional, dispassionate and what I believe is a healthy way. The world will be a better place when they're dead--one with less hatred and intolerance.
If you're unfamiliar with the situation, my parents are fundamentalist Christians who kicked me out of their house and cut me off financially when I was 19 because I refused to attend seven hours of church a week.
They live in a black and white reality they've constructed for themselves. They partition the world into good and evil and survive by hating everything they fear or misunderstand and calling it love. They don't understand that good and decent people exist all around us, "saved" or not, and that evil and cruel people occupy a large percentage of their church. They take advantage of people looking for hope by teaching them to practice the same hatred they practice.
A random example:
"I am personally convinced that if a Muslim truly believes and obeys the Koran, he will be a terrorist." - George Zeller, August 24, 2010.
If you choose to follow a religion where, for example, devout Catholics who are trying to be good people are all going to Hell but child molesters go to Heaven (as long as they were "saved" at some point), that's your choice, but it's fucked up. Maybe a God who operates by those rules does exist. If so, fuck Him.
Their church was always more important than the members of their family and they happily sacrificed whatever necessary in order to satisfy their contrived beliefs about who they should be.
I grew up in a house where love was proxied through a God I could never believe in. A house where the love of music with any sort of a beat was literally beaten out of me. A house full of hatred and intolerance, run by two people who were experts at appearing kind and warm when others were around. Parents who tell an eight year old that his grandmother is going to Hell because she's Catholic. Parents who claim not to be racist but then talk about the horrors of miscegenation. I could list hundreds of other examples, but it's tiring.
Since being kicked out, I've interacted with them in relatively normal ways. I talk to them on the phone like nothing happened. I'm not sure why. Maybe because I like pretending I have a family. Maybe I like having people I can talk to about what's been going on in my life. Whatever the reason, it's not real and it feels like a sham. I should have never allowed this reconnection to happen.
I wrote the above a while ago, and I do feel like that much of the time. At other times, though, I feel less hateful. I know my parents honestly believe the crap they believe in. I know that my mom, at least, loved me very much and tried her best. One reason I put this off for so long is because I know how much pain it will cause her. She has been sad since she found out I wasn't "saved", since she believes I'm going to Hell, which is not a sadness for which I am responsible. That was never going to change, and presumably she believes the state of my physical body is much less important than the state of my soul. Still, I cannot intellectually justify this decision, knowing how much it will hurt her. Maybe my ability to take my own life, knowing how much pain it will cause, shows that I am a monster who doesn't deserve to live. All I know is that I can't deal with this pain any longer and I am truly sorry I couldn't wait until my family and everyone I knew died so this could be done without hurting anyone. For years I've wished that I'd be hit by a bus or die while saving a baby from drowning so my death might be more acceptable, but I was never so lucky.
---
To those of you who have shown me love, thank you for putting up with all my shittiness and moodiness and arbitrariness. I was never the person I wanted to be. Maybe without the darkness I would have been a better person, maybe not. I did try to be a good person, but I realize I never got very far.
I'm sorry for the pain this causes. I really do wish I had another option. I hope this letter explains why I needed to do this. If you can't understand this decision, I hope you can at least forgive me.
Bill Zeller
---
Please save this letter and repost it if it gets deleted. I don't want people to wonder why I did this. I disseminated it more widely than I might have otherwise because I'm worried that my family might try to restrict access to it. I don't mind if this letter is made public. In fact, I'd prefer it be made public to people being unable to read it and drawing their own conclusions.
Feel free to republish this letter, but only if it is reproduced in its entirety.
|||||
Bill Zeller, a fifth-year graduate student in the computer science department, died Wednesday night at age 27 as a result of injuries from a suicide attempt.
Zeller was found in his University apartment by Public Safety officers at about 6 a.m. Sunday, shortly after he attempted to take his own life. Brain damage due to oxygen deprivation left Zeller in a coma at University Medical Center at Princeton until the evening of Jan. 5, when he was removed from life support.
He left behind a 4,000-word suicide note, which began: “I have the urge to declare my sanity and justify my actions, but I assume I’ll never be able to convince anyone that this was the right decision.” In the note, Zeller described how repeated sexual abuse as a young child haunted him for the rest of his life, causing regular nightmares and limiting his ability to connect with others.
“This has affected every aspect of my life,” he wrote. “This darkness, which is the only way I can describe it, has followed me like a fog, but at times intensified and overwhelmed me.”
Zeller published the note on his personal website and e-mailed it to friends Sunday morning. Minutes later, first responders discovered him in his apartment.
According to his note, Zeller – who was from Middletown, Conn. – never discussed the incidents of his childhood with anyone, including professionals, because he felt unable to fully trust others. He had been seriously contemplating suicide for at least one year and began drafting the note last winter.
Friends and colleagues said they were shocked by the note’s contents.
“Even to us, his closest friends here, we didn’t know about 80 percent of what he wrote in the note or how he was feeling,” said Harlan Yu GS, one of Zeller’s roommates for the past two years. “I never had any hints living with him for a year and a half that this was what he was experiencing on a daily basis. That’s why it was so shocking that he could have hid it so well ... Reading the note it was in his voice, but the things that he was saying is such a far cry from everything that we knew about him.”
In contrast to the troubled person portrayed in the note, those closest to him remembered Zeller as a brilliant programmer, talented chef, devoted Boston Red Sox fan and someone who put his friends first.
“One of the hardest parts for me to read in all that was the fact that he didn’t seem to see himself as being a good person. He just went out of his way so many times for me that there’s no way you could have faked what he was doing or who he was,” said Joe Calandrino GS, a close friend who worked with Zeller on a number of computer science projects. “He showed a level of caring that I don’t think I see out of most people. And I don’t know how he could have even achieved that.”
While at Princeton, Zeller conducted computer security research at the Center for Information Technology Policy under his adviser Ed Felten, a computer science and Wilson School professor and director of CITP.
During that time, Zeller completed several high-profile projects. He and Felten published research exposing serious security vulnerabilities of websites such as The New York Times, YouTube and ING Direct. Zeller also co-authored an influential paper arguing for increased government transparency online.
When asked to discuss Zeller’s work, however, colleagues focused on the dozens of smaller projects that he completed in the past few years, which ranged from the practical — such as Graph Your Inbox, a tool to analyze and visualize Gmail activity over time — to IsItChristmas.com, which reads “no” 364 days of the year.
“I think he was just one of the most creative people that I knew,” Yu said. “A lot of the software he did certainly touched millions of people. He was always coming up with ingenious ideas that would often be funny and practical and also useful to those around him.”
“He would come up with an idea and he would dedicate his next week just because he was so motivated and excited about building something that lots of people could use, that people would find useful,” he added.
Before coming to Princeton, Zeller had already established himself as a young star in computer programming.
As a sophomore at Trinity College, where he graduated with honors in computer science in 2006, Zeller created myTunes, a free program that allows music purchased from iTunes to be downloaded to other computers. It was downloaded more than 3 million times.
Other early work included the open-source blogging platform Zempt, which has since been integrated into the widely used Movable Type blog software.
“Bill’s work really grew out of his basic approach to life and to his interactions with his friends and colleagues, which was to look for concrete things he could do that could help people,” said Felten, who is serving in a yearlong post in Washington as the Federal Trade Commission’s chief technologist and returned to campus after the incident. On Thursday, Felten published a post in tribute to Zeller on the CITP blog, Freedom to Tinker.
Felten also emphasized Zeller’s commitment to mentoring undergraduates.
“I might not be in computer science but for him. He definitely had a major impact on my life, and I know that he’s had a major impact on a lot of others,” said Jennifer King ’11, who became a close friend of Zeller’s after he advised her work at a campus summer research program. “He’s not someone that I will ever forget because he was so instrumental in directing my life here. He’s not going to disappear into oblivion, which I think is one of the most important signs of a great life.”
According to friends, once Zeller set a goal, he would not rest until he was finished. “Once he decided he wanted to do something, he was almost obsessive with his desire to complete that and see it through,” said Joal Mendonsa, Zeller’s sophomore-year roommate at Trinity. “He basically wrote [myTunes] in a month without really sleeping. He would decide to work out more and would work out every single day for the next seven months.”
In his note, Zeller wrote that intense computer coding allowed him to escape his troubled thoughts for brief periods.
“As a computer scientist, he was an implementer; he was a doer,” King said. “He had this unbelievable creativity that allowed him to come up with crazy ideas, but then he’d actually go and do the crazy ideas, which is something that a lot of people don’t necessarily [do]. Those two qualities aren’t necessarily found in the same person.”
He was also heavily involved in the Graduate Student Government and chaired its facilities committee. “GSG is just one place among many on campus where Bill had many friends and will be missed,” said Kevin Collins, GSG president.
Jeff Dwoskin GS ’10, who co-chaired the facilities committee with Zeller last year, said Zeller’s many contributions included creating a program that tracked University shuttles’ locations and noted whether they were on schedule, a project he completed in a day.
“That was kind of his style, just to do something and make it work in a timeframe that was unbelievable to anyone else. He always impressed us with his ideas and abilities, no matter what the task,” Dwoskin said.
Zeller set himself apart from fellow graduate students in the number of people he reached with his work. “Grad school is the kind of place where you do work that only a few people see or you develop an idea so you can write about it and get it published, but he went the extra step to get things to the public that people used, real tools that had many real users. That’s something that a lot of graduate students can’t say,” said Ari Feldman GS, who worked with Zeller at CITP.
Posts about Zeller’s death on the prominent technology blog Gizmodo and the online community MetaFilter have drawn hundreds of comments, including testimony from those who use his programs.
Despite the positive impact Zeller had on his friends and those who used his programs, he wrote in his note that he chose to end his life to stop hurting those around him, as well as to end 23 years of pain caused by childhood sexual abuse.
“Maybe there’s nothing that could have been done,” said Joseph Hall, a postdoctoral researcher at CITP. “But I like to think in some parallel universe there’s a Bill Zeller out there who found a way to begin to heal himself. It’s a great loss for us.”
University spokeswoman Emily Aronson said that the loss of a community member “reminds us of the importance of being supportive of friends in crisis and making sure that the members of the community are aware of the resources available to them if they find they are in distress.”
Aronson said the University is offering counseling services to those affected by Zeller’s death. There are also many resources available to students who may find themselves in a crisis situation, she added.
Students can reach Counseling and Psychological Services at any time by calling 609-258-3139. They can also contact Public Safety or reach out to other trusted individuals such as residential college advisers, deans or faculty advisers. The University’s Sexual Harassment/Assault Advising, Resources, and Education office can be reached at 609-258-3310.
A memorial service open to members of the University community will be held at 2 p.m. on Jan. 15 at Prospect House. Information about a memorial fund set up in his honor was posted on the University website.
Friends are sharing memories of Zeller at 1000memories.com/billzeller.
Staff writer Henry Rome contributed reporting.
Editor’s Note: Zeller requested that his note be republished in full, rather than excerpted. It can be found on his personal website at documents.from.bz/note.txt. | Princeton is mourning the death of 27-year-old Bill Zeller, a renowned computer programmer and grad student who killed himself after leaving a 4,000-word note, reports the Daily Princetonian. “I have the urge to declare my sanity and justify my actions," it begins, "but I assume I’ll never be able to convince anyone that this was the right decision.” He wrote of the repeated sexual abuse he suffered as a child and how that "darkness" had "followed me like a fog." The note, which he wanted made public, can be read here. Zeller wrote that he never spoke of his abuse and felt that he couldn't connect with others or be a good person, though the newspaper rounds up friend after friend who declares the opposite. An example of his sense of humor: He created this IsItChristmas site, which reads "No" 364 days a year. He also created this app for Gmail users. “Maybe there’s nothing that could have been done,” says a postdoctoral researcher who knew him. "But I like to think in some parallel universe there’s a Bill Zeller out there who found a way to begin to heal himself. It’s a great loss for us.” Click for more. |
On Aug. 1, 2005, Lt. Gen. Keith Alexander reported for duty as the 16th director of the National Security Agency, the United States' largest intelligence organization. He seemed perfect for the job. Alexander was a decorated Army intelligence officer and a West Point graduate with master's degrees in systems technology and physics. He had run intelligence operations in combat and had held successive senior-level positions, most recently as the director of an Army intelligence organization and then as the service's overall chief of intelligence. He was both a soldier and a spy, and he had the heart of a tech geek. Many of his peers thought Alexander would make a perfect NSA director. But one prominent person thought otherwise: the prior occupant of that office.
Air Force Gen. Michael Hayden had been running the NSA since 1999, through the 9/11 terrorist attacks and into a new era that found the global eavesdropping agency increasingly focused on Americans' communications inside the United States. At times, Hayden had found himself swimming in the murkiest depths of the law, overseeing programs that other senior officials in government thought violated the Constitution. Now Hayden of all people was worried that Alexander didn't understand the legal sensitivities of that new mission.
"Alexander tended to be a bit of a cowboy: 'Let's not worry about the law. Let's just figure out how to get the job done,'" says a former intelligence official who has worked with both men. "That caused General Hayden some heartburn."
The heartburn first flared up not long after the 2001 terrorist attacks. Alexander was the general in charge of the Army's Intelligence and Security Command (INSCOM) at Fort Belvoir, Virginia. He began insisting that the NSA give him raw, unanalyzed data about suspected terrorists from the agency's massive digital cache, according to three former intelligence officials. Alexander had been building advanced data-mining software and analytic tools, and now he wanted to run them against the NSA's intelligence caches to try to find terrorists who were in the United States or planning attacks on the homeland.
By law, the NSA had to scrub intercepted communications of most references to U.S. citizens before those communications could be shared with other agencies. But Alexander wanted the NSA "to bend the pipe towards him," says one of the former officials, so that he could siphon off metadata, the digital records of phone calls and email traffic that can be used to map out a terrorist organization based on its members' communications patterns.
"Keith wanted his hands on the raw data. And he bridled at the fact that NSA didn't want to release the information until it was properly reviewed and in a report," says a former national security official. "He felt that from a tactical point of view, that was often too late to be useful."
Hayden thought Alexander was out of bounds. INSCOM was supposed to provide battlefield intelligence for troops and special operations forces overseas, not use raw intelligence to find terrorists within U.S. borders. But Alexander had a more expansive view of what military intelligence agencies could do under the law.
"He said at one point that a lot of things aren't clearly legal, but that doesn't make them illegal," says a former military intelligence officer who served under Alexander at INSCOM.
In November 2001, the general in charge of all Army intelligence had informed his personnel, including Alexander, that the military had broad authority to collect and share information about Americans, so long as they were "reasonably believed to be engaged" in terrorist activities, the general wrote in a widely distributed memo.
The general didn't say how exactly to make this determination, but it was all the justification Alexander needed. "Hayden's attitude was 'Yes, we have the technological capability, but should we use it?' Keith's was 'We have the capability, so let's use it,'" says the former intelligence official who worked with both men.
Hayden denied Alexander's request for NSA data. And there was some irony in that decision. At the same time, Hayden was overseeing a highly classified program to monitor Americans' phone records and Internet communications without permission from a court. At least one component of that secret domestic spying program would later prompt senior Justice Department officials to threaten resignation because they thought it was illegal.
But that was a presidentially authorized program run by a top-tier national intelligence agency. Alexander was a midlevel general who seemed to want his own domestic spying operation. Hayden was so troubled that he reported Alexander to his commanding general, a former colleague says. "He didn't use that atomic word -- 'insubordination' -- but he danced around it."
The showdown over bending the NSA's pipes was emblematic of Alexander's approach to intelligence, one he has honed over the course of a 39-year military career and deploys today as the director of the country's most powerful spy agency.
Alexander wants as much data as he can get. And he wants to hang on to it for as long as he can. To prevent the next terrorist attack, he thinks he needs to be able to see entire networks of communications and also go "back in time," as he has said publicly, to study how terrorists and their networks evolve. To find the needle in the haystack, he needs the entire haystack.
"Alexander's strategy is the same as Google's: I need to get all of the data," says a former administration official who worked with the general. "If he becomes the repository for all that data, he thinks the resources and authorities will follow."
That strategy has worked well for Alexander. He has served longer than any director in the NSA's history, and today he stands atop a U.S. surveillance empire in which signals intelligence, the agency's specialty, is the coin of the realm. In 2010, he became the first commander of the newly created U.S. Cyber Command, making him responsible for defending military computer networks against spies, hackers, and foreign armed forces -- and for fielding a new generation of cyberwarriors trained to penetrate adversaries' networks. Fueled by a series of relentless and increasingly revealing leaks from former NSA contractor Edward Snowden, the full scope of Alexander's master plan is coming to light.
Today, the agency is routinely scooping up and storing Americans' phone records. It is screening their emails and text messages, even though the spy agency can't always tell the difference between an innocent American and a foreign terrorist. The NSA uses corporate proxies to monitor up to 75 percent of Internet traffic inside the United States. And it has spent billions of dollars on a secret campaign to foil encryption technologies that individuals, corporations, and governments around the world had long thought protected the privacy of their communications from U.S. intelligence agencies.
The NSA was already a data behemoth when Alexander took over. But under his watch, the breadth, scale, and ambition of its mission have expanded beyond anything ever contemplated by his predecessors. In 2007, the NSA began collecting information from Internet and technology companies under the so-called PRISM program. In essence, it was a pipes-bending operation. The NSA gets access to the companies' raw data -- including e-mails, video chats, and messages sent through social media -- and analysts then mine it for clues about terrorists and other foreign intelligence subjects. Similar to how Alexander wanted the NSA to feed him with intelligence at INSCOM, now some of the world's biggest technology companies -- including Google, Microsoft, Facebook, and Apple -- are feeding the NSA. But unlike Hayden, the companies cannot refuse Alexander's advances. The PRISM program operates under a legal regime, put in place a few years after Alexander arrived at the NSA, that allows the agency to demand broad categories of information from technology companies.
Never in history has one agency of the U.S. government had the capacity, as well as the legal authority, to collect and store so much electronic information. Leaked NSA documents show the agency sucking up data from approximately 150 collection sites on six continents. The agency estimates that 1.6 percent of all data on the Internet flows through its systems on a given day -- an amount of information about 50 percent larger than what Google processes in the same period.
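To make those proportions concrete, here is a minimal back-of-the-envelope sketch in Python. It uses only the two figures reported in the paragraph above (1.6 percent of daily Internet data, a volume about 50 percent larger than Google's) and derives the share those numbers imply for Google; no outside data is involved.

    # Back-of-the-envelope check using only the article's two figures.
    nsa_share = 0.016        # NSA systems touch ~1.6% of daily Internet data
    nsa_vs_google = 1.5      # that volume is ~50% larger than Google's

    google_share = nsa_share / nsa_vs_google
    print(f"Implied Google share of daily Internet data: {google_share:.2%}")
    # -> roughly 1.07%, consistent with the comparison drawn above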
When Alexander arrived, the NSA was secretly investing in experimental databases to store these oceans of electronic signals and give analysts access to it all in as close to real time as possible. Under his direction, it has helped pioneer new methods of massive storage and retrieval. That has led to a data glut. The agency has collected so much information that it ran out of storage capacity at its 350-acre headquarters at Fort Meade, Maryland, outside Washington, D.C. At a cost of more than $2 billion, it has built a new processing facility in the Utah desert, and it recently broke ground on a complex in Maryland. There is a line item in the NSA's budget just for research on "coping with information overload."
Yet it's still not enough for Alexander, who has proposed installing the NSA's surveillance equipment on the networks of defense contractors, banks, and other organizations deemed essential to the U.S. economy or national security. Never has this intelligence agency -- whose primary mission is espionage, stealing secrets from other governments -- proposed to become the electronic watchman of American businesses.
This kind of radical expansion shouldn't come as a surprise. In fact, it's a hallmark of Alexander's career. During the Iraq war, for example, he pioneered a suite of real-time intelligence analysis tools that aimed to scoop up every phone call, email, and text message in the country in a search for terrorists and insurgents. Military and intelligence officials say it provided valuable insights that helped turn the tide of the war. It was also unprecedented in its scope and scale. He has transferred that architecture to a global scale now, and with his responsibilities at Cyber Command, he is expanding his writ into the world of computer network defense and cyber warfare.
As a result, the NSA has never been more powerful, more pervasive, and more politically imperiled. The same philosophy that turned Alexander into a giant -- acquire as much data from as many sources as possible -- is now threatening to undo him. Alexander today finds himself in the unusual position of having to publicly defend once-secret programs and reassure Americans that the growth of his agency, which employs more than 35,000 people, is not a cause for alarm. In July, the House of Representatives almost approved a law to constrain the NSA's authorities -- the closest Congress has come to reining in the agency since the 9/11 attacks. That narrow defeat for surveillance opponents has set the stage for a Supreme Court ruling on whether metadata -- the information Alexander has most often sought about Americans -- should be afforded protection under the Fourth Amendment's prohibition against "unreasonable searches and seizures," which would make metadata harder for the government to acquire.
Alexander declined Foreign Policy's request for an interview, but in response to questions about his leadership, his respect for civil liberties, and the Snowden leaks, he provided a written statement.
"The missions of NSA and USCYBERCOM are conducted in a manner that is lawful, appropriate, and effective, and under the oversight of all three branches of the U.S. government," Alexander stated. "Our mission is to protect our people and defend the nation within the authorities granted by Congress, the courts and the president. There is an ongoing investigation into the damage sustained by our nation and our allies because of the recent unauthorized disclosure of classified material. Based on what we know to date, we believe these disclosures have caused significant and irreversible harm to the security of the nation."
In lieu of an interview about his career, Alexander's spokesperson recommended a laudatory profile about him that appeared in West Point magazine. It begins: "At key moments throughout its history, the United States has been fortunate to have the right leader -- someone with an ideal combination of rare talent and strong character -- rise to a position of great responsibility in public service. With General Keith B. Alexander ... Americans are again experiencing this auspicious state of affairs."
Lawmakers and the public are increasingly taking a different view. They are skeptical about what Alexander has been doing with all the data he's collecting -- and why he's been willing to push the bounds of the law to get it. If he's going to preserve his empire, he'll have to mount the biggest charm offensive of his career. Fortunately for him, Alexander has spent as much time building a political base of power as a technological one.
* * *
Those who know Alexander say he is introspective, self-effacing, and even folksy. He's fond of corny jokes and puns and likes to play pool, golf, and Bejeweled Blitz, the addictive puzzle game, on which he says he routinely scores more than 1 million points.
Alexander is also as skilled a Washington knife fighter as they come. To get the NSA job, he allied himself with the Pentagon brass, most notably Donald Rumsfeld, who distrusted Hayden and thought he had been trying to buck the Pentagon's control of the NSA. Alexander also called on all the right committee members on Capitol Hill, the overseers and appropriators who hold the NSA's future in their hands.
When he was running the Army's Intelligence and Security Command, Alexander brought many of his future allies down to Fort Belvoir for a tour of his base of operations, a facility known as the Information Dominance Center. It had been designed by a Hollywood set designer to mimic the bridge of the starship Enterprise from Star Trek, complete with chrome panels, computer stations, a huge TV monitor on the forward wall, and doors that made a "whoosh" sound when they slid open and closed. Lawmakers and other important officials took turns sitting in a leather "captain's chair" in the center of the room and watched as Alexander, a lover of science-fiction movies, showed off his data tools on the big screen.
"Everybody wanted to sit in the chair at least once to pretend he was Jean-Luc Picard," says a retired officer in charge of VIP visits.
Alexander wowed members of Congress with his eye-popping command center. And he took time to sit with them in their offices and explain the intricacies of modern technology in simple, plain-spoken language. He demonstrated a command of the subject without intimidating those who had none.
"Alexander is 10 times the political general as David Petraeus," says the former administration official, comparing the NSA director to a man who was once considered a White House contender. "He could charm the paint off a wall."
Alexander has had to muster every ounce of that political savvy since the Snowden leaks started coming in June. In closed-door briefings, members of Congress have accused him of deceiving them about how much information he has been collecting on Americans. Even when lawmakers have screamed at him from across the table, Alexander has remained "unflappable," says a congressional staffer who has sat in on numerous private briefings since the Snowden leaks. Instead of screaming back, he reminds lawmakers about all the terrorism plots that the NSA has claimed to help foil.
"He is well aware that he will be criticized if there's another attack," the staffer says. "He has said many times, 'My job is to protect the American people. And I have to be perfect.'"
There's an implied threat in that statement. If Alexander doesn't get all the information he wants, he cannot do his job. "He never says it explicitly, but the message is, 'You don't want to be the one to make me miss,'" says the former administration official. "You don't want to be the one that denied me these capabilities before the next attack."
Alexander has a distinct advantage over most, if not all, intelligence chiefs in the government today: He actually understands the multibillion-dollar technical systems that he's running.
"When he would talk to our engineers, he would get down in the weeds as far as they were. And he'd understand what they were talking about," says a former NSA official. In that respect, he had a leg up on Hayden, who colleagues say is a good big-picture thinker but lacks the geek gene that Alexander was apparently born with.
"He looked at the technical aspects of the agency more so than any director I've known," says Richard "Dickie" George, who spent 41 years at the NSA and retired as the technical director of the Information Assurance Directorate. "I get the impression he would have been happy being one of those guys working down in the noise," George said, referring to the front-line technicians and analysts working to pluck signals out of the network.
Alexander, 61, has been a techno-spy since the beginning of his military career. After graduating from West Point in 1974, he went to West Germany, where he was initiated in the dark arts of signals intelligence. Alexander spent his time eavesdropping on military communications emanating from East Germany and Czechoslovakia. He was interested in the mechanics that supported this brand of espionage. He rose quickly through the ranks.
"It's rare to get a commander who understands technology," says a former Army officer who served with Alexander in 1995, when Alexander was in charge of the 525th Military Intelligence Brigade at Fort Bragg, North Carolina. "Even then he was into big data. You think of the wizards as the guys who are in their 20s." Alexander was 42 at the time.
At the turn of the century, Alexander took the big-data approach to counterterrorism. How well that method worked continues to be a matter of intense debate. Surely discrete interceptions of terrorists' phone calls and emails have helped disrupt plots and prevent attacks. But huge volumes of data don't always help catch potential plotters. Sometimes, the drive for more data just means capturing more ordinary people in the surveillance driftnet.
When he ran INSCOM and was horning in on the NSA's turf, Alexander was fond of building charts that showed how a suspected terrorist was connected to a much broader network of people via his communications or the contacts in his phone or email account.
"He had all these diagrams showing how this guy was connected to that guy and to that guy," says a former NSA official who heard Alexander give briefings on the floor of the Information Dominance Center. "Some of my colleagues and I were skeptical. Later, we had a chance to review the information. It turns out that all [that] those guys were connected to were pizza shops."
A retired military officer who worked with Alexander also describes a "massive network chart" that was purportedly about al Qaeda and its connections in Afghanistan. Upon closer examination, the retired officer says, "We found there was no data behind the links. No verifiable sources. We later found out that a quarter of the guys named on the chart had already been killed in Afghanistan."
Those network charts have become more massive now that Alexander is running the NSA. When analysts try to determine if a particular person is engaged in terrorist activity, they may look at the communications of people who are as many as three steps, or "hops," removed from the original target. This means that even when the NSA is focused on just one individual, the number of people who are being caught up in the agency's electronic nets could easily be in the tens of millions.
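The arithmetic behind that claim is worth seeing explicitly. The sketch below is a rough upper bound rather than a real count: it assumes an average of 300 distinct phone and email contacts per person, which is an illustrative assumption rather than a figure from this article, and real contact networks overlap heavily, so actual totals would be lower.

    # Rough upper bound on three-hop network reach.
    # ASSUMPTION: ~300 distinct contacts per person (illustrative only);
    # overlapping contacts make this a ceiling, not an actual count.
    avg_contacts = 300
    for hops in range(1, 4):
        reach = avg_contacts ** hops
        print(f"{hops} hop(s) from one target: up to {reach:,} people")
    # 3 hops -> up to 27,000,000 people, i.e., the "tens of millions"
    # scale described above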
According to an internal audit, the agency's surveillance operations have been beset by human error and fooled by moving targets. After the NSA's legal authorities were expanded and the PRISM program was implemented, the agency inadvertently collected Americans' communications thousands of times each year, between 2008 and 2012, in violation of privacy rules and the law.
Yet the NSA still pursued a counterterrorism strategy that relies on ever-bigger data sets. Under Alexander's leadership, one of the agency's signature analysis tools was a digital graph that showed how hundreds, sometimes thousands, of people, places, and events were connected to each other. They were displayed as a tangle of dots and lines. Critics called it the BAG -- for "big ass graph" -- and said it produced very few useful leads. CIA officials in charge of tracking overseas terrorist cells were particularly unimpressed by it. "I don't need this," a senior CIA officer working on the agency's drone program once told an NSA analyst who showed up with a big, nebulous graph. "I just need you to tell me whose ass to put a Hellfire missile on."
Given his pedigree, it's unsurprising that Alexander is a devotee of big data. "It was taken as a given for him, as a career intelligence officer, that more information is better," says another retired military officer. "That was ingrained."
But Alexander was never alone in his obsession. An obscure civilian engineer named James Heath has been a constant companion for a significant portion of Alexander's career. More than any one person, Heath influenced how the general went about building an information empire.
Several former intelligence officials who worked with Heath described him as Alexander's "mad scientist." Another called him the NSA director's "evil genius." For years, Heath, a brilliant but abrasive technologist, has been in charge of making Alexander's most ambitious ideas a reality; many of the controversial data-mining tools that Alexander wanted to use against the NSA's raw intelligence were developed by Heath, for example. "He's smart, crazy, and dangerous. He'll push the technology to the limits to get it to do what he wants," says a former intelligence official.
Heath has followed Alexander from post to post, but he almost always stays in the shadows. Heath recently retired from government service as the senior science advisor to the NSA director -- Alexander's personal tech guru. "The general really looked to him for advice," says George, the former technical director. "Jim didn't mind breaking some eggs to make an omelet. He couldn't do that on his own, but General Alexander could. They brought a sense of needing to get things done. They were a dynamic duo."
Precisely where Alexander met Heath is unclear. They have worked together since at least 1995, when Alexander commanded the 525th Military Intelligence Brigade and Heath was his scientific sidekick. "That's where Heath took his first runs at what he called 'data visualization,' which is now called 'big data,'" says a retired military intelligence officer. Heath was building tools that helped commanders on the field integrate information from different sensors -- reconnaissance planes, satellites, signals intercepts -- and "see" it on their screens. Later, Heath would work with tools that showed how words in a document or pages on the Internet were linked together, displaying those connections in the form of three-dimensional maps and graphs.
At the Information Dominance Center, Heath built a program called the "automatic ingestion manager." It was a search engine for massive sets of data, and in 1999, he started taking it for test runs on the Internet.
In one experiment, the retired officer says, the ingestion manager searched for all web pages linked to the website of the Defense Intelligence Agency (DIA). Those included every page on the DIA's site, and the tool scoured and copied them so aggressively that it was mistaken for a hostile cyberattack. The site's automated defenses kicked in and shut it down.
On another occasion, the searching tool landed on an anti-war website while searching for information about the conflict in Kosovo. "We immediately got a letter from the owner of the site wanting to know why the military was spying on him," the retired officer says. As far as he knows, the owner took no legal action against the Army, and the test run was stopped.
Those experiments with "bleeding-edge" technology, as the denizens of the Information Dominance Center liked to call it, shaped Heath and Alexander's approach to technology in spy craft. And when they ascended to the NSA in 2005, their influence was broad and profound. "These guys have propelled the intelligence community into big data," says the retired officer.
Heath was at Alexander's side for the expansion of Internet surveillance under the PRISM program. Colleagues say it fell largely to him to design technologies that tried to make sense of all the new information the NSA was gobbling up. But Heath had developed a reputation for building expensive systems that never really work as promised and then leaving them half-baked in order to follow Alexander on to some new mission.
"He moved fairly fast and loose with money and spent a lot of it," the retired officer says. "He doubled the size of the Information Dominance Center and then built another facility right next door to it. They didn't need it. It's just what Heath and Alexander wanted to do." The Information Operations Center, as it was called, was underused and spent too much money, says the retired officer. "It's a center in search of a customer."
Heath's reputation followed him to the NSA. In early 2010, weeks after a young al Qaeda terrorist with a bomb sewn into his underwear tried to bring down a U.S. airliner over Detroit on Christmas Day, the director of national intelligence, Dennis Blair, called for a new tool that would help the disparate intelligence agencies better connect the dots about terrorism plots. The NSA, the State Department, and the CIA each had possessed fragments of information about the so-called underwear bomber's intentions, but there had been no dependable mechanism for integrating them all and providing what one former national security official described as "a quick-reaction capability" so that U.S. security agencies would be warned about the bomber before he got on the plane.
Blair put the NSA in charge of building this new capability, and the task eventually fell to Heath. "It was a complete disaster," says the former national security official, who was briefed on the project. "Heath's approach was all based on signals intelligence [the kind the NSA routinely collects] rather than taking into account all the other data coming in from the CIA and other sources. That's typical of Heath. He's got a very narrow viewpoint to solve a problem."
Like other projects of Heath's, the former official says, this one was never fully implemented. As a result, the intelligence community still didn't have a way to stitch together clues from different databases in time to stop the next would-be bomber. Heath -- and Alexander -- moved on to the next big project.
"There's two ways of looking at these guys," the retired military officer says. "Two visionaries who took risks and pushed the intelligence community forward. Or as two guys who blew a monumental amount of money."
As immense as the NSA's mission has become -- patrolling the world's data fields in search of terrorists, spies, and computer hackers -- it is merely one phase of Alexander's plan. The NSA's primary mission is to protect government systems and information. But under his leadership, the agency is also extending its reach into the private sector in unprecedented ways.
Toward the end of George W. Bush's administration, Alexander helped persuade Defense Department officials to set up a computer network defense project to prevent foreign intelligence agencies -- mainly China's -- from stealing weapons plans and other national secrets from government contractors' computers.
Under the Defense Industrial Base initiative, also known as the DIB, the NSA provides the companies with intelligence about the cyberthreats it's tracking. In return, the companies report back about what they see on their networks and share intelligence with each other.
Pentagon officials say the program has helped stop some cyber-espionage. But many corporate participants say Alexander's primary motive has not been to share what the NSA knows about hackers. It's to get intelligence from the companies -- to make them the NSA's digital scouts. What is billed as an information-sharing arrangement has sometimes seemed more like a one-way street, leading straight to the NSA's headquarters at Fort Meade.
"We wanted companies to be able to share information with each other," says the former administration official, "to create a picture about the threats against them. The NSA wanted the picture."
After the DIB was up and running, Alexander proposed going further. "He wanted to create a wall around other sensitive institutions in America, to include financial institutions, and to install equipment to monitor their networks," says the former administration official. "He wanted this to be running in every Wall Street bank."
That aspect of the plan has never been fully implemented, largely due to legal concerns. If a company allowed the government to install monitoring equipment on its systems, a court could decide that the company was acting as an agent of the government. And if surveillance were conducted without a warrant or legitimate connection to an investigation, the company could be accused of violating the Fourth Amendment. Warrantless surveillance can be unconstitutional regardless of whether the NSA or Google or Goldman Sachs is doing it.
"That's a subtle point, and that subtlety was often lost on NSA," says the former administration official. "Alexander has ignored that Fourth Amendment concern."
The DIB experiment was a first step toward Alexander's taking more control over the country's cyberdefenses, and it was illustrative of his assertive approach to the problem. "He was always challenging us on the defensive side to be more aware and to try and find and counter the threat," says Tony Sager, who was the chief operating officer for the NSA's Information Assurance Directorate, which protects classified government information and computers. "He wanted to know, 'Who are the bad guys? How do we go after them?'"
While it's a given that the NSA cannot monitor the entire Internet on its own and that it needs intelligence from companies, Alexander has questioned whether companies have the capacity to protect themselves. "What we see is an increasing level of activity on the networks," he said recently at a security conference in Canada. "I am concerned that this is going to break a threshold where the private sector can no longer handle it and the government is going to have to step in."
* * *
Now, for the first time in Alexander's career, Congress and the general public are expressing deep misgivings about sharing information with the NSA or letting it install surveillance equipment. A Rasmussen poll of likely voters taken in June found that 68 percent believe it's likely the government is listening to their communications, despite repeated assurances from Alexander and President Barack Obama that the NSA is only collecting anonymous metadata about Americans' phone calls. In another Rasmussen poll, 57 percent of respondents said they think it's likely that the government will use NSA intelligence "to harass political opponents."
Some who know Alexander say he doesn't appreciate the depth of public mistrust and cynicism about the NSA's mission. "People in the intelligence community in general, and certainly Alexander, don't understand the strategic value of having a largely unified country and a long-term trust in the intelligence business," says a former intelligence official, who has worked with Alexander. Another adds, "There's a feeling within the NSA that they're all patriotic citizens interested in protecting privacy, but they lose sight of the fact that people don't trust the government."
Even Alexander's strongest critics don't doubt his good intentions. "He's not a nefarious guy," says the former administration official. "I really do feel like he believes he's doing this for the right reasons." Two of the retired military officers who have worked with him say Alexander was seared by the bombing of the USS Cole in 2000 and later the 9/11 attacks, a pair of major intelligence failures that occurred while he was serving in senior-level positions in military intelligence. They said he vowed to do all he could to prevent another attack that could take the lives of Americans and military service members.
But those who've worked closely with Alexander say he has become blinded by the power of technology. "He believes they have enough technical safeguards in place at the NSA to protect civil liberties and perform their mission," the former administration official says. "They do have a very robust capability -- probably better than any other agency. But he doesn't get that this power can still be abused. Americans want introspection. Transparency is a good thing. He doesn't understand that. In his mind it's 'You should trust me, and in exchange, I give you protection.'"
On July 30 in Las Vegas, Alexander sat down for dinner with a group of civil liberties activists and Internet security researchers. He was in town to give a keynote address the next day at the Black Hat security conference. The mood at the table was chilly, according to people who were in attendance. In 2012, Alexander had won plaudits for his speech at Black Hat's sister conference, Def Con, in which he'd implored the assembled community of experts to join him in their mutual cause: protecting the Internet as a safe space for speech, communications, and commerce. Now, however, nearly two months after the first leaks from Snowden, the people around the table wondered whether they could still trust the NSA director.
His dinner companions questioned Alexander about the NSA's legal authority to conduct massive electronic surveillance. Two guests had recently written a New York Times op-ed calling the NSA's activities "criminal." Alexander was quick to debate the finer points of the law and defend his agency's programs -- at least the ones that have been revealed -- as closely monitored and focused solely on terrorists' information.
But he also tried to convince his audience that they should help keep the NSA's surveillance system running. In so many words, Alexander told them: The terrorists only have to succeed once to kill thousands of people. And if they do, all of the rules we have in place to protect people's privacy will go out the window.
Alexander cast himself as the ultimate defender of civil liberties, as a man who needs to spy on some people in order to protect everyone. He knows that in the wake of another major terrorist attack on U.S. soil, the NSA will be unleashed to find the perpetrators and stop the next assault. Random searches of metadata, broad surveillance of purely domestic communications, warrantless seizure of stored communications -- presumably these and other extraordinary measures would be on the table. Alexander may not have spelled out just what the NSA would do after another homeland strike, but the message was clear: We don't want to find out.
Alexander was asking his dinner companions to trust him. But his credibility has been badly damaged. Alexander was heckled at his speech the next day at Black Hat. He had been slated to talk at Def Con too, but the organizers rescinded their invitation after the Snowden leaks. And even among Alexander's cohort, trust is flagging.
"You'll never find evidence that Keith sits in his office at lunch listening to tapes of U.S. conversations," says a former NSA official. "But I think he has a little bit of naiveté about this controversy. He thinks, 'What's the problem? I wouldn't abuse this power. Aren't we all honorable people?' People get into these insular worlds out there at NSA. I think Keith fits right in."
One of the retired military officers, who worked with Alexander on several big-data projects, said he was shaken by revelations that the agency is collecting all Americans' phone records and examining enormous amounts of Internet traffic. "I've not changed my opinion on the right balance between security versus privacy, but what the NSA is doing bothers me," he says. "It's the massive amount of information they're collecting. I know they're not listening to everyone's phone calls. No one has time for that. But speaking as an analyst who has used metadata, I do not sleep well at night knowing these guys can see everything. That trust has been lost."
||||| (updated below; Update II [w/correction])
It has been previously reported that the mentality of NSA chief Gen. Keith Alexander is captured by his motto "Collect it All". It's a get-everything approach he pioneered first when aimed at an enemy population in the middle of a war zone in Iraq, one he has now imported onto US soil, aimed at the domestic population and everyone else.
But a perhaps even more disturbing and revealing vignette into the spy chief's mind comes from a new Foreign Policy article describing what the journal calls his "all-out, barely-legal drive to build the ultimate spy machine". The article describes how even his NSA peers see him as a "cowboy" willing to play fast and loose with legal limits in order to construct a system of ubiquitous surveillance. But the personality driving all of this - not just Alexander's but much of Washington's - is perhaps best captured by this one passage, highlighted by PBS' News Hour in a post entitled: "NSA director modeled war room after Star Trek's Enterprise". The room was christened as part of the "Information Dominance Center":
"When he was running the Army's Intelligence and Security Command, Alexander brought many of his future allies down to Fort Belvoir for a tour of his base of operations, a facility known as the Information Dominance Center. It had been designed by a Hollywood set designer to mimic the bridge of the starship Enterprise from Star Trek, complete with chrome panels, computer stations, a huge TV monitor on the forward wall, and doors that made a 'whoosh' sound when they slid open and closed. Lawmakers and other important officials took turns sitting in a leather 'captain's chair' in the center of the room and watched as Alexander, a lover of science-fiction movies, showed off his data tools on the big screen. "'Everybody wanted to sit in the chair at least once to pretend he was Jean-Luc Picard,' says a retired officer in charge of VIP visits."
Numerous commentators remarked yesterday on the meaning of all that (note, too, how "Total Information Awareness" was a major scandal in the Bush years, but "Information Dominance Center" - along with things like "Boundless Informant" - are treated as benign or even noble programs in the age of Obama).
But now, on the website of DBI Architects, Inc. of Washington and Reston, Virginia, there are what purport to be photographs of the actual Star-Trek-like headquarters commissioned by Gen. Alexander that so impressed his Congressional overseers. It's a 10,740 square foot labyrinth in Fort Belvoir, Virginia. The brochure touts how "the prominently positioned chair provides the commanding officer an uninterrupted field of vision to a 22'-0" wide projection screen":
The glossy display further describes how "this project involved the renovation of standard office space into a highly classified, ultramodern operations center." Its "primary function is to enable 24-hour worldwide visualization, planning, and execution of coordinated information operations for the US Army and other federal agencies." It gushes: "The futuristic, yet distinctly military, setting is further reinforced by the Commander's console, which gives the illusion that one has boarded a star ship":
Other photographs of Gen. Alexander's personal Star Trek Captain fantasy come-to-life (courtesy of public funds) are here. Any casual review of human history proves how deeply irrational it is to believe that powerful factions can be trusted to exercise vast surveillance power with little accountability or transparency. But the more they proudly flaunt their warped imperial hubris, the more irrational it becomes.
Related issues
(1) Harvard Law Professor Yochai Benkler has an excellent Op-Ed in the Guardian arguing that the NSA is so far out-of-control that radical measures, rather than incremental legislative reform, are necessary to rein it in.
(2) The Federation of American Scientists' Steven Aftergood, usually a reform-minded transparency advocate somewhat hostile to massive leaks, examines the serious reform which Snowden's disclosures are enabling, as reluctantly acknowledged even by the FISA court and James Clapper himself.
(3) British comedian Russell Brand attended an event sponsored by GQ and Hugo Boss and gave a speech, while accepting an award, which offended almost everyone in the room (that speech is here). He then wrote a genuinely brilliant (and quite hilarious) Op-Ed in the Guardian about the role elite institutions play in reinforcing their legitimacy and how they maintain control of public discourse. It is well worth taking the time to read it.
UPDATE
Speaking of rampant, Strangelove-like megalomania in the National Security State, do read these remarkable comments from former NSA and CIA chief Gen. Michael Hayden regarding how the US views the internet.
UPDATE II [w/correction]
As the Washington Post and Slate both point out about the Foreign Policy article, the Star Trek room was used by Gen. Alexander but not actually commissioned by him, as I erroneously indicated. As the Post writes, this is "not to say that he didn't revel in the futuristic command center's bells and whistles, which include doors that make a distinctive 'whooshing' sound when opening or closing." As Foreign Policy reported, "Alexander, a lover of science-fiction movies, showed off his data tools on the big screen" when members of Congress and other dignitaries visited. The Post adds:
"The nifty workspace seemed to make an impression on the members of congress and other important visitors who dropped by to check it out. 'Everybody wanted to sit in the chair at least once to pretend he was Jean-Luc Picard,' a retired officer in charge of VIP visits told Foreign Policy."
But the room was commissioned before Alexander arrived. | Buried in a lengthy Foreign Policy profile on NSA chief Gen. Keith Alexander is this fascinating tidbit: When running the US Army's Intelligence and Security Command, he had its base of operations at Fort Belvoir—called the Information Dominance Center—built to look like the bridge of the USS Enterprise from Star Trek. It was even designed by a Hollywood set designer, with sliding doors that make a "whoosh" sound when they open and close. Alexander, a sci-fi fan, would reportedly impress visiting lawmakers and officials by letting them sit in the leather captain's chair while he showed off all his tech and tools on the big screen. "Everybody wanted to sit in the chair at least once to pretend he was Jean-Luc Picard," says a source who was previously in charge of VIP visits. Over at the Guardian, Glenn Greenwald has dug up what appear to be photos of the center on the website of an architecture firm. |
IDEA is the primary federal law addressing the unique educational needs of children with disabilities. Millions of youths with disabilities aged 3 through 21 receive educational services under IDEA each year. In 1975, the Congress enacted the Education for All Handicapped Children Act (EHA), which mandated that a free, appropriate public education be made available for all children with disabilities, ensured due process rights, required individualized education programs, and required placement of children with disabilities in the least restrictive environment. Subsequent amendments to this law added other provisions and programs in support of children with disabilities and their parents and renamed the law as the IDEA in 1990. IDEA was most recently substantially revised in 1997.

IDEA defines childhood disabilities to include a number of different emotional or physical conditions. Specifically, IDEA defines a “child with a disability” as a child with mental retardation; hearing, speech, or language impairments; visual impairments; orthopedic impairments; serious emotional disturbance; autism; traumatic brain injury; other health impairments; or specific learning disabilities, who, for this reason, needs special education and related services. By requiring that eligible children with disabilities receive special education services to address their educational needs in the least restrictive environment, IDEA mandates that such students are to be educated, to the maximum extent appropriate, with children who are not disabled. Generally, disabled students are to be removed from the regular education class only when they cannot be educated in that setting with supplementary aids and services.

IDEA provides safeguards to ensure that children with disabilities who engage in misconduct are not unfairly deprived of educational services. For example, in developing the child’s IEP, the team—which includes at least one of the child’s regular education teachers and others providing special education resources—must consider strategies to address any behavior that may impede the child’s learning or the learning of others. If a child with a disability engages in misconduct, the school may take disciplinary action; however, the school may also be required to convene the IEP team to conduct a behavioral assessment and develop or review an intervention plan to address the behavior that resulted in the disciplinary action. Also, when the suspension considered is for more than 10 school days at a time, the IEP team must review the relationship between the child’s disability and the behavior that resulted in the disciplinary action.

In October 1997, the Department of Education issued proposed regulations implementing the amendments. The proposed regulations contained several provisions that would allow services to continue to special education students who were suspended or expelled. In response, some districts put in place discipline policies, consistent with the proposed regulations, that limited suspensions of special education students. In commenting on the proposed regulations, school administrators and others voiced concerns that several procedural and discipline provisions designed to protect the rights of students with disabilities made it difficult for administrators and teachers to preserve school safety and order.
After receiving nearly 6,000 public comments, Education issued final regulations for the IDEA amendments on March 12, 1999. The final regulations included some changes to the discipline provisions that attempted to respond to some of these concerns. According to Education, the discipline provisions in the final regulations give school officials reasonable flexibility to deal with minor infractions of school rules, while ensuring that special education students continue to receive educational services. To avoid disruption during the school year, Education did not require states to comply with the new regulations until, essentially, the 1999-2000 school year began.

Generally, under IDEA and the 1999 implementing federal regulations, schools are permitted to suspend a special education student for up to 10 school days in a given school year without providing educational services or removing the child to an alternative educational setting. However, if the misconduct is not a manifestation of the student’s disability, the student may be suspended beyond 10 school days; for such suspensions, the special education student must be provided educational services. The final regulations require a manifestation determination—to assess whether the student’s misconduct was caused by his disability—and an IEP team meeting only when a suspension is for more than 10 school days at a time. Otherwise, for short-term suspensions lasting 10 or fewer school days that do not constitute a change in placement, a manifestation determination and an IEP meeting are not mandatory. Additionally, the final regulations also permit repeated short-term (not more than 10 school days) suspensions of a disabled student, even if the suspensions cumulatively total more than 10 school days, so long as educational services are provided to the student after the 10th suspension day in a given school year.

The regulations also modify a school’s authority to suspend a disabled student for more than 10 school days. Specifically, prior to the 1997 IDEA amendments, a student with a disability could be removed for up to 45 days to an interim alternative educational setting for carrying a firearm; under the revised law and the implementing regulations, this suspension authority has been expanded to include a disabled student who possesses or carries a weapon or possesses, uses, sells, or solicits drugs at school, as well as a disabled student determined by a hearing officer to be so dangerous that the student’s behavior “is substantially likely to result in injury to the child or others.”

Before special education students may be removed from their current educational placement, however, IDEA provides a number of procedural safeguards. One such safeguard is a student’s right to remain in his or her current educational placement during any due process and subsequent judicial proceedings that follow the initial disciplinary removal. This safeguard was designed to limit the exclusion of students with disabilities from their educational setting because of their disability. In the past, such exclusions were alleged to have occurred so that schools, under the guise of minimizing disruptions or protecting other students, would not have to provide expensive services to disabled students. However, the so-called “stay-put” provision, whereby a child’s educational placement is to be maintained, has been perceived by some as limiting the authority of school personnel to remove special education students from school for disciplinary infractions.
Education publicized the issuance of these final regulations extensively through printed materials and via its agency Web site. It also provided training and support materials to states and school districts explaining the changes. Education held a series of public forums around the country for local education agencies, schools, and other interested parties to explain the changes to the final regulations, with a special emphasis on the changes to the discipline provisions. It also held interactive videoconferences for the public and made numerous presentations at state forums. Education funded partnership grants with various groups to provide approved training and information at the local level. Finally, the agency issued memorandums related to IDEA implementation in electronic and printed form to provide guidance and answers to commonly asked questions.

About 81 percent of schools responding to our survey experienced one or more incidents of serious misconduct in the 1999-2000 school year. Most principals reported to us (consistent with prior research findings) that most incidents of serious misconduct were acts of violent behavior, generally fistfights; firearms incidents were rare. Although the number of incidents was greater among regular education students, special education students had a higher rate of serious misconduct (per 1,000 students) than regular education students in reporting schools. The most common effect of serious misconduct was a disruption of student learning. Other effects, as reported by principals, included administrators and teachers having to spend an undue amount of time responding to the misconduct. Principals attributed the effects of serious misconduct to incidents caused by both regular education students and special education students.

On the basis of our analysis of the data reported to us, 81 percent of the 272 responding schools experienced at least one incident of serious misconduct in the 1999-2000 school year (see table 1). Schools responding to our survey experienced an average of 10 incidents of serious misconduct among regular education students and 4 incidents among special education students in school year 1999-2000 (see table 2). To make a comparison that controls for the greater number of regular education students in schools (they were 88 percent of all students in the schools we surveyed), we calculated rates of misconduct per 1,000 students. We found that special education students had a higher rate of misconduct. For every 1,000 regular education students represented in our survey, there were 15 incidents of serious misconduct reported; for every 1,000 special education students, there were 50 incidents of serious misconduct reported.

Violent behavior was the most common type of serious misconduct engaged in by students, according to responding principals. Based on information we received from written survey comments, from discussions we had with school officials during our survey data clarification, and from our site visits, many of the violent incidents were student fistfights. Seven of every 10 incidents among regular education students and 3 of every 4 incidents among special education students were acts of violent behavior.

The number of incidents reported by principals varied. While 22 percent of responding principals reported no serious misconduct among regular education students during the 1999-2000 school year, 31 percent reported 10 or more incidents among regular education students.
Further, 34 percent reported no serious misconduct among special education students, while 15 percent reported 10 or more incidents among this group. More detailed information on the incidence of serious misconduct appears in tables 6 and 7 in appendix II.

Serious misconduct, whether committed by regular education or special education students, leads to a variety of negative effects on the school community (see fig. 1). The most common effect—reported by 52 percent of responding principals—is a disruption in student learning. The next most common effect of serious misconduct involves the time and attention teachers and administrators must devote to dealing with student misconduct. Forty-seven percent of responding principals indicated school administrators have to spend an undue amount of time and attention on serious misconduct, and 29 percent of responding principals indicated that teachers have to spend an undue amount of time on discipline procedures and reviewing district discipline policies.

These responses are consistent with the comments we heard in our site visits. Some of the staff we interviewed stated that IDEA-related discipline processes were burdensome when compared with actions taken regarding regular education students and that they took resources away from other activities. Other effects reported in the survey responses were a negative impact on efforts to meet state or district learning standards and difficulty hiring substitute teachers.

Principals responding to our survey attributed the more common effects of serious misconduct—disruption of student learning; school administrators and teachers spending an undue amount of time and attention on disciplinary matters; negative impact on efforts to meet state or district learning standards; and difficulty hiring substitute teachers—to both regular education and special education students. However, principals generally attributed the effects somewhat more frequently to special education students than to regular education students (especially effects involving the time spent in dealing with serious misconduct). For example, 127 principals indicated that administrators had spent an undue amount of time and effort in dealing with serious misconduct. Among these principals, 80 said this effect resulted from misconduct by both regular education and special education students. An additional 40 principals indicated the effect arose solely from misconduct by special education students, while 7 other principals attributed the effect exclusively to misconduct by regular education students. Likewise, 50 of the 80 principals who said that teachers had spent an undue amount of time on disciplinary matters indicated that this effect was attributable to both regular and special education students. The remaining 30 principals indicated the effect had resulted exclusively from misconduct by special education students. Principals attributed each of the remaining three more common effects to the misconduct of both groups as well (see table 8 in app. II for the complete list of effects arising from serious misconduct and the frequency that principals attributed them to each student group).

Based on our analysis of reported disciplinary actions and past research, regular education and special education students who engaged in serious misconduct were treated in a similar manner. Regardless of student status, about 60 to 65 percent of students who engaged in serious misconduct during school year 1999-2000 were given out-of-school suspensions.
Moreover, most suspended students from either group were given short-term, rather than long-term, suspensions. The portion of suspended special education students who received educational services during their suspensions was not much different from the portion of suspended regular education students who received services. Finally, the percentages of regular education and special education perpetrators who were suspended from school and/or placed in an alternative educational setting were 15 percent and 17 percent, respectively.

We asked principals in our survey to indicate the type and frequency of disciplinary actions they took with students in response to the serious misconduct engaged in by regular education and special education students during the 1999-2000 school year. The information principals provided to us reveals that there is little difference in how they discipline regular education and special education students who engage in serious misconduct. Table 3 compares the frequency with which principals took disciplinary actions with regular education and special education students who engaged in serious misconduct.

An out-of-school suspension was the most common disciplinary action taken against students who engaged in serious misconduct, based on our analysis of the data reported to us. Sixty-four percent of regular education students and 58 percent of special education students who engaged in serious misconduct were given out-of-school suspensions during the 1999-2000 school year. Relatively few students were expelled. A large majority of special education students who received an expulsion were provided educational services after the expulsion, consistent with IDEA requirements that schools continue to provide services to students with disabilities who are expelled. About one-half of regular education students received education services after expulsion.

Our analysis of the suspension data indicates little difference between the two student categories in terms of the length of suspensions received (see table 4). About two-thirds of each category of suspended students were suspended for a short period (1 to 3 days) rather than a long period (4 or more days). Forty-five percent of suspended special education students received educational services during the suspension period. By comparison, 35 percent of suspended regular education students received educational services during their suspension.

According to our analysis of the information reported to us, principals referred to the police or juvenile justice system similar portions of regular education and special education students involved in serious misconduct. Specifically, responding principals reported referring an average of 34 percent of special education perpetrators and 28 percent of regular education perpetrators to the police or juvenile justice system. A police referral was in addition to the disciplinary action reported above (in fact, police or uniformed security officers were present continually at many of the 17 schools we visited).

IDEA appears to play a limited role in schools’ ability to properly discipline students. Eighty-six percent of the 272 schools responding to our survey also operate under one or more local special education discipline policies that differ from IDEA and the final regulations by providing additional protections for students with disabilities.
In some instances, local special education discipline policies prohibit schools from taking actions that would be permissible under IDEA, while in other cases, these policies require schools to take actions not mandated by IDEA. For example, 64 percent of responding principals reported that a local policy prohibits suspension of special education students for more than 10 school days over the course of a school year, even though a suspension totaling more than 10 school days is permissible under IDEA.

Responding principals viewed some of these local policies more favorably than others and generally assessed their overall special education discipline policies, which are an amalgamation of IDEA and local policies, as moderately supporting discipline-related matters. Principals rated most negatively the local policy preventing suspension of a special education student more than 10 cumulative school days in a school year. Nevertheless, responding principals generally regarded their overall special education discipline policy as having a positive or neutral effect on the level of safety and orderliness in their schools.

Our analysis of principals’ responses showed that 86 percent also operate under one or more special education discipline policies that are different from the federal IDEA discipline policy because the local policies provide additional protections for special education students. These differences can be characterized as two types: (1) disciplinary actions permissible under IDEA but prohibited under local policies and (2) actions not mandated by IDEA but required by local policies. IDEA and local policies most frequently differ on actions related to student suspension.

According to information provided by responding principals, 64 percent are not allowed to suspend a special education student for more than 10 cumulative school days during a school year, 36 percent are required to provide services to the student throughout the suspension period, and 24 percent are required to determine whether the student’s behavior was a manifestation of his or her disability whenever suspension is being considered. In contrast, IDEA final regulations allow schools to suspend special education students for more than 10 cumulative school days in a school year and require neither of the latter two policies listed above for all suspensions. Table 5 summarizes differences between IDEA and local policies that were derived from responses to our survey. See appendix II for details of the reported variations between districts’ special education discipline policies and IDEA.

Responding principals generally viewed favorably or neutrally those special education discipline policies not mandated by IDEA but required at the local level. For example, 87 percent of principals who are required to offer services to suspended students and 72 percent who are required to conduct manifestation determinations rated these local policies as having a positive effect on their ability to properly discipline special education students or were neutral toward these policies. In contrast, they generally viewed more negatively those policies where actions are permissible under IDEA but prohibited at the local level.
For example, of principals who reported that they are unable to suspend special education students for more than 10 school days over a school year, 50 percent rated this policy as having a negative effect on their ability to properly discipline special education students, while 50 percent rated it as having no effect or a positive effect (see table 5).

Responding principals generally regarded their overall special education discipline policy, which essentially is a combination of IDEA and any local policies, as having a positive or neutral effect on their schools’ levels of safety and orderliness (see fig. 2). Specifically, 74 percent of responding principals rated their policies as having a positive or neutral effect on the safety level at their school (although the remaining 26 percent rated the policies as having a negative effect). Likewise, 76 percent rated their local policies as having a positive or neutral effect on their schools’ level of orderliness.

Among all principals who responded to our survey, the most frequent comment (expressed by 26 percent of all responding principals) in response to our open-ended questions was that the special education discipline policy under which they operate is not fair or equitable to teachers, students, and/or parents. Other comments included that the IEP meetings and documentation requirements associated with IDEA discipline procedures are burdensome and time-consuming (20 percent); special education discipline policies limit the school’s ability to appropriately discipline special education students (19 percent); and concern about the maximum number of school days that special education students can be suspended or placed in an alternative educational setting (13 percent).

The schools responding to our survey experienced a relatively small number of incidents of serious misconduct over the course of a school year. Regular education and special education students alike had engaged in serious misconduct, but the rate among special education students was higher than that of regular education students. This may be due, in part, to behavioral responses associated with some disabilities, which can manifest themselves in inappropriate behaviors.

Despite little difference in the actions taken by schools in our survey to discipline regular education and special education students, a sizable minority of principals voiced concern that their schools’ discipline policies impeded proper disciplinary action. Some of these comments may have resulted from the additional time and resources that principals reportedly must use to discipline special education students compared with regular education students.

Although the 1997 IDEA amendments and final federal regulations gave schools more flexibility in handling discipline issues, our analysis showed that local school district policies can provide additional protections when compared with provisions in the final federal regulations. Where it exists, the local policy that limits the suspension of special education students to no more than 10 cumulative school days per year is viewed negatively by about half of the principals who operate under it. This 10-school-day suspension limit may reflect school districts’ continuation of policies developed from the proposed IDEA federal regulations that were out for public comment through May 1999 but were replaced by the final regulations.
Where restrictive local policies are applied, they may alter the balance between protecting the rights of disabled students and ensuring that administrators are able to maintain the safe and orderly environment that the Congress and Education sought to achieve. Because the more common concerns we identified about different treatment for special education students resulted largely from local policy, changes to federal law will not address these concerns.

In commenting on the draft report the Department of Education stated that the report provided valuable factual information about special education discipline policy and practices. Education staff also provided technical comments, which we incorporated as appropriate. Education’s comments appear in appendix III.

We are sending copies of this report to the Secretary of Education, relevant congressional committees, and others who are interested. Copies will be made available to others on request. If you or your staffs have any questions concerning this report, please call me at (202) 512-7215. Another GAO contact and staff acknowledgments are listed in appendix IV.

This appendix describes the methodologies used in our review of IDEA and student discipline policies. All data collected were self-reported, and we did not independently verify their accuracy. We did our work from January 2000 to December 2000 in accordance with generally accepted government auditing standards.

To obtain a broad perspective on the issues surrounding IDEA and special education discipline, we interviewed researchers, public policy advisers, attorneys, and representatives of organizations that have an interest in special education and discipline policy in public schools in general. We asked for their opinions about the discipline of special education students in public schools and how IDEA affected the ability of school administrators to maintain safe and orderly schools. We gathered anecdotal data about different disciplinary treatment of special education and regular education students, but no group was able to provide us with national data on the disciplinary actions taken with regular education or special education students.

We sought data available from Department of Education sources on discipline for special education and regular education students. No national data from any of Education’s current data collections existed that would allow us to compare disciplinary actions taken with students from the two groups. Education is now collecting, by means of a survey, new information on discipline and special education issues, as required by IDEA. We met with the Education staff and their contractor who are responsible for this survey. We also met with or had telephone conversations with state officials who were responsible for their respective state IDEA-mandated data collection efforts. The first-year data collection effort had not been completed for all states, and the processing and cleaning of the data were still in their early stages in spring 2000. Moreover, these data did not include discipline data on regular education students that we needed to address one of our objectives. Therefore, while we had hoped to use information generated by this new data collection effort, we had to collect our own data.

Because no national comparative discipline information was available, we developed a survey instrument to gather data at the middle school and high school level for school year 1999-2000.
We chose to collect data at the school level because principals were the group most likely to have information on outcomes of serious misconduct by special education and regular education students. We eliminated elementary schools from our sample because our review of Department of Education and Department of Justice reports indicated that elementary schools were much less likely than either middle or high schools to experience or report any type of serious misconduct.

We mailed questionnaires to principals from 500 randomly selected public middle schools and high schools. We drew our sample from the most recently available address listing from the 1997 Common Core of Data maintained by the Department of Education. In addition, we surveyed the 70 largest schools drawn from that same list. We pretested our survey instrument with principals in area high schools in Maryland and Virginia. After we drew our samples, we learned that 50 of the randomly sampled cases were not public middle or high schools, so we excluded them from our sample and drew replacements. After receiving responses to our survey, we had to exclude an additional 35 cases from the random sample of 500 because these schools had closed, had moved, had been consolidated with other schools, or otherwise were no longer appropriate for inclusion in our sample.

Despite several follow-ups, only 60 percent of the principals from the random survey responded. This response rate is too low to permit us to produce estimates that are nationally representative. The 70 largest schools were predominantly located in California, Florida, New York, and Texas. Our response rate for the 70 largest schools was 27 percent. Most of the schools failed to respond to the survey despite repeated mailings and numerous telephone contacts. We also met with officials from New York City schools, which accounted for more than 25 percent of the large-school sample, and even though they reassured us that they would cooperate, no additional schools responded. The response rate from the large schools was too low to permit us to conduct a comparative analysis of large and small schools.

We augmented data from our mail surveys with information from site visits to three states: Louisiana, New York, and Wisconsin. On these site visits we met with state officials, nine district superintendents, special education directors, assistant principals, school security staff, and principals from 45 schools. We selected these states in order to visit with school staff from a variety of settings (urban/rural, large city/suburban), where IDEA and discipline issues were reported to be of significant concern. We discussed with these school officials their experiences with state and local district policy concerning school discipline for special education and regular education students and the impact that IDEA law and regulations have had on their ability to maintain safe and orderly schools.

In addition to those named above, the following persons made important contributions to the report: George Erhart, Brett Fallavollita, Elspeth Grindstaff, and Behn Miller.
| Standards for discipline and safety in schools are set primarily by local school districts. Federal and local regulations provide additional protections to special education students who misbehave to ensure that they are not unfairly deprived of their rights to an appropriate education. GAO reviewed regular and special education discipline policies to determine whether there were any differences in how disabled and non-disabled students were disciplined. GAO found the rate of misconduct among special education students was higher than that of regular education students. Despite little difference in the actions taken by schools to discipline regular education and special education students, a sizeable minority of principals voiced concern that their schools' discipline policies impeded proper disciplinary action. Although the 1997 Individuals With Disabilities Education Act amendments and final federal regulations gave schools more flexibility in handling discipline issues, GAO's analysis showed that local school district policies can provide additional protections when compared with provisions in the final regulations. Restrictive local policies may alter the balance between protecting the rights of disabled students and ensuring that administrators are able to maintain a safe and orderly environment. Because the more common concerns GAO identified about different treatment for special education students resulted largely from local policy, changes in federal law will not address these concerns. |
As we move further into the 21st century, it becomes increasingly important for the Congress, OMB, and executive agencies to face two overriding questions: What is the proper role for the federal government? How should the federal government do business? GPRA serves as a bridge between these two questions by linking results that the federal government seeks to achieve to the program approaches and resources that are necessary to achieve those results. The performance information produced by GPRA’s planning and reporting infrastructure can help build a government that is better equipped to deliver economical, efficient, and effective programs that can help address the challenges facing the federal government. Among the major challenges are instilling a results orientation, ensuring that daily operations contribute to results, understanding the performance consequences of budget decisions, coordinating crosscutting programs, and building the capacity to gather and use performance information.

The cornerstone of federal efforts to successfully meet current and emerging public demands is to adopt a results orientation; that is, to develop a clear sense of the results an agency wants to achieve as opposed to the products and services (outputs) an agency produces and the processes used to produce them. Adopting a results orientation requires transforming organizational cultures to improve decisionmaking, maximize performance, and assure accountability—it entails new ways of thinking and doing business. This transformation is not an easy one and requires investments of time and resources as well as sustained leadership commitment and attention.

Based on the results of our governmentwide survey in 2000 of managers at 28 federal agencies, many agencies face significant challenges in instilling a results orientation throughout the agency, as the following examples illustrate. At 11 agencies, less than half of the managers perceived, to at least a great extent, that a strong top leadership commitment to achieving results existed. At 26 agencies, less than half of the managers perceived, to at least a great extent, that employees received positive recognition for helping the agency accomplish its strategic goals. At 22 agencies, at least half of the managers reported that they were held accountable for the results of their programs to at least a great extent, but at only 1 agency did more than half of the managers report that they had the decisionmaking authority they needed to help the agency accomplish its strategic goals to a comparable extent.

Additionally, in 2000, significantly more managers overall (84 percent) reported having performance measures for the programs they were involved with than the 76 percent who reported that in 1997, when we first surveyed federal managers regarding governmentwide implementation of GPRA. However, at no more than 7 of the 28 agencies did 50 percent or more of the managers respond that they used performance information to a great or very great extent for any of the key management activities we asked about.

As I mentioned earlier, we are now moving to a more difficult but more important phase of GPRA—using results-oriented performance information on a routine basis as a part of agencies’ day-to-day management and for congressional and executive branch decisionmaking. GPRA is helping to ensure that agencies are focused squarely on results and have the capabilities to achieve those results.
GPRA is also showing itself to be an important tool in helping the Congress and the executive branch understand how the agencies’ daily activities contribute to results that benefit the American people. To build leadership commitment and help ensure that managing for results becomes the standard way of doing business, some agencies are using performance agreements to define accountability for specific goals, monitor progress, and evaluate results. The Congress has recognized the role that performance agreements can play in holding organizations and executives accountable for results. For example, in 1998, the Congress chartered the Office of Student Financial Assistance as a performance-based organization, and required it to implement performance agreements.

In our October 2000 report on agencies’ use of performance agreements, we found that although each agency developed and implemented agreements that reflected its specific organizational priorities, structure, and culture, our work identified five common emerging benefits from agencies’ use of results-oriented performance agreements (see fig. 1): strengthened alignment of results-oriented goals with daily operations; collaboration fostered across organizational boundaries; enhanced opportunities to discuss and routinely use performance information to make program improvements; a results-oriented basis for individual accountability; and continuity of program goals maintained during leadership transitions.

Performance agreements can be effective mechanisms to define accountability for specific goals and to align daily activities with results. For example, at the Veterans Health Administration (VHA), each Veterans Integrated Service Network (VISN) director’s agreement includes performance goals and specific targets that the VISN is responsible for accomplishing during the next year. The goals in the performance agreements are aligned with VHA’s, and subsequently the Department of Veterans Affairs’ (VA), overall mission and goals. A VHA official indicated that including corresponding goals in the performance agreements of VISN directors contributed to improvements in VA’s goals. For example, from fiscal years 1997 through 1999, VHA reported that its performance on the Prevention Index had improved from 69 to 81 percent. A goal requiring VISNs to produce measurable increases in the Prevention Index has been included in the directors’ performance agreements each year from 1997 through 1999.

The Office of Personnel Management recently amended its regulations for members of the Senior Executive Service, requiring agencies to appraise senior executive performance using measures that balance organizational results with customer, employee, and other perspectives in their next appraisal cycles. The regulations also place increased emphasis on using performance results as a basis for personnel decisions, such as pay, awards, and removal. We are planning to review agencies’ implementation of the amended regulations.

Program evaluations are important for assessing the contributions that programs are making to results, determining factors affecting performance, and identifying opportunities for improvement. The Department of Agriculture’s Animal and Plant Health Inspection Service (APHIS) provides an example of how program evaluations can be used to help improve performance by identifying the relationships between an agency’s efforts and results.
Specifically, APHIS used program evaluation to identify causes of a sudden outbreak of Mediterranean Fruit Flies along the Mexico-Guatemala border. The Department of Agriculture’s fiscal year 1999 performance report described the emergency program eradication activities initiated in response to the evaluation’s findings and recommendations, and linked the continuing decrease in the number of infestations during the fiscal year to these activities. However, our work has shown that agencies typically do not make full use of program evaluations as a tool for performance measurement and improvement.

After a decade of government downsizing and curtailed investment, it is becoming increasingly clear that today’s human capital strategies are not appropriately constituted to adequately meet current and emerging needs of the government and its citizens in the most efficient, effective, and economical manner possible. Attention to strategic human capital management is important because building agency employees’ skills, knowledge, and individual performance must be a cornerstone of any serious effort to maximize the performance and ensure the accountability of the federal government.

GPRA, with its explicit focus on program results, can serve as a tool for examining the programmatic implications of an agency’s strategic human capital management challenges. However, we reported in April 2001 that, overall, agencies’ fiscal year 2001 performance plans reflected different levels of attention to strategic human capital issues. When viewed collectively, we found that there is a need to increase the breadth, depth, and specificity of many related human capital goals and strategies and to better link them to the agencies’ strategic and programmatic planning. Very few of the agencies’ plans addressed succession planning to ensure reasonable continuity of leadership; performance agreements to align leaders’ performance expectations with the agency’s mission and goals; competitive compensation systems to help the agency attract, motivate, retain, and reward the people it needs; workforce deployment to support the agency’s goals and strategies; performance management systems, including pay and other meaningful incentives, to link performance to results; alignment of performance expectations with competencies to steer the workforce towards effectively pursuing the agency’s goals and strategies; and employee and labor relations grounded in a mutual effort on the strategies to achieve the agency’s goals and to resolve problems and conflicts fairly and effectively.

In a recent report, we concluded that a substantial portion of the federal workforce will become eligible to retire or will retire over the next 5 years, and that workforce planning is critical for assuring that agencies have sufficient and appropriate staff considering these expected increases in retirements. OMB recently instructed executive branch agencies and departments to submit workforce analyses by June 29, 2001. These analyses are to address areas such as the skills of the workforce necessary to accomplish the agency’s goals and objectives; the agency’s recruitment, training, and retention strategies; and the expected skill imbalances due to retirements over the next 5 years. OMB also noted that this is the initial phase of implementing the President’s initiative to have agencies restructure their workforces to streamline their organizations.
These actions indicate OMB’s growing interest in working with agencies to ensure that they have the human capital capabilities needed to achieve their strategic goals and accomplish their missions.

Major management challenges and program risks confronting agencies continue to undermine the economy, efficiency, and effectiveness of federal programs. As you know, Mr. Chairman, this past January, we updated our High-Risk Series and issued our 21-volume Performance and Accountability Series and governmentwide perspective that outlines the major management challenges and program risks that federal agencies continue to face. This series is intended to help the Congress and the administration consider the actions needed to support the transition to a more results-oriented and accountable federal government.

GPRA is a vehicle for ensuring that agencies have the internal management capabilities needed to achieve results. OMB has required that agencies’ annual performance plans include performance goals for resolving their major management problems. Such goals should be included particularly for problems whose resolution is mission-critical, or which could potentially impede achievement of performance goals. This guidance should help agencies address critical management problems to achieve their strategic goals and accomplish their missions. OMB’s attention to such issues is important because we have found that agencies are not consistently using GPRA to show how they plan to address major management issues.

A key objective of GPRA is to help the Congress, OMB, and executive agencies develop a clearer understanding of what is being achieved in relation to what is being spent. Linking planned performance with budget requests and financial reports is an essential step in building a culture of performance management. Such an alignment infuses performance concerns into budgetary deliberations, prompting agencies to reassess their performance goals and strategies and to more clearly understand the cost of performance.

For the fiscal year 2002 budget process, OMB called for agencies to prepare an integrated annual performance plan and budget and asked the agencies to report on the progress they had made in better understanding the relationship between budgetary resources and performance results and on their plans for further improvement. In the 4 years since the governmentwide implementation of GPRA, we have seen more agencies make more explicit links between their annual performance plans and budgets. Although these links have varied substantially and reflect agencies’ goals and organizational structures, the connections between performance and budgeting have become more specific and thus more informative. We have also noted progress in agencies’ ability to reflect the cost of performance in the statements of net cost presented in annual financial statements. Again, there is substantial variation in the presentation of these statements, but agencies are developing ways to better capture the cost of performance.

Virtually all of the results that the federal government strives to achieve require the concerted and coordinated efforts of two or more agencies. There are over 40 program areas across the government, related to a dozen federal mission areas, in which our work has shown that mission fragmentation and program overlap are widespread, and that crosscutting federal program efforts are not well coordinated.
To illustrate, in a November 2000 report, and in several recent testimonies, we noted that overall federal efforts to combat terrorism were fragmented. These efforts are inherently difficult to lead and manage because the policy, strategy, programs, and activities to combat terrorism cut across more than 40 agencies. As we have repeatedly stated, there needs to be a comprehensive national strategy on combating terrorism that has clearly defined outcomes. For example, the national strategy should include a goal to improve state and local response capabilities. Desired outcomes should be linked to a level of preparedness that response teams should achieve. We believe that, without this type of specificity in a national strategy, the nation will continue to miss opportunities to focus and shape the various federal programs combating terrorism.

Crosscutting program areas that are not effectively coordinated waste scarce funds, confuse and frustrate program customers, and undercut the overall effectiveness of the federal effort. GPRA offers a structured and governmentwide means for rationalizing these crosscutting efforts. The strategic, annual, and governmentwide performance planning processes under GPRA provide opportunities for agencies to work together to ensure that agency goals for crosscutting programs complement those of other agencies; program strategies are mutually reinforcing; and, as appropriate, common performance measures are used. If GPRA is effectively implemented, the governmentwide performance plan and the agencies’ annual performance plans and reports should provide the Congress with new information on agencies and programs addressing similar results. Once these programs are identified, the Congress can consider the associated policy, management, and performance implications of crosscutting programs as part of its oversight of the executive branch.

Credible performance information is essential for the Congress and the executive branch to accurately assess agencies’ progress towards achieving their goals. However, limited confidence in the credibility of performance information is one of the major continuing weaknesses with GPRA implementation. The federal government provides services in many areas through the state and local level, thus both program management and accountability responsibilities often rest with the state and local governments. In an intergovernmental environment, agencies are challenged to collect accurate, timely, and consistent national performance data because they rely on data from the states.

For example, earlier this spring, the Environmental Protection Agency identified, in its fiscal year 2000 performance report, data limitations in its Safe Drinking Water Information System due to recurring reports of discrepancies between national and state databases, as well as specific misidentifications reported by individual utilities. Also, the Department of Transportation could not show actual fiscal year 2000 performance information for measures associated with its outcome of less highway congestion. Because such data would not be available until after September 2001, Transportation used projected data. According to the department, the data were not available because they are provided by the states, and the states’ reporting cycles for these data do not match its reporting cycle for its annual performance. Discussing data credibility and related issues in performance reports can provide important contextual information to the Congress.
The Congress can use this discussion, for example, to raise questions about the problems agencies are having in collecting needed results-oriented information and the cost and data quality trade-offs associated with various collection strategies. | This testimony discusses the Government Performance and Results Act (GPRA) of 1993. During the last decade, Congress, the Office of Management and Budget, and executive agencies have worked to implement a statutory framework to improve the performance and accountability of the executive branch and to enhance executive branch and congressional decisionmaking. The core of this framework includes financial management legislation, especially GPRA. As a result of this framework, there has been substantial progress in the last few years in establishing the basic infrastructure needed to create high-performing federal organizations. The issuance of agencies' fiscal year 2000 performance reports, in addition to updated strategic plans, annual performance plans, and the governmentwide performance plans, completes two full cycles of annual performance planning and reporting under GPRA. However, much work remains before this framework is effectively implemented across the government, including transforming agencies' organizational cultures to improve decisionmaking and strengthen performance and accountability. |
NASA's Mars rover Curiosity took this self-portrait, composed of more than 50 images using its robotic arm-mounted MAHLI camera, on Feb. 3, 2013. The image shows Curiosity at the John Klein drill site. A drill hole is visible at bottom left.
NASA's Mars rover Curiosity has revealed no trace of methane, a potential sign of primitive life, on the Martian surface, contradicting past evidence of the gas spotted by spacecraft orbiting the Red Planet, researchers say.
The Mars methane discovery, or rather the lack thereof, adds new fuel to the debate over whether the gas is truly present on Mars. And not all scientists are convinced that methane is missing on Mars.
The first and only attempts to search for life on Mars were the Viking missions, launched in 1975. Those probes failed to find organic compounds in Martian soils, apparently ruling out the possibility of extant life on the Red Planet.
But in the past decade, probes orbiting Mars and telescopes on Earth have detected what appeared to be plumes of methane gas from the Red Planet. The presence on Mars of methane, a colorless, odorless, flammable gas and the simplest organic molecule, helped revive the possibility of life once existing, or even currently living, just below the planet's surface.
On Earth, much of the methane in the atmosphere is released by life-forms, such as cattle. Scientists suspect that methane survives in the Martian atmosphere for only about 300 years, so whatever is generating the gas must have done so recently.
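A rough way to see why a detection would point to a recent source is to treat that 300-year figure as a simple e-folding lifetime. The sketch below is illustrative only; the lifetime is the single number taken from this article, and the exponential-loss model is an assumption made here for the sake of the arithmetic.

```python
import math

# Assumed model (for illustration): methane is destroyed with a simple
# e-folding lifetime of about 300 years, the figure cited above. The
# surviving fraction of a one-time release after t years is exp(-t / 300).
LIFETIME_YEARS = 300.0

def surviving_fraction(t_years: float) -> float:
    """Fraction of an initial methane release still present after t_years."""
    return math.exp(-t_years / LIFETIME_YEARS)

for t in (100, 300, 1000, 3000):
    print(f"after {t:>4} years: {surviving_fraction(t):.2e} of the release remains")
# After 3,000 years less than 0.005 percent survives, so methane detected
# today would imply a source active in the geologically recent past.
```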
Now, the new findings from NASA's Curiosity rover unveiled online today (Sept. 19) in the journal Science suggest that, at most, only trace amounts of methane exist on Mars.
"Because methane production is a possible signature of biological activity, our result is disappointing for many," said study lead author Christopher Webster, an Earth and planetary scientist at NASA's Jet Propulsion Laboratory in Pasadena, Calif.
But the findings still puzzle scientists.
"It's a mystery surrounded by an enigma here," said imaging physicist Jan-Peter Muller of University College London, who is a Curiosity rover science team member but is not one of the authors of this latest Mars methane study. "This clearly contradicts what has been measured from space and from Earth."
Methane mystery on Mars
The Curiosity rover has analyzed the surface and atmosphere of Mars with an arsenal of advanced scientific instruments ever since its spectacular landing on Mars in August 2012. Measurements from the rover's Tunable Laser Spectrometer, a device specially designed for measuring the gas on Mars, indicate that the most methane that could currently exist in the Martian atmosphere is 1.3 parts per billion by volume.
"Based on earlier observations, we were expecting to land on Mars and measure background levels of methane of at least several parts per billion, but saw nothing," Webster told SPACE.com.
When the researchers first looked for methane using Curiosity, they found strong signals that they quickly realized were coming from the little methane that they had taken with them, Webster said — that is, "'Florida air' that had leaked into one chamber during the long prelaunch activities. This contamination has been removed in stages, but each attempt to look for methane from the Mars atmosphere has resulted in a non-detection."
The original plan of the researchers was to analyze the carbon isotope ratios of methane on Mars to get insight on whether that gas could be biologically produced. "However, the lack of significant methane has denied that latter experiment," Webster said.
This new upper limit is about one-sixth of previous estimates of methane levels on Mars. Webster and his colleagues suggest this severely limits the odds of methane production by microbes below the surface of Mars or from rock chemistry.
"It's an excellent piece of science," Muller told SPACE.com. "However, it's not to say that what is measured 1 meter (3 feet) above the ground is representative of the atmosphere in total — that's a matter of interpretation, not necessarily a matter of fact."
Is the methane hiding?
For instance, past measurements of methane in the atmosphere of Mars analyzed a region much higher above the surface, "so these might be very different measurements," Muller said. "It does leave a little wiggle room in terms of interpretation."
Moreover, when it comes to places on Earth where methane leaks out, scientists can detect large volumes of methane right at the plumes but practically none away from them, Muller said.
"It's difficult to know whether the null measurement from Curiosity has to do with being in the wrong place at the wrong time, or whether it is representative of Mars," Muller said.
"We are often asked if our measurements at Gale Crater represent the planet as a whole," Webster noted. "We remind others that the lifetime of methane on Mars is very long, about 300 years, compared to the short mixing time — months — for the whole atmosphere, so we feel our measurement does represent the global background value."
Curiosity experiment may hold the key
The Sample Analysis at Mars suite of instruments on Curiosity has yet to conduct a "methane enrichment" experiment that will increase the sensitivity of the rover's Tunable Laser Spectrometer even further — by a factor of at least 10, Webster said. "It's possible that we may then see methane at extremely low levels — or, alternatively, we will not, and our upper limit will go down much further," he added.
The ExoMars spacecraft, planned for launch in 2016, will study the chemical composition of Mars' atmosphere to learn more about any methane there.
"It can look at the vertical distribution of methane on Mars, see if it's lofted some way high up in the atmosphere or if it's near the ground," Muller said. "If it's near the ground, that's likely reflective of it seeping out of the ground; if it's high in the atmosphere, some exotic photochemical process may be responsible."
Webster stressed that Curiosity will continue its mission to assess the habitability of Mars.
"The Curiosity rover will continue to make its measurements of both atmosphere and rock samples to discover if organics other than methane exist on Mars," Webster said. "To that end, the jury is still out, as these important measurements are being made in a series of studies that will extend many months into the future. Stay tuned!"
PASADENA, Calif. -- Data from NASA's Curiosity rover has revealed the Martian environment lacks methane. This is a surprise to researchers because previous data reported by U.S. and international scientists indicated positive detections.
The roving laboratory performed extensive tests to search for traces of Martian methane. Whether the Martian atmosphere contains traces of the gas has been a question of high interest for years because methane could be a potential sign of life, although it also can be produced without biology.
"This important result will help direct our efforts to examine the possibility of life on Mars," said Michael Meyer, NASA's lead scientist for Mars exploration. "It reduces the probability of current methane-producing Martian microbes, but this addresses only one type of microbial metabolism. As we know, there are many types of terrestrial microbes that don't generate methane."
Curiosity analyzed samples of the Martian atmosphere for methane six times from October 2012 through June 2013 and detected none. Given the sensitivity of the instrument used, the Tunable Laser Spectrometer, and the absence of any detection, scientists calculate that the amount of methane in the Martian atmosphere today must be no more than 1.3 parts per billion. That is about one-sixth as much as some earlier estimates. Details of the findings appear in the Thursday edition of Science Express.
"It would have been exciting to find methane, but we have high confidence in our measurements, and the progress in expanding knowledge is what's really important," said the report's lead author, Chris Webster of NASA's Jet Propulsion Laboratory in Pasadena, Calif. "We measured repeatedly from Martian spring to late summer, but with no detection of methane."
Webster is the lead scientist for the spectrometer, which is part of Curiosity's Sample Analysis at Mars (SAM) laboratory. It can be tuned specifically for the detection of trace methane. The laboratory can also concentrate any methane, increasing the chance that the gas will be detected. The rover team will use this method to check for methane at concentrations well below 1 part per billion.
Methane, the most abundant hydrocarbon in our solar system, has one carbon atom bound to four hydrogen atoms in each molecule. Previous reports of localized methane concentrations up to 45 parts per billion on Mars, which sparked interest in the possibility of a biological source on Mars, were based on observations from Earth and from orbit around Mars. However, the measurements from Curiosity are not consistent with such concentrations, even if the methane had dispersed globally.
"There's no known way for methane to disappear quickly from the atmosphere," said one of the paper's co-authors, Sushil Atreya of the University of Michigan, Ann Arbor. "Methane is persistent. It would last for hundreds of years in the Martian atmosphere. Without a way to take it out of the atmosphere quicker, our measurements indicate there cannot be much methane being put into the atmosphere by any mechanism, whether biology, geology, or by ultraviolet degradation of organics delivered by the fall of meteorites or interplanetary dust particles."
Based on the highest concentration that could be present without being detected by Curiosity's measurements so far, no more than 10 to 20 tons of methane per year could be entering the Martian atmosphere, Atreya estimated. That is about 50 million times less than the rate at which methane enters Earth's atmosphere.
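That ratio can be checked with one line of arithmetic (a back-of-envelope sketch; the figure of roughly 500 million tons per year for Earth's total methane emissions is an outside assumption, not a number given in the release):

(5 \times 10^{8}\ \mathrm{tons/yr}) \div (10\ \mathrm{tons/yr}) = 5 \times 10^{7},

or about 50 million, which matches the factor Atreya cites at the low end of his 10-to-20-ton range.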
Curiosity landed inside Gale Crater on Mars in August 2012 and is investigating evidence about habitable environments there. JPL manages the mission and built the rover for NASA's Science Mission Directorate in Washington. The rover's Sample Analysis at Mars suite of instruments was developed at NASA's Goddard Space Flight Center in Greenbelt, Md., with instrument contributions from Goddard, JPL and the University of Paris in France.
For more information about the mission, visit http://www.jpl.nasa.gov/msl, http://www.nasa.gov/msl and http://mars.jpl.nasa.gov/msl. To learn more about the SAM instrument, visit http://ssed.gsfc.nasa.gov/sam/index.html.
News Media Contact
Guy Webster, 818-354-6278, Jet Propulsion Laboratory, Pasadena, Calif., guy.webster@jpl.nasa.gov
Dwayne Brown, 202-358-1726, NASA Headquarters, Washington, dwayne.c.brown@nasa.gov
2013-285
[Image: The Curiosity rover used a laser to sample freshly drilled rock dust on Mars. It has not found methane. Image courtesy NASA/Caltech/LANL/IRAP/CNES/LPGNantes/IAS/CNRS/MSSS]
NASA's Curiosity rover has failed to find significant signs of methane in the Martian atmosphere, mission scientists reported on Thursday. The new rover information suggests that earlier reports of Martian methane—once seen as a possible sign of microbial life on the planet—may have been off target.
If the Curiosity finding holds up, it would raise questions about one of the most intriguing discoveries made about Mars in recent years: that periodic and large-scale plumes of organic methane are released from beneath the planet's surface.
"We consider this to be a quite definitive conclusion, and we're very confident with it," Chris Webster, manager of the Planetary Science Instrument Office at NASA's Jet Propulsion Laboratory in Pasadena, California, said of the new rover readings reported in the journal Science.
"It puts an upper limit on the background methane on Mars that is very constraining of any scenarios for its production on the planet."
The special interest in the gas comes from the fact that some 90 percent of the methane on Earth is the product of living microbes. Signs of methane plumes in the Martian atmosphere seen by Earth-based telescopes had earlier raised hopes of detecting similar microbial life hidden under the Martian surface.
Original Discovery Defended
The lead author of that 2009 methane plume discovery report, Michael Mumma of NASA's Goddard Space Flight Center in Greenbelt, Maryland, said that he stood by his finding that substantial and localized plumes of methane were released on Mars in 2003.
He suggests that the Martian atmosphere destroys methane much more quickly than Earth's does, and that within three years of the original measurements new observations showed that half of the methane was gone.
"These findings are actually consistent with our results," Mumma said of the findings from Curiosity. "We reported that the methane releases are likely to be sporadic and that the methane is quickly eliminated in the atmosphere.
"The good news here is that the rover instrument designed to detect methane is working, and we look forward to ongoing monitoring in the future."
Other American and European researchers have also detected elevated levels of methane in the atmosphere of Mars—for example, the European Space Agency's Mars Express orbiting spacecraft found methane in 2004—but none with the specificity reported by Mumma's team.
Curiosity Counters Methane Reports
Webster said that he took the previous reports of methane on Mars "at face value," since they too were published in peer-reviewed journals. But he said the Curiosity observations were clearly different.
While methane can be produced through geological processes, on Earth it is overwhelmingly a byproduct of microbes called methanogens. Best known as denizens of the guts of creatures ranging from humans to cattle to termites, these organisms produce the marsh gas found in wetlands and landfills. But they can also live deep underground.
Because of the harsh environment on Mars—high levels of surface radiation; low temperatures; and dry, acidic conditions—scientists have generally agreed that any microbes now alive on the planet would likely inhabit the deep underground.
Mumma's team did not point to biology as the source of the methane plumes they identified, but they did raise it as a possibility along with geological processes.
Surface Measurements Will Continue
The new paper makes the case that the methane levels Curiosity detected on the ground are so low that the likelihood of a biological source is vanishingly small.
"Methane is a very well understood gas that is quite stable," Webster said. "We know how long it lasts and how it is destroyed over decades."
While it is conceivable that something exists in the Martian atmosphere that destroys methane at a much faster pace than on Earth, "we have no evidence, no observations of what it might be," he said.
Webster said the rover's instruments have not detected any methane so far, but the possibility of error put the upper limit of methane at 1.7 parts per billion. That means that the Martian atmosphere could hold at most about 10,000 tons (nine million kilograms) of methane, notes the University of Michigan's Sushil Atreya, a co-author on the new study. On Earth, the atmosphere holds about six billion tons (5.44 trillion kilograms) of methane.
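The 10,000-ton ceiling follows from the parts-per-billion limit by straightforward scaling (a back-of-envelope sketch; the Martian atmospheric mass of roughly 2.5 x 10^16 kg and the CO2-dominated mean molar mass of roughly 43 g/mol are outside assumptions, not figures from the article):

m_{\mathrm{CH_4}} \approx M_{\mathrm{atm}} \cdot x_{\mathrm{CH_4}} \cdot \frac{\mu_{\mathrm{CH_4}}}{\mu_{\mathrm{atm}}} \approx (2.5 \times 10^{16}\ \mathrm{kg})(1.7 \times 10^{-9})\left(\frac{16}{43}\right) \approx 1.6 \times 10^{7}\ \mathrm{kg},

which lands within a factor of two of the reported nine million kilograms; that is order-of-magnitude agreement, which is all an estimate of this kind can claim.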
The methane-detecting device, called the tunable laser spectrometer, is part of the rover's Sample Analysis at Mars (SAM) instrument suite. Webster said that efforts to detect methane will continue, but will likely be reduced if results continue to come back negative.
Block grants are broader in scope and offer greater state discretion in the use of funds than categorical programs; in addition, block grants allocate funding on the basis of a statutory formula. Block grants have been associated with a variety of goals, including encouraging administrative cost savings, decentralizing decisionmaking, promoting coordination, spurring innovation, and providing opportunities to target funding. However, block grants have historically accounted for only a small proportion (11 percent) of grants to states and localities, as figure 1 shows. Before OBRA created nine block grants in 1981, three block grants had been created under President Nixon for community development, social services, and employment and training. More recently, the Job Training Partnership Act was passed in 1982, and the largest block grant program in terms of funding, the Surface Transportation Program, was created in 1991. (See app. II for a more detailed discussion of block grants.)

Under OBRA, the administration of numerous federal domestic assistance programs was substantially changed by consolidating more than 50 categorical grant programs and 3 existing block grants into 9 block grants and shifting primary administrative responsibility for these programs to the states. The OBRA block grants carried with them significantly reduced federal data collection and reporting requirements as compared to the previous categorical programs, although some minimal requirements were maintained to protect federal interests. Overall, federal funding was reduced by 12 percent, or about $1 billion, but the change varied by block grant. (See app. III for a more detailed discussion of the 1981 block grants. App. VI includes a bibliography on block grants.)

States were given broad discretion under the block grants to decide what specific services and programs to provide, as long as they were directly related to the goals of the grant program. Four of the block grants were for health, three for social services, and one each for education and community development. The three block grants that were in place prior to OBRA but were modified by OBRA were (1) the Health Incentives Grant for Comprehensive Public Health, which was incorporated into the Preventive Health and Health Services Block Grant; (2) the Title XX Block Grant, which was expanded into the new Social Services Block Grant; and (3) the Community Development Block Grant, which had been in existence since 1974. Under OBRA, Community Development Block Grant funds for cities with a population under 50,000 were given to the states to allocate. In two cases (the Primary Care and Low-Income Home Energy Assistance Block Grants), a single categorical program was transformed into a block grant.

Overall federal funding for the block grants in 1982 was about 12 percent, or $1 billion, below the 1981 level for the categorical programs, as table 1 shows. However, changes in federal funding levels varied by block grant—ranging from a $159 million, or 30-percent, reduction in the Community Services Block Grant, to a $94 million, or 10-percent, increase in the Community Development Block Grant. The Social Services Block Grant was reduced by the largest amount—$591 million, representing a 20-percent reduction. The funding and other federally imposed requirements attached to the 1981 block grants were generally viewed by states as less onerous than under the prior categorical programs.
Funding requirements were used to (1) advance national objectives (for example, providing preventive health care, or more specifically, treating hypertension); (2) protect local service providers who have historically played a role in service delivery; and (3) maintain state contributions. Set-aside requirements and cost ceilings were used to ensure that certain services were provided. For example, the Preventive Health and Health Services Block Grant required that 75 percent of its funding be used for hypertension. A limitation in the Low-Income Home Energy Assistance Block Grant specified that no more than 15 percent of funds be used for residential weatherization. Pass-through requirements—notably the requirement that 90 percent of 1982 allocations under the Community Services Block Grant be awarded to community action agencies—were used to protect local service providers. The community action agencies were the primary service providers under the prior categorical program. Finally, provisions were included to maintain state involvement by preventing states from substituting federal for state funds.

Block grants carried with them significantly reduced federal data collection and reporting requirements compared with categorical programs. Under the categorical programs, states were required to comply with specific procedures for each program, whereas the block grants had only a single set of procedures, and the administration decided largely to let the states interpret the compliance provisions in the statute. Federal agencies were prohibited from imposing burdensome reporting requirements and, for many of the block grants, states were allowed to establish their own program reporting formats. However, some data collection and reporting requirements were contained in each of the block grants as a way to ensure some federal oversight in the administration of block grants. Block grants generally required the administering federal agency to report to the Congress on program activities, provide program assessment data (such as the number of clients served), or conduct compliance reviews of state program operations. Basic reporting requirements also existed for state agencies.

In general, the transition from categorical programs to block grants following the passage of OBRA was smooth, with states generally relying on existing management and service delivery systems. Although some continuity in funding was evident, states put their own imprint on the programs. States used a number of mechanisms to offset federal reductions for block grant programs. Block grant allocations were initially based on allocations under the prior categorical programs and were not sensitive to relative need, the cost of providing services, or states' ability to pay, raising concerns about their equity. Steps have been taken to improve program accountability, but problems such as noncomparable data persist. Finally, the lack of information on program activities and results may have contributed to the Congress' adding funding constraints to block grants over time. (See app. IV for a more detailed discussion of the experience operating under the 1981 block grants.)

For the most part, states were able to rely on existing management and service delivery systems. States consolidated offices or took other steps to coordinate related programs.
For example, Florida’s categorical programs had been administered by several bureaus within the state’s education department; under the Education Block Grant all the responsibilities were assigned to one bureau. State officials generally found federal requirements placed on the states under the block grants created in 1981 to be less burdensome than those of the prior categorical programs. For example, state officials in Texas said that before the Preventive Health and Health Services Block Grant, the state was required to submit 90 copies of 5 categorical grant applications. Moreover, states reported that reduced federal application and reporting requirements had a positive effect on their management of block grant programs. In addition, some state agencies were able to make more productive use of their staffs as personnel devoted less time to federal administrative requirements and more time to state-level program activities. Although states reported management efficiencies under the block grants, they also experienced increased grant management responsibilities because they had greater program flexibility and responsibility. It is not possible to measure the net effect of these changes in state responsibilities on the level of states’ administrative costs. In addition, cost changes could not be quantified due to the absence of uniform state administrative cost definitions and data, as well as a lack of comprehensive baseline data on prior categorical programs. States took a variety of approaches to help offset the 12-percent overall federal funding reduction experienced when the categorical programs were consolidated into the block grants. Together, these approaches helped states replace much of the funding reductions during the first several years. For example, some states carried over funding from the prior categorical programs. This was possible because many prior categorical grants were project grants that extended into fiscal year 1982. States also offset federal funding reductions through transfers among block grants. The 13 states transferred about $125 million among the block grants in 1982 and 1983. About $112 million, or 90 percent, entailed moving funds from the Low-Income Home Energy Assistance Block Grant to the Social Services Block Grant. The transfer option was used infrequently between other block grants, although it was allowed for most. States also used their own funds to help offset reduced federal funding, but only for certain block grants. In the vast majority of cases, the 13 states increased their contribution to health-related or the Social Services Block Grant programs—areas of long-standing state involvement—between 1981 and 1983. Initially, most federal funding to states was distributed on the basis of their share of funds received under the prior categorical programs in fiscal year 1981. Such distributions may not be sensitive to populations in need, the relative cost of services in each state, or states’ ability to fund program costs. With the exception of the Social Services Block Grant and Community Development Block Grant, all block grants included a requirement that the allocation of funds take into account what states received in previous years in order to ease the transition to block grants. For example, under the Alcohol, Drug Abuse, and Mental Health Services Block Grant, funds were distributed among the states for mental health programs in the same proportions as they were distributed in fiscal year 1981. 
For alcohol and drug abuse programs, funds had to be distributed in the same proportions as in fiscal year 1980. Today, most block grants use formulas that more heavily weigh beneficiary population and other need-based factors. For example, the Community Development Block Grant uses a formula that reflects poverty, overcrowding, age of housing, and other measures of urban deterioration. The formula for the Job Training Partnership Act Block Grant considers unemployment levels and the number of economically disadvantaged people in the state. This formula is also used to distribute funds to local service delivery areas. However, three block grants—Community Services, Maternal and Child Health Services, and Preventive Health and Health Services—are still largely tied to 1981 allocations.

Block grants significantly reduced the reporting burden imposed by the federal government on states compared with the previous categorical programs. However, states stepped in and assumed a greater role in oversight of the programs, consistent with the block grant philosophy. The 13 states we visited generally reported that they were maintaining the level of data collection effort they had made under the prior categorical grants. States tailored their efforts to better meet their own planning, budgetary, and legislative needs. Given their new management responsibilities, states sometimes increased reporting requirements for local service providers.

However, the Congress, which maintained interest in the use of federal funds, had limited information on program activities, services delivered, and clients served. This was because there were fewer federal reporting requirements, and states were given the flexibility to determine what and how to report program information. Because the information was not comparable across states, state-by-state comparisons were difficult. In response, model criteria and standardized forms were developed in 1984 to help states collect uniform data, primarily through voluntary cooperative efforts by the states. However, continued limitations in data comparability reduced the usefulness of the data in serving the needs of federal policymakers, such as for allocating federal funds, determining the magnitude of needs among individual states, and comparing program effectiveness among states.

Just as with data collection and reporting, the Congress became concerned about financial accountability in the federal financial assistance provided to state and local entities. With the passage of the 1984 Single Audit Act, the Congress promoted more uniform, entitywide audit coverage than was achieved under the previous grant-by-grant audit approach. We have found that the single audit approach has contributed to improving financial management practices in state and local governments. Systems for tracking federal funds have been improved, administrative controls over federal programs have been strengthened, and oversight of entities receiving federal funds has increased. However, the single audit process is not well designed to assist federal agencies in program oversight, according to our 1994 review. To illustrate, we found limitations in the usefulness of single audit reports. For example, reports do not have to be issued until 13 months after the end of the audit period, which many federal and state program managers found too late to be useful.
In addition, managers are not required to report on the adequacy of their internal control structures, which would assist auditors in evaluating an entity's management of its programs. Nor are the results of the audits summarized or compiled so that oversight officials and program managers can easily access and analyze them to gain programwide perspectives and identify leads for follow-on audit work or program oversight. Yet we believe that the Single Audit Act is an appropriate means of promoting financial accountability for block grants, particularly if our recommended improvements are implemented.

Even though block grants were intended to increase state flexibility, over time additional constraints were placed on these programs that had the effect of "recategorizing" them. These constraints often took the form of set-asides, requiring a minimum portion of funds to be used for a specific purpose, and cost ceilings, specifying a maximum portion of funds that could be used for other purposes. This trend reduced state flexibility. Many of these restrictions were imposed because of congressional concern that states were not adequately meeting national needs. In nine block grants, between fiscal years 1983 and 1991, the Congress added new cost ceilings and set-asides or changed existing ones 58 times. Thirteen of these amendments added new cost ceilings or set-asides to 9 of the 11 block grants we reviewed. Between fiscal years 1983 and 1991, the portion of funds restricted under set-asides increased in three block grants (Maternal and Child Health Services, Community Development, and Education). For example, set-asides for the Maternal and Child Health Services Block Grant restricted 60 percent of total funding (30 percent for preventive and primary care services for children and 30 percent for children with special health care needs).

Our research suggests that three lessons can be drawn from the experience with the 1981 block grants that would have value to the Congress as it considers creating new block grants. First, there clearly is a need to focus on accountability for results, and the Government Performance and Results Act may provide such a framework. Second, funding allocations based on distributions under prior categorical programs may be inequitable because they do not reflect need, ability to pay, and variations in the cost of providing services. Finally, states handled the transition to the 1981 block grants, but today's challenges are likely to be greater. The programs being considered for inclusion in block grants not only are much larger but also, in some cases, such as Aid to Families with Dependent Children, which provides cash assistance to the poor, are fundamentally different from those programs included in the 1981 block grants. (See app. V for a more detailed discussion of lessons learned.)

One of the principal goals of block grants is to shift responsibility for programs from the federal government to the states. This includes priority setting, program management, and, to a large extent, accountability. However, the Congress and federal agencies maintain an interest in the use and effectiveness of federal funds. Paradoxically, accountability may be critical to preserving state autonomy. When adequate program information is lacking, the 1981 block grant experience demonstrates that the Congress may become more prescriptive. For example, funding constraints were added that limited state flexibility and, in effect, "recategorized" some of the block grants.
Across the government, we have recommended a shift in the focus of federal management and accountability toward program results and outcomes, with correspondingly less emphasis on inputs and rigid adherence to rules. This focus on outcomes is particularly appropriate for block grants, given their emphasis on providing states flexibility in determining the specific problems they wish to address and the strategies they plan to employ to address those problems. The flexibility block grants allow should be reflected in the kinds of national information collected by federal agencies. The Congress and agencies will need to decide the kinds and nature of information needed to assess program results.

While the requirements of the Government Performance and Results Act (GPRA) of 1993 (P.L. 103-62) apply to all federal programs, they also offer an accountability framework for block grants. Consistent with the philosophy underlying block grants, GPRA seeks to shift the focus of federal management and accountability away from a preoccupation with inputs, such as budget and staffing levels, and adherence to rigid processes toward a greater focus on outcomes and results. GPRA is in its early stages of implementation, but by the turn of the century, annual reporting under this act is expected to fill key information needs. Among other things, GPRA requires every agency to establish indicators of performance, set annual performance goals, and report on actual performance, in comparison with these goals, each March beginning in the year 2000. Agencies are now developing strategic plans (to be submitted by Sept. 30, 1997) articulating each agency's mission, goals, and objectives preparatory to meeting these reporting requirements. In addition, although the single audit process is not well designed to assist federal agencies in program oversight, we believe that it is an appropriate means of promoting financial accountability for block grants, particularly if our recommended improvements are implemented.

The Congress will need to make tough decisions on block grant funding formulas. Public attention is frequently focused on allocation formulas because there will always be winners and losers. Formulas that better target funds share three characteristics: they consider (1) state or local need; (2) differences among states in the costs of providing services; and (3) state or local ability to contribute to program costs. To the extent possible, equitable formulas rely on current and accurate data that measure need and ability to contribute. We have reported on the need for better population data to better target funding to people who have a greater need of services. (A minimal sketch of how such a need-weighted formula works appears below.)

The experience managing the 1981 block grants contributed to increased state management expertise. Overall, states have become more capable of responding to public service demands and initiating innovations during the 1980s and 1990s. Many factors account for strengthened state government. Beginning in the 1960s and 1970s, states modernized their government structures, hired more highly trained individuals, improved their financial management practices, and diversified their revenue systems. State and local governments have also taken on an increasing share of the responsibility for financing this country's domestic expenditures. As figure 2 illustrates, state and local government expenditures have increased more rapidly than federal grants-in-aid.
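To make the arithmetic behind such formulas concrete, here is a minimal sketch of a need-weighted allocation in Python. The states, factor values, and weights are hypothetical illustrations; they do not correspond to any actual statutory formula.

def allocate(total_funds, state_data, weights):
    # Score each state: need and service cost raise the score; a state's
    # own fiscal capacity (its ability to contribute) lowers it.
    scores = {
        state: (weights["need"] * d["poverty_rate"]
                + weights["cost"] * d["cost_index"]
                + weights["capacity"] * (1.0 - d["fiscal_capacity"]))
        for state, d in state_data.items()
    }
    total_score = sum(scores.values())
    # Each state's grant is its share of the total weighted need.
    return {state: total_funds * s / total_score for state, s in scores.items()}

# Hypothetical data; every factor is normalized to the 0-1 range.
state_data = {
    "State A": {"poverty_rate": 0.20, "cost_index": 0.60, "fiscal_capacity": 0.40},
    "State B": {"poverty_rate": 0.10, "cost_index": 0.80, "fiscal_capacity": 0.75},
}
weights = {"need": 0.5, "cost": 0.3, "capacity": 0.2}

print(allocate(100_000_000, state_data, weights))

Note that a hold-harmless rule tied to prior-year shares, like the 1981 transition provisions described earlier, bypasses every factor in such a formula, which is exactly the equity concern the report raises.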
Between 1978 and 1993, state and local outlays increased dramatically, from $493 billion to $884 billion in constant 1987 dollars. Many factors contribute to state fiscal conditions, not the least of which are economic. In addition, state officials have expressed concern about unfunded mandates imposed by the federal government. Practices such as "off-budget" transactions could obscure the long-term impact of program costs in some states. Furthermore, while states' financial position has improved on the whole, the fiscal gap between wealthier and poorer states and localities remains significant, in part due to federal budget cuts. We reported in 1993 that southeastern and southwestern states, because of greater poverty rates and smaller taxable resources, generally were among the weakest states in terms of fiscal capacity.

New block grant proposals include programs that are much more expansive than the block grants created in 1981 and could present a greater challenge for the states to both implement and finance. Nearly 100 programs in five areas—cash welfare, child welfare and abuse programs, child care, food and nutrition, and social services—could be combined, accounting for more than $75 billion of a total of about $200 billion in federal grants to state and local governments. The categorical programs that were replaced by the OBRA block grants accounted for only about $6.5 billion of the $95 billion in 1981 grant outlays.

In addition, the present block grant proposals include programs that are fundamentally different from those included in the 1981 block grants. For example, Aid to Families with Dependent Children (AFDC) provides direct cash assistance to individuals. Given that states tend to cut services and raise taxes during economic downturns to comply with balanced budget requirements, these cash assistance programs could experience funding reductions at the very time when the AFDC population, and the need to assist these vulnerable populations, would likely be growing. Moreover, some experts suggest that states have not always maintained state funding for cash assistance programs in times of fiscal strain.

Because the information presented in this report was largely based on previously issued reports, we did not obtain agency comments. We are sending copies of this report to the Director, Office of Management and Budget; the Secretaries of Education, Health and Human Services, Labor, and other federal departments; and other interested parties. If you or your staff have any questions concerning this report, please call me at (202) 512-7014. Major contributors to this report are listed in appendix VII.

To review the experience with block grants, we examined our past work on the implementation of the block grants created by the Omnibus Budget Reconciliation Act of 1981 (OBRA). The work consists of a series of reports on each of the major block grants, which were released during the early to mid-1980s, as well as several summary reports of these findings released in 1985. To update this work, we reviewed our more recent work on block grants as part of our overall program oversight efforts, focusing on block grants in the health, education, and social services areas.
For example, in the early 1990s, we issued reports on the administration of the Low-Income Home Energy Assistance Block Grant (LIHEAP); drug treatment efforts under the Alcohol, Drug Abuse, and Mental Health Services Block Grant (ADMS); and oversight issues with respect to the Community Development Block Grant (CDBG). In 1992, we also looked at the distribution of funds under the Maternal and Child Health Services Block Grant (MCH). We have closely tracked the implementation of the Job Training Partnership Act (JTPA) Block Grant since its inception in 1982 and have looked at the Child Care and Development Block Grant, created in 1990, in the context of our other work on child care and early childhood programs. For a list of GAO and other key reports on block grants, refer to appendix VI.

Our review of the implementation of the 1981 block grants was done in the early to mid-1980s and was based on work in 13 states. These 13 states—California, Colorado, Florida, Iowa, Kentucky, Massachusetts, Michigan, Mississippi, New York, Pennsylvania, Texas, Vermont, and Washington—received about 46 percent of the 1983 national block grant appropriations and accounted for about 48 percent of the nation's population. The results may not be projected to the nation as a whole, although the 13 states represent a diverse cross section of the country. While our more recent oversight work updates some of our understanding of how block grants have been implemented, we have not done a systematic review of block grants themselves since these earlier reports.

Block grants are broader in scope and offer greater state flexibility in the use of funds than categorical programs. They have been associated with a variety of goals, including encouraging administrative cost savings, decentralizing decisionmaking, promoting coordination, spurring innovation, and providing opportunities to target funding. Before OBRA created nine block grants, three block grants had been created by President Nixon for community development, social services, and employment and training. More recently, the Job Training Partnership Act was passed in 1982, and the largest block grant program in terms of funding, the Surface Transportation Program, was created in 1991. Today, a total of 15 block grants are in effect, although block grants, as they have historically, represent only a small proportion (about 11 percent) of all grants-in-aid to states and localities.

Block grants are a form of federal aid authorized for a wider range of activities than categorical programs, which tend to be very specific in scope. The recipients of block grants are given greater flexibility to use funds based on their own priorities and to design programs and allocate resources as they determine to be appropriate. These recipients are typically general purpose governments at the state or local level, as opposed to service providers (for example, community action organizations). Administrative, planning, fiscal, and other types of reporting requirements are kept to the minimum necessary to ensure that national goals are being accomplished. Federal aid is distributed on the basis of a statutory formula, which narrows the discretion of federal administrators and provides a sense of fiscal certainty to recipients. Block grants have been associated over the years with a variety of goals, each of which has been realized to a greater or lesser degree depending on the specific block grant.
Block grant proponents argue that administrative cost savings would occur as a by-product of authorizing funds in a broadly defined functional area, as block grants do, rather than in several narrowly specified categories. These proponents say that block grants provide a single set of requirements instead of the numerous and possibly inconsistent planning, organization, personnel, paperwork, and other requirements of categorical programs. Decisionmaking is decentralized in that state and local recipients are encouraged to identify and rank their problems, develop plans and programs to deal with them, allocate funds among the various activities called for by these plans and programs, and account for results. At the same time, block grants can eliminate federal intradepartmental coordination problems arising from numerous categorical grants in the same functional area, as well as help state and local recipients better coordinate their activities. Still another objective of the block grant is innovation—recipients are free to use federal funds to launch activities that otherwise could not be undertaken.

By distributing aid on the basis of a statutory formula, block grants aim to better target federal funds on jurisdictions having the greatest need. However, a critical concern about block grants is whether the measures used—population, income, unemployment, housing, and overcrowding, among others—are accurate indicators of need and can be made available in a timely fashion. By contrast, a project-based categorical program would emphasize grantsmanship in the acquisition of federal aid and maximize the opportunities for federal administrators to influence grant award decisions.

Three block grants were enacted in the mid-1970s under President Nixon. These were the Comprehensive Employment and Training Act of 1974 (CETA); the Housing and Community Development Act, which instituted CDBG; and Title XX of the Social Security Act. CETA called for locally managed but federally funded job training and public sector job creation programs. CDBG replaced categorical grant and loan programs under which communities applied for funds on a case-by-case basis. For the purpose of developing viable urban communities by providing decent housing and expanding economic opportunities, the block grant allowed communities two types of grants—entitlement and discretionary, the latter for communities with populations under 50,000. Title XX replaced prior social services programs and set forth broad national goals such as helping people become economically self-supporting; protecting children and adults from abuse, neglect, and exploitation; and preventing and reducing inappropriate institutional care.

With the passage of OBRA under President Reagan, nine block grants were created. The discretionary program under CDBG became the Small Cities program. States were called on to administer this block grant program and were required to give priority to activities benefiting low- and moderate-income families. Title XX was expanded into the Social Services Block Grant (SSBG), although because the initial block grant was already state administered and very broad in scope, there were few changes as a consequence of OBRA. In addition, OBRA created block grants in the areas of health services, low-income energy assistance, substance abuse and mental health, and community services, in addition to social services and community development, as already mentioned. In 1982, the JTPA Block Grant was created.
JTPA emphasized state and local government responsibility for administering federally funded job training programs, and, unlike CETA, which it replaced, established partnerships with the private sector. Private industry councils (PICs), with a majority of business representatives, oversaw the delivery of job training programs at the local level. State job training coordinating councils also included private sector representation. The premise was that private sector leaders best understood what kinds of job training their communities needed and would bring a concern for efficiency and performance.

The Surface Transportation Program, established by the Intermodal Surface Transportation Efficiency Act of 1991, is currently the largest block grant program, with $17.5 billion awarded in fiscal year 1993. The act dramatically changed the structure of the Federal Highway Administration's programs, which had been based on federal aid by road system—primary, secondary, urban, and rural. The Surface Transportation Program allows states and localities to use funds for the construction or rehabilitation of virtually any kind of road. A portion of funds may also be used for transit projects or other nontraditional highway uses.

Other block grants created after the 1981 block grants include the 1982 Federal Transit Capital and Operating Assistance Block Grant; the 1988 Projects for Assistance in Transition from Homelessness; and the 1990 Child Care and Development Block Grant. One block grant, ADMS, was split into two block grants in 1992: the Community Mental Health Services Block Grant and the Prevention and Treatment of Substance Abuse Block Grant. Among the block grants eliminated since 1981 are the Partnership for Health, Community Youth Activity, Primary Care, Law Enforcement Assistance, and Criminal Justice Assistance Block Grants.

Today, a total of 15 block grants are in effect. These block grants, and the dollars awarded under each in fiscal year 1993, appear in table II.1. Compared with categorical grants, which number 578, there are far fewer block grants. As figure II.1 demonstrates, the largest increase in block grants occurred as a result of OBRA in 1981. Not all of the 1981 OBRA block grants were still in effect in 1990; some, such as the Primary Care Block Grant, had been eliminated. Other block grants, such as the Child Care and Development Block Grant, were created between 1980 and 1990. Block grant awards at that time totaled $22 billion, compared with total federal grants of $206 billion. About $32 billion was awarded for block grants in 1993.

[Table II.1, listing the block grants in effect and their fiscal year 1993 awards, appeared here. Entries included Federal Transit Capital and Operating Assistance; Prevention and Treatment of Substance Abuse; JTPA, Title II-A: Training Services for Disadvantaged Adults and Youth; and Payments to States for Child Care Assistance (Child Care and Development Block Grant).]

Under OBRA, the administration of numerous federal domestic assistance programs was substantially changed by consolidating more than 50 categorical grant programs into 9 block grants and shifting primary administrative responsibility for these programs to the states. Overall federal funding was reduced by 12 percent, or about $1 billion, but the change varied by block grant. The OBRA block grants carried with them significantly reduced federal funding and data collection and reporting requirements as compared to the previous categorical programs, although some minimal requirements were maintained to protect federal interests.
Under OBRA of 1981, the administration of numerous federal domestic assistance programs was substantially changed by consolidating more than 50 categorical grant programs and 3 existing block grants into 9 block grants and shifting primary administrative responsibility for these programs to the states. However, 534 categorical programs were in effect the same year this legislation passed, meaning there continued to be many more categorical programs than were subsumed under the 1981 block grants.

States were given flexibility under block grants to decide what specific services and programs to provide as long as they were directly related to the goals of the grant program. Four of the block grants were for health, three for social services, and one each for education and community development. Three existing block grants were among the 9 block grants created. As mentioned previously, these include Title XX, which was expanded into SSBG, and CDBG, for which states were given the responsibility of administering the Small Cities program. In addition, the Health Incentives Grant for Comprehensive Public Health was incorporated into the Preventive Health and Health Services Block Grant (PHHS). In two cases (Primary Care and LIHEAP), a single categorical program was transformed into a block grant.

The scope of block grants was much wider than that of the categorical grants that were consolidated to form them. For example, Chapter 2 of the Elementary and Secondary Education Act (the Education Block Grant) funded state and local activities to improve elementary and secondary education for children attending public and private schools. The 38 categorical programs that this Education Block Grant comprised included, for example, several "Emergency School Aid Act" programs, "Civil Rights Technical Assistance and Training," and the "Ethnic Heritage Studies Program."

Some block grants were wider in scope than others. For example, the scope of LIHEAP—which covers assistance to eligible households in meeting the costs of home energy—was quite narrow, having essentially a single function. In contrast, the scope of the Community Services Block Grant (CSBG) was to support efforts to "ameliorate the causes of poverty," including employment, education, housing, emergency assistance, and other services. Several block grants offered the flexibility to transfer funds to other block grants, providing states the option to widen their scope even further. For example, SSBG allowed a state to transfer up to 10 percent of its allotment to the four health-related block grants or LIHEAP. Such flexibility to transfer funds was offered in five of the block grants—SSBG, LIHEAP, ADMS, CSBG, and PHHS.

Overall federal funding for the block grants in 1982 was about 12 percent, or $1 billion, below the 1981 level for the categorical programs, as table III.1 shows. However, changes in federal funding levels varied by block grant—ranging from a $159 million, or 30-percent, reduction in the Community Services Block Grant, to a $94 million, or 10-percent, increase in CDBG. SSBG was reduced by the largest amount—$591 million, representing a 20-percent reduction. Table III.1 compares the 1981 funding levels of the categorical programs with the 1982 funding levels when these categorical programs were consolidated into block grants, including CDBG (Small Cities).

The funding requirements attached to the block grants were generally viewed by states as less onerous than under the displaced categorical programs.
However, the federal government used funding requirements to (1) advance national objectives (for example, providing preventive health care, or more specifically, treating hypertension), (2) protect local providers who have historically played a role in the delivery of services, and (3) maintain state contributions. Mechanisms contained in the block grants that protected federal interests included (1) state matching requirements, (2) maintenance of effort or nonsupplant provisions, (3) set-asides, (4) pass-through requirements, and (5) cost ceilings. An illustration of each mechanism follows; a minimal sketch of how such constraints can be checked appears after this passage.

State matching requirements were imposed to help maintain state program contributions. CDBG required that states provide matching funds equal to at least 10 percent of the block funds allocated. MCH required that each state match every four federal dollars with three state dollars. The Primary Care Block Grant required that states provide a 20-percent match of fiscal year 1983 funds and a 33-percent match of fiscal year 1984 funds. Many state governments chose not to make, or were unable to make, the match for the Primary Care Block Grant, leading to the termination of this program in 1986.

A nonsupplant provision appeared in three block grants (Education, PHHS, and ADMS), prohibiting states from using federal block grant funds to supplant state and local government spending. The purpose of this provision was to maintain state involvement by preventing states from substituting federal for state funds.

Set-asides require states and localities to use a specified minimum portion of their grant for a particular purpose. PHHS included a set-aside under which the states were required to provide at least 75 percent of fiscal year 1981 funds in fiscal year 1982 for hypertension and, for rape prevention, an allocation, based on state population, of a total of at least $3 million each fiscal year.

Under pass-through requirements, state or local governments must transfer a certain level of funds to subrecipients in order to protect local providers who have historically played a role in the delivery of services. CSBG required that states award not less than 90 percent of fiscal year 1982 funds to community action organizations or to programs or organizations serving seasonal or migrant workers.

Cost ceilings require that states and localities spend no more than a specified maximum percentage of their grant for a particular purpose or group. LIHEAP included a cost ceiling of 15 percent of funds for residential "weatherization" or other energy-related home repairs.

The 1981 block grants carried with them significantly reduced federal data collection and reporting requirements compared with categorical programs. Under the categorical programs, states had to comply with specific procedures for each program, whereas with block grants there was a single set of procedures. Federal agencies were actually prohibited from imposing "burdensome" reporting requirements. Consistent with the philosophy of minimal federal involvement, the administration decided largely to let the states interpret the compliance provisions in the statute. This meant states, for the most part, determined both the form and content of the block grant data collected and reported. However, some data collection and reporting requirements were contained in each of the block grants as a way to ensure some federal oversight in the administration of block grants.
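The sketch promised above: a minimal Python check of a hypothetical state spending plan against set-asides, cost ceilings, and a matching requirement. The percentages are illustrative only and are not the actual statutory figures for any block grant; all names here are hypothetical.

def check_plan(constraints, plan, state_match):
    # plan maps spending purposes to dollars; constraints carry the
    # minimum shares (set-asides), maximum shares (cost ceilings),
    # and the required state match as a share of the federal grant.
    violations = []
    total = sum(plan.values())
    for purpose, min_share in constraints.get("set_asides", {}).items():
        if plan.get(purpose, 0.0) / total < min_share:
            violations.append(f"set-aside: {purpose} below {min_share:.0%}")
    for purpose, max_share in constraints.get("cost_ceilings", {}).items():
        if plan.get(purpose, 0.0) / total > max_share:
            violations.append(f"cost ceiling: {purpose} above {max_share:.0%}")
    if state_match < constraints.get("min_match", 0.0) * total:
        violations.append("state match below the required share")
    return violations

constraints = {
    "set_asides": {"hypertension": 0.75},       # illustrative, not statutory
    "cost_ceilings": {"weatherization": 0.15},
    "min_match": 0.10,
}
plan = {"hypertension": 70.0, "weatherization": 20.0, "other": 10.0}  # $ millions
print(check_plan(constraints, plan, state_match=12.0))
# ['set-aside: hypertension below 75%', 'cost ceiling: weatherization above 15%']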
From federal agencies, the block grants generally required (1) a report to the Congress on program activities, (2) program assessment data such as the number of clients served, or (3) compliance reviews of state program operations. For example, ADMS required the Department of Health and Human Services (HHS) to provide agency reports to the Congress on activities and recommendations; program assessments, which included data on clients, services, and funding; and annual compliance reviews in several states.

From state agencies, the block grants generally required (1) grant applications, which included information on how the states planned to use federal funds; (2) program reports describing the actual use of federal funds; (3) fiscal expenditure reports providing a detailed picture of expenditures within certain cost categories; and (4) financial and compliance audits. For example, LIHEAP required states to provide annual descriptions of the intended use of funds, annual data on the numbers and incomes of households served, and annual audits.

In addition to these reporting requirements, states were required to involve the public. Some block grants required states to solicit public comments on their plans or reports describing the intended use of funds. Some also required that a public hearing be held on the proposed use and distribution of funds. The Education Block Grant required the state to establish an advisory committee.

Where states had operated programs, the transition to block grants was smoother because states could rely on existing management and service delivery systems. However, the transition was not as smooth for LIHEAP and CSBG because of limited prior state involvement in, or state funding of, these programs. State officials generally reported administrative efficiencies in managing block grants as compared with categorical programs, although administrative cost savings were difficult to quantify. Although states experienced a 12-percent federal funding reduction when the 1981 block grants were created, they were able to offset these reductions for the first several years through a variety of approaches, such as carrying over funding from categorical grants.

Several concerns have emerged over time. First, initial funding allocations were based on prior categorical grants in order to ease the transition to block grants. Such distributions, however, may be inequitable because they are not sensitive to populations in need, the relative cost of services in each state, or states' ability to fund program costs. Second, although the Congress has taken steps to improve both data comparability and financial accountability, problems persist in the kinds of information available for program managers to effectively oversee block grants. For example, consistent national information on program changes, services delivered, and clients served has not been available to the Congress because of the lack of standardization in block grant reporting. Third, state flexibility was reduced as funding constraints were added to block grants over time. This runs counter to an important goal of block grants, which is to increase state flexibility.

Prior program experience helped states manage the 1981 block grants. For the most part, states were able to rely on existing management and service delivery systems.
Proceeding from their role under the prior categorical programs, as well as their substantial financial commitment to certain program areas, states had a service delivery structure in place through which social services, health, and education programs were implemented. Decisions on the use of social services, health, and education block grant funds often reflected broader state goals and priorities for delivering related services. In some cases, states consolidated offices or took other steps to coordinate related programs, as with the Education Block Grant, under which 5 of 13 states merged offices. For example, Florida's categorical programs had been administered by several bureaus within the state's education department. Under the block grant, all responsibilities were assigned to one bureau.

The exceptions were LIHEAP and CSBG. The categorical programs that preceded these block grants were almost entirely federally funded. In the case of CSBG, service providers had dealt primarily with federal officials and had little contact with state administrators. With LIHEAP, planning processes were not well integrated with overall state planning processes. Officials in 11 of the 13 states we visited indicated that separate priorities were set for LIHEAP. With CSBG, not only was the planning process not well integrated, but states had to develop a new administrative structure. Five states had to assign management of CSBG to new or different offices or change the status of existing offices. States had to develop relationships with community action agencies, whose continued participation in the block grant-funded program was ensured by a 90-percent pass-through requirement.

Taking advantage of the flexibility that block grants offered them, states began to put their own imprint on the use of funds. Although some continuity in funding was evident, changes in funding patterns did emerge. Under MCH and PHHS, the states tended to provide greater support for services to children with disabilities and to reduce support for lead-based paint poisoning prevention. Under SSBG, the states usually gave a higher priority to adult and child protective services and home-based services, among other services. By contrast, they often tightened eligibility standards for day care services. Given the increased availability of federal child care funding from sources other than the SSBG, states may decide to allocate fewer SSBG dollars to child care in the future. Under LIHEAP, most of the states increased funding for weatherization and crisis assistance while decreasing expenditures for heating assistance. More recently, we found that state actions differed significantly in response to a decrease in federal LIHEAP funding of $619 million between fiscal years 1986 and 1989. Some states, for example, varied in the extent to which they offset federal funding cuts with other sources of funding. States' imprint on their use of block grant funds was not evident with ADMS, in part because of funding constraints added by the Congress over time.

State officials generally found federal requirements placed on them by the 1981 block grants less burdensome than those of the prior state-operated categorical programs. For example, state officials in Texas said that before PHHS, the state was required to submit 90 copies of 5 categorical grant applications. Moreover, states reported that reduced federal application and reporting requirements had a positive effect on their management of block grant programs.
Also, some state agencies were able to make more productive use of their staffs, as personnel devoted less time to federal administrative requirements and more time to state-level program activities. Although states realized considerable management efficiencies or improvements under the block grants, they also experienced increased grant management responsibilities through greater program discretion devolved from the federal government. It is not possible to measure the net effect of these competing forces on the level of states' administrative costs. In addition, cost changes could not be quantified because of the lack of uniform state administrative cost definitions and data, as well as the lack of comprehensive baseline data on the prior categorical programs.

States took a variety of approaches to help offset the 12-percent overall federal funding reduction experienced when the categorical programs were consolidated into the 1981 block grants. For example, some states carried over funding from the prior categorical programs. This was possible because many prior categorical grants were project grants that extended into fiscal year 1982. In the 13 states we visited, at least 57 percent of the 1981 categorical awards preceding the three health block grants were available for expenditure in 1982—the first year of block grant implementation. By 1983, however, carryover funding had declined to 7 percent of total expenditures. Carryover funding was not available under SSBG or LIHEAP because the programs preceding them had been funded on a formula basis, and funds were generally expended during the same fiscal year in which they were awarded.

States also offset federal funding reductions through transfers among block grants. The 13 states transferred about $125 million among the block grants in 1982 and 1983. About $112 million, or 90 percent, entailed moving funds from LIHEAP to SSBG. This trend was influenced by the fact that SSBG experienced the largest dollar reduction—about $591 million in 1982 alone—and did not benefit from overlapping categorical funding, while LIHEAP received increased federal appropriations. The transfer option was used infrequently between other block grants.

States also used their own funds to help offset reduced federal funding, but only for certain block grants. In the vast majority of cases, the 13 states increased their contributions to the health-related block grants or SSBG—areas of long-standing state involvement. Although such increases varied greatly from state to state, overall increases ranged from 9 percent in PHHS to 24 percent in MCH between 1981 and 1983. Overall, expenditures of state funds for programs supported with block grant moneys increased between 1981 and 1983 in 85 percent of the cases in which the states we visited had operated the health-related block grants and SSBG since their initial availability in 1982. Aside from the health-related block grants and SSBG, states did not make great use of their own revenues to offset reduced federal funds.

Together, these approaches helped states replace much of the funding reductions during the first several years. Three-fourths of the cases we examined experienced increases in total program expenditures, although once adjusted for inflation this dropped to one-fourth of all cases. Increased appropriations in 1983 through 1985 and, for 1983 only, funds made available under the Emergency Jobs Appropriations Act also helped offset these reductions. Some block grants, however, did not fare as well as others.
For example, some states did not restore funding for CSBG, which may be due in part to the limited prior state involvement under the categorical program preceding the block grant.

Initially, most federal funding to states was distributed on the basis of the state's share of funds received under the prior categorical programs in fiscal year 1981. We found that such distributions may be inequitable because they are not sensitive to populations in need, the relative cost of services in each state, or states' ability to fund program costs. With the exception of SSBG and CDBG, block grants included a requirement that the allocation of funds take into account what states received in previous years in order to ease the transition to block grants. For example, under ADMS, funds had to be distributed among the states for mental health programs in the same proportions as funds were distributed in fiscal year 1981. For alcohol and drug abuse programs, funds had to be distributed in the same proportions as in fiscal year 1980.

Today, most block grants use formulas that more heavily weigh beneficiary population and other need-related factors. For example, CDBG uses a formula that reflects poverty, overcrowding, age of housing, and other measures of urban deterioration. The formula for JTPA considers unemployment levels and the number of economically disadvantaged persons in the state; this formula is also used to distribute funds to local service delivery areas. However, three block grants—CSBG, MCH, and PHHS—remain largely tied to 1981 allocations.

The difficulties posed in developing funding formulas that allocate funds on the basis of need, the relative cost of services, and ability to pay are illustrated by ADMS and MCH. Because of concern that funds were not distributed equitably under ADMS, the Congress mandated that HHS conduct a study of alternative formulas that considered need-related factors, and in 1982 the Secretary of HHS reported on several formula options that would more fairly distribute funds. Legislative amendments in 1988, for instance, introduced the use of new indicators of need: (1) the number of people in specific age groups as proxies for populations at risk for drug abuse, alcohol abuse, and mental health disorders and (2) state total taxable resources as a proxy for a state's capacity to fund program services from its own resources. These amendments also called for phasing out the distribution of funds based on categorical grant distribution. We examined the formula in 1990, finding that its urban population factor overstates the magnitude of drug use in urban as compared with rural areas and that a provision protecting states from losing funding below their 1984 levels causes a mismatch between needs and actual funding.

Under MCH, funds continue to be distributed primarily on the basis of funds received in fiscal year 1981 under the previous categorical programs. Only when funding exceeds the amount appropriated in fiscal year 1983 are additional funds allotted in proportion to the number of persons under age 18 who are in poverty. We found that economic and demographic changes are not adequately reflected in the current allocation, resulting in problems of equity. We developed a formula that improves equity for both beneficiaries and taxpayers and includes, for example, a measure of at-risk children.
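As a rough illustration of the kind of need-based formula discussed above, the sketch below distributes a hypothetical appropriation in proportion to an at-risk population, scaled up where a state's capacity to pay (taxable resources per capita) is low. The states, figures, and the specific need measure are all invented for illustration; this is not the formula we recommended.

```python
# Illustrative only: hypothetical states, data, and need measure.
# A fixed appropriation is distributed in proportion to each state's
# at-risk population divided by its per capita taxable resources, so that
# high-need, low-capacity states receive proportionally larger shares.

appropriation = 100_000_000

states = {
    "A": {"population": 5_000_000, "at_risk": 500_000, "taxable_resources": 400e9},
    "B": {"population": 1_500_000, "at_risk": 200_000, "taxable_resources": 60e9},
    "C": {"population": 3_000_000, "at_risk": 300_000, "taxable_resources": 180e9},
}

def need_score(s):
    capacity = s["taxable_resources"] / s["population"]  # resources per capita
    return s["at_risk"] / capacity                        # need up, capacity down

scores = {name: need_score(s) for name, s in states.items()}
total = sum(scores.values())

for name in sorted(states):
    share = scores[name] / total
    print(f"State {name}: {share:.1%} -> ${appropriation * share:,.0f}")
```

In this toy example, state B receives nearly as large a share as state C despite a smaller at-risk population, because its capacity to fund services from its own resources is lower; a hold-harmless floor of the kind added to ADMS would override exactly this kind of adjustment.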
In keeping with the desire to maximize state flexibility, most block grant statutes did not prescribe how states should distribute funds to substate entities. Only the Education and the newer JTPA Block Grants prescribe how states should distribute funds to local service providers. For example, the Education Block Grant requires states to distribute funds to local educational authorities using a formula that considers relative enrollment and adjusts per pupil allocations upward to account for large enrollments of students whose education imposes a higher than average cost—generally students from high-risk groups. Although this formula was prescribed, states were given the discretion to decide which factors to consider in determining which students were high cost.

Where the law did not prescribe such distribution, some states developed their own formulas. In a 1982 study, we identified nine states that developed formulas to distribute CSBG funds to local service providers based in part on poverty; this led to reductions in funding for many community action agencies compared with what these agencies had received under the prior categorical programs. Mississippi developed a formula to distribute ADMS funds to community mental health centers based on factors such as population density and per capita income.

Block grants significantly reduced the reporting burden imposed by the federal government on states as compared with the previous categorical programs. However, states stepped in and assumed a greater role in overseeing programs, consistent with the block grant philosophy. The 13 states we visited generally reported that they were maintaining the level of data collection effort that had prevailed under the categorical grants. States tailored their efforts to better meet their own planning, budgetary, and legislative needs. Given their new responsibilities, states sometimes passed on reporting requirements to local service providers.

However, the Congress, which maintained an interest in the use of federal funds, had limited information on program activities, services delivered, and clients served. This was because there were fewer federal reporting requirements, and states were given the flexibility to determine what and how to report program information. In addition, because of the lack of comparability of information across states, state-by-state comparisons were difficult. Federal evaluation efforts were hampered by this diminished ability to assess the cumulative effects of block grants across the nation.

In response to this situation, model criteria and standardized forms for some block grants were developed in 1984 to help states collect uniform data, primarily through voluntary cooperative efforts. We examined the data collection strategies of four block grants to assess the viability of this approach. Problems identified included the following: states reported little data on the characteristics of clients served under the Education Block Grant, and LIHEAP data on households receiving assistance to weatherize their homes were not always readily accessible to state cash assistance agencies. Because of the broad range of activities under CSBG and the Education Block Grant, it is highly likely that the same clients, when served by more than one activity, were counted twice.

In 1991, we examined reporting problems under ADMS. Because HHS did not specify what information states must provide, the Congress did not have the information it needed to determine whether a set-aside for women's services had been effective in addressing the treatment needs of pregnant women and mothers with young children.
In another 1991 report, we found that state annual reports varied significantly in the information provided on drug treatment services, making comparisons or assessments of federally supported drug treatment services difficult. In addition, many states did not provide information in a uniform format when they applied for funds. Generally, the data were timely, and most officials in the six states we included in our review perceived the collection efforts to be less burdensome than reporting under categorical programs. However, the limitations in data comparability reduce the usefulness of the data for serving the needs of federal policymakers, such as allocating federal funds, determining the magnitude of needs among individual states, and comparing program effectiveness among states.

Just as with data collection and reporting, the Congress became concerned about financial accountability for the federal financial assistance provided to state and local entities. With the 1984 Single Audit Act, the Congress promoted more uniform, entitywide audit coverage than was achieved under the previous grant-by-grant audit approach. The single audit process has contributed to improving the financial management practices of the state and local officials we interviewed. These officials reported that they have, among other things, improved systems for tracking federal funds, strengthened administrative controls over federal programs, and increased oversight of entities to which they distribute federal funds.

Even though state and local financial management practices have improved, a number of issues burden the single audit process, hinder the usefulness of its reports, and limit its impact, according to our 1994 report. Specifically, criteria for determining which entities and programs are to be audited are based solely on dollar amounts. This approach has the advantage of subjecting a high percentage of federal funds to audit, but it does not necessarily focus audit resources on the programs identified as being high risk. For example, even though the Office of Management and Budget (OMB) has identified Federal Transit Administration grants as being high risk, we found in our review of single audit reports that only a small percentage of the grants to transit authorities were required to be audited.

The usefulness of single audit reports for program oversight is limited in several ways. Reports do not have to be issued until 13 months after the end of the audit period, which many federal and state program managers we surveyed found too late to be useful. Audited entities' managers are not required to report on the adequacy of their internal control structures, which would assist auditors in evaluating an entity's management of its programs. In addition, the results of the audits are not being summarized or compiled so that oversight officials and program managers can easily access and analyze them to gain programwide perspectives and identify leads for follow-on audit work or program oversight.

Even though block grants were intended to provide flexibility to the states, over time constraints were added that had the effect of "recategorizing" them. These constraints often took the form of set-asides, requiring that a minimum portion of funds be used for a specific purpose, and cost ceilings, specifying a maximum portion of funds that could be used for other purposes. This trend reduced state flexibility.
Many of these restrictions were imposed as a result of congressional concern that states were not adequately meeting national needs. In nine block grants, between fiscal years 1983 and 1991, the Congress added new cost ceilings and set-asides or changed existing ones 58 times, as figure IV.1 illustrates. Thirteen of these amendments added new cost ceilings or set-asides to 9 of the 11 block grants we reviewed. Between fiscal years 1983 and 1991, the portion of funds restricted under set-asides increased in three block grants—MCH, CDBG, and Education. For example, set-asides for MCH restricted 60 percent of total funding (30 percent for preventive and primary care services for children and 30 percent for children with special health care needs). During the same period, however, the portion of restricted funds under two block grants—ADMS and PHHS—decreased. In addition, 5 of the 11 block grants we examined permitted states to obtain waivers from some cost ceilings or set-asides if the state could justify that the funds were not needed for the purpose specified.

Three lessons can be drawn from the experience with the 1981 block grants. First, the Congress needs to focus on accountability for results in its oversight of the block grants. The Government Performance and Results Act provides a framework for this and is consistent with the goal of block grants to provide flexibility to the states. Second, funding formulas based on distributions under prior categorical programs may be inequitable because they do not reflect need, ability to pay, and variations in the cost of providing services. Third, states handled the 1981 block grants, but today's challenges are likely to be greater. The programs being considered for inclusion in block grants not only are much larger but also are fundamentally different from those included in the 1981 block grants.

One of the principal goals of block grants is to shift responsibility for programs from the federal government to the states. This includes priority setting, program management, and, to a large extent, accountability. However, the Congress and federal agencies maintain an interest in the use and effectiveness of federal funds. Paradoxically, accountability is critical to preserving state flexibility. The 1981 block grant experience demonstrates that when adequate program information is lacking, the Congress may become more prescriptive. For example, funding constraints were added that limited state flexibility and, in effect, "recategorized" some of the block grants.

We have recommended a shift in the focus of federal management and accountability toward program results and outcomes, with correspondingly less emphasis on inputs and rigid adherence to rules. This focus on outcomes over inputs is particularly appropriate for block grants, given their emphasis on providing states flexibility in determining the specific problems to address and the strategies for addressing them. The flexibility block grants allow should be reflected in the kinds of national information collected by federal agencies. The Congress and federal agencies will need to decide the kinds and nature of information needed to assess program results. While the requirements of the Government Performance and Results Act of 1993 (GPRA) (P.L. 103-62) apply to all federal programs, they also offer an accountability framework for block grants.
Consistent with the philosophy underlying block grants, GPRA seeks to shift the focus of federal management and accountability away from a preoccupation with inputs, such as budget and staffing levels, and adherence to rigid processes, toward a greater focus on outcomes and results. By the turn of the century, annual reporting under this act is expected to fill key information gaps. Among other things, GPRA requires every agency to establish indicators of performance, set annual performance goals, and report on actual performance in comparison with these goals each March, beginning in the year 2000. Agencies are now developing strategic plans (to be submitted by September 30, 1997) articulating their missions, goals, and objectives in preparation for meeting these reporting requirements.

Although GPRA is intended to focus agencies on program results, much work lies ahead. Even in the case of JTPA, in which there has been an emphasis on program outcomes, we have found that most agencies do not collect information on participant outcomes, nor do they conduct studies of program effectiveness. At the same time, there is little evidence of greater reliance on block grants since the 1981 block grants were created. Categorical programs have continued to grow, numbering almost 600 in fiscal year 1993. We have more recently reported on the problems created by the existence of numerous programs or funding streams in three program areas—youth development, employment and training, and early childhood.

Even though state and local financial management practices have improved with the Single Audit Act, a number of issues burden the single audit process, hinder the usefulness of its reports, and limit its impact. We have made recommendations to enhance the single audit process and to make it more useful for program oversight. We believe, however, that the Single Audit Act is an appropriate means of promoting financial accountability for block grants, particularly if our recommended improvements are implemented.

Even though block grants were created to give state governments more responsibility for the management of programs, the federal government will continue to be challenged by the distribution of funds among the states and localities. Public debate is likely to focus on formulas, given that there will always be winners and losers. Formulas that better target funds consider (1) state or local need, (2) differences among states in the costs of providing services, and (3) state or local ability to contribute to program costs. To the extent possible, equitable formulas rely on current and accurate data that measure need and ability to contribute. We have reported on the need for better population data to target funding to the people with the greatest need for services.

We have examined the formulas that govern the distribution of funds for MCH as well as other social service programs, such as the Older Americans Act programs. In advising on revisions to MCH, we recommended that three factors be included in the formula: the concentration of at-risk children, to help determine the level of need; the effective tax rate, to reflect states' ability to pay; and the costs of providing health services, including labor, office space, supplies, and drugs. We also suggested ways to phase in formulas to keep the disruption of services to a minimum.
During the buildup of the federal grant programs, the federal government viewed state and local governments as convenient administrative units for advancing federal objectives. State and local governments were seen as lacking the policy commitment and the administrative and financial capacity to address the domestic agenda. During the 1970s, opposition to using state and local governments as mere administrative units grew, culminating in the Reagan administration's New Federalism policy, which focused on shifting leadership of the domestic policy agenda away from the federal government and toward the states. By cutting the direct federal-to-local linkages, this policy also encouraged local governments to strengthen their relationships with their respective states.

States as a whole have become more capable of responding to public service demands and initiating innovations during the 1990s. Many factors account for strengthened state government. Beginning in the 1960s and 1970s, states modernized their government structures, hired more highly trained individuals, improved their financial management practices, and diversified their revenue systems.

State and local governments have also taken on an increasing share of the responsibility for financing the country's domestic expenditures. Changing priorities, tax cuts, and mounting deficits drove federal policymakers to cut budget and tax subsidies to both states and localities. These cuts fell more heavily on localities, however, because the federal government placed substantial importance on "safety net" programs in health and welfare that help the poor, which generally are supported by federal-state partnerships. In contrast, the federal government placed less importance on other, "nonsafety net" programs, such as infrastructure and economic development, which generally are federal-local partnerships. Growth in spending by state governments also reflects rising health care costs as well as officials' choices favoring new or expanded services and programs. As figure V.1 illustrates, state and local governments' expenditures have increased more rapidly than federal grants-in-aid, which as a result represent a smaller proportion of the total state and local expenditure burden. Between 1978 and 1993, state and local outlays increased dramatically, from $493 billion to $884 billion in constant 1987 dollars.

With their growing fiscal responsibilities, states have reevaluated their spending priorities and undertaken actions to control program growth, cut some services, and increase revenues—by raising taxes and imposing user fees, for example. The continued use of these state budget practices, combined with a growing economy, has improved the overall financial condition of state governments. Many factors contribute to state fiscal conditions, not the least of which are economic recessions, since most states do not possess the power to deficit spend. In addition, state officials have expressed concern about unfunded mandates imposed by the federal government. Practices such as "off-budget" transactions could obscure the long-term impact of program costs in some states. Moreover, while states' financial position has improved on the whole, the fiscal gap between wealthier and poorer states and localities remains significant, in part because of federal budget cuts. We reported in 1993 that southeastern and southwestern states, because of greater poverty rates and smaller taxable resources, generally were among the weakest states in terms of fiscal capacity.
New block grant proposals include programs that are much more expansive than the block grants created in 1981 and could present a greater challenge for the states to both implement and finance, particularly if such proposals are accompanied by federal funding cuts. Nearly 100 programs in five areas—cash welfare, child welfare and abuse programs, child care, food and nutrition, and social services—could be combined, accounting for more than $75 billion of a total of about $200 billion in federal grants to state and local governments. By comparison, the categorical programs that were replaced by the OBRA block grants accounted for only about $6.5 billion of the $95 billion in 1981 outlays.

In addition, these block grant proposals include programs that are fundamentally different from those included in the 1981 block grants. For example, Aid to Families with Dependent Children provides direct cash assistance to individuals. Given that states tend to cut services and raise taxes during economic downturns to comply with balanced budget requirements, these cash assistance programs could experience funding reductions, which could affect vulnerable populations at the same time that their numbers are likely to increase. In addition, some experts suggest that states have not always maintained state funding for cash assistance programs in times of fiscal strain.

The following bibliography lists selected GAO reports on block grants created by the Omnibus Budget Reconciliation Act of 1981 and subsequent reports pertaining to the implementation of block grant programs. In addition, the bibliography includes studies published by other acknowledged experts in intergovernmental relations.

Block Grants: Increases in Set-Asides and Cost Ceilings Since 1982 (GAO/HRD-92-58FS, July 27, 1992).
Block Grants: Federal-State Cooperation in Developing National Data Collection Strategies (GAO/HRD-89-2, Nov. 29, 1988).
Block Grants: Federal Data Collection Provisions (GAO/HRD-87-59FS, Feb. 24, 1987).
Block Grants: Overview of Experiences to Date and Emerging Issues (GAO/HRD-85-46, Apr. 3, 1985).
State Rather Than Federal Policies Provided the Framework for Managing Block Grants (GAO/HRD-85-36, Mar. 15, 1985).
Block Grants Brought Funding Changes and Adjustments to Program Priorities (GAO/HRD-85-33, Feb. 11, 1985).
Public Involvement in Block Grant Decisions: Multiple Opportunities Provided But Interest Groups Have Mixed Reactions to State Efforts (GAO/HRD-85-20, Dec. 28, 1984).
Federal Agencies' Block Grant Civil Rights Enforcement Efforts: A Status Report (GAO/HRD-84-82, Sept. 28, 1984).
A Summary and Comparison of the Legislative Provisions of the Block Grants Created by the 1981 Omnibus Budget Reconciliation Act (GAO/IPE-83-2, Dec. 30, 1982).
Lessons Learned From Past Block Grants: Implications For Congressional Oversight (GAO/IPE-82-8, Sept. 23, 1982).
Early Observations on Block Grant Implementation (GAO/GGD-82-79, Aug. 24, 1982).
Allocation of Funds for Block Grants With Optional Transition Periods (GAO/HRD-82-65, Mar. 26, 1982).
Maternal and Child Health: Block Grant Funds Should Be Distributed More Equitably (GAO/HRD-92-5, Apr. 2, 1992).
Maternal and Child Health Block Grant: Program Changes Emerging Under State Administration (GAO/HRD-84-35, May 7, 1984).
States Use Added Flexibility Offered by the Preventive Health and Health Services Block Grant (GAO/HRD-84-41, May 8, 1984).
States Use Several Strategies to Cope With Funding Reductions Under Social Services Block Grant (GAO/HRD-84-68, Aug. 9, 1984).
Low-Income Home Energy Assistance: States Cushioned Funding Cuts But Also Scaled Back Program Benefits (GAO/HRD-91-13, Jan. 24, 1991).
Low-Income Home Energy Assistance: A Program Overview (GAO/HRD-91-1BR, Oct. 23, 1990).
Low-Income Home Energy Assistance: Legislative Changes Could Result in Better Program Management (GAO/HRD-90-165, Sept. 7, 1990).
States Fund an Expanded Range of Activities Under Low-Income Home Energy Assistance Block Grant (GAO/HRD-84-64, June 27, 1984).
Drug Use Among Youth: No Simple Answers to Guide Prevention (GAO/HRD-94-24, Dec. 29, 1993).
ADMS Block Grant: Drug Treatment Services Could Be Improved by New Accountability Program (GAO/HRD-92-27, Oct. 17, 1991).
ADMS Block Grant: Women's Set-Aside Does Not Assure Drug Treatment for Pregnant Women (GAO/HRD-91-80, May 6, 1991).
Drug Treatment: Targeting Aid to States Using Urban Population as Indicator of Drug Use (GAO/HRD-91-17, Nov. 27, 1990).
Block Grants: Federal Set-Asides for Substance Abuse and Mental Health Services (GAO/HRD-88-17, Oct. 14, 1987).
States Have Made Few Changes in Implementing the Alcohol, Drug Abuse, and Mental Health Services Block Grant (GAO/HRD-84-52, June 6, 1984).
Community Services: Block Grant Helps Address Local Social Service Needs (GAO/HRD-86-91, May 7, 1986).
Community Services Block Grant: New State Role Brings Program and Administrative Changes (GAO/HRD-84-76, Sept. 28, 1984).
Education Block Grant: How Funds Reserved for State Efforts in California and Washington Are Used (GAO/HRD-86-94, May 13, 1986).
Education Block Grant Alters State Role and Provides Greater Local Discretion (GAO/HRD-85-18, Nov. 19, 1984).
Multiple Employment Training Programs: Major Overhaul Needed to Create a More Efficient, Customer-Driven System (GAO/T-HEHS-95-70, Feb. 6, 1995).
Multiple Employment Training Programs: Overlap Among Programs Raises Questions About Efficiency (GAO/HEHS-94-193, July 11, 1994).
Multiple Employment Training Programs: Most Federal Agencies Do Not Know If Their Programs Are Working Effectively (GAO/HEHS-94-88, Mar. 2, 1994).
Job Training Partnership Act: Racial and Gender Disparities in Services (GAO/HRD-91-148, Sept. 20, 1991).
Job Training Partnership Act: Inadequate Oversight Leaves Program Vulnerable to Waste, Abuse, and Mismanagement (GAO/HRD-91-97, July 30, 1991).
Job Training Partnership Act: Services and Outcomes for Participants With Differing Needs (GAO/HRD-89-52, June 9, 1989).
Job Training Partnership Act: Summer Youth Programs Increase Emphasis on Education (GAO/HRD-87-101BR, June 30, 1987).
Dislocated Workers: Exemplary Local Projects Under the Job Training Partnership Act (GAO/HRD-87-70BR, Apr. 8, 1987).
Dislocated Workers: Local Programs and Outcomes Under the Job Training Partnership Act (GAO/HRD-87-41, Mar. 5, 1987).
Job Training Partnership Act: Data Collection Efforts and Needs (GAO/HRD-86-69BR, Mar. 31, 1986).
The Job Training Partnership Act: An Analysis of Support Cost Limits and Participant Characteristics (GAO/HRD-86-16, Nov. 6, 1985).
Job Training Partnership Act: Initial Implementation of Program for Disadvantaged Youth and Adults (GAO/HRD-85-4, Mar. 4, 1985).
Transportation Infrastructure: Highway Program Consolidation (GAO/RCED-91-198, Aug. 16, 1991).
Transportation Infrastructure: States Benefit From Block Grant Flexibility (GAO/RCED-90-126, June 8, 1990).
20 Years of Federal Mass Transit Assistance: How Has Mass Transit Changed? (GAO/RCED-85-61, Sept. 18, 1985).
Urban Mass Transportation Administration's New Formula Grant Program: Operating Flexibility and Process Simplification (GAO/RCED-85-79, July 15, 1985).
UMTA Needs Better Assurance That Grantees Comply With Selected Federal Requirements (GAO/RCED-85-26, Feb. 19, 1985).
Community Development: Comprehensive Approaches Address Multiple Needs But Are Challenging to Implement (GAO/RCED/HEHS-95-69, Feb. 8, 1995).
Community Development: Block Grant Economic Development Activities Reflect Local Priorities (GAO/RCED-94-108, Feb. 17, 1994).
Community Development: Oversight of Block Grant Monitoring Needs Improvement (GAO/RCED-91-23, Jan. 30, 1991).
States Are Making Good Progress in Implementing the Small Cities Community Development Block Grant Program (GAO/RCED-83-186, Sept. 8, 1983).
Rental Rehabilitation With Limited Federal Involvement: Who is Doing It? At What Cost? Who Benefits? (GAO/RCED-83-148, July 11, 1983).
Block Grants for Housing: A Study of Local Experiences and Attitudes (GAO/RCED-83-21, GAO/RCED-83-21A, Dec. 13, 1982).
HUD Needs to Better Determine Extent of Community Block Grants' Lower Income Benefits (GAO/RCED-83-15, Nov. 3, 1982).
The Community Development Block Grant Program Can Be More Effective in Revitalizing the Nation's Cities (GAO/RCED-81-76, Apr. 30, 1981).
Program Evaluation: Improving the Flow of Information to the Congress (GAO/PEMD-95-1, Jan. 30, 1995).
Multiple Youth Programs (GAO/HEHS-95-60R, Jan. 19, 1995).
Early Childhood Programs: Multiple Programs and Overlapping Target Groups (GAO/HEHS-94-4FS, Oct. 31, 1994).
Single Audit: Refinements Can Improve Usefulness (GAO/AIMD-94-133, June 21, 1994).
Federal Aid: Revising Poverty Statistics Affects Fairness of Allocation Formulas (GAO/HEHS-94-165, May 20, 1994).
Older Americans Act: Funding Formula Could Better Reflect State Needs (GAO/HEHS-94-41, May 12, 1994).
Improving Government: Actions Needed to Sustain and Enhance Management Reforms (GAO/T-OCG-94-1, Jan. 27, 1994).
State and Local Finances: Some Jurisdictions Confronted by Short- and Long-Term Problems (GAO/HRD-94-1, Oct. 6, 1993).
Improving Government: Measuring Performance and Acting on Proposals for Change (GAO/T-GGD-93-14, Mar. 23, 1993).
Intergovernmental Relations: Changing Patterns in State-Local Finances (GAO/HRD-92-87FS, Mar. 31, 1992).
Federal Formula Programs: Outdated Population Data Used to Allocate Most Funds (GAO/HRD-90-145, Sept. 27, 1990).
Federal-State-Local Relations: Trends of the Past Decade and Emerging Issues (GAO/HRD-90-34, Mar. 22, 1990).
Liner, E. Blaine, ed. A Decade of Devolution: Perspectives on State-Local Relations. Washington, D.C.: The Urban Institute Press, 1989.
Nathan, Richard P., and Fred C. Doolittle. The Consequences of Cuts: The Effects of the Reagan Domestic Program on State and Local Governments. Princeton, NJ: Princeton Urban and Regional Research Center, 1983.
Nathan, Richard P., and Fred C. Doolittle. Reagan and the States. Princeton, NJ: Princeton University Press, 1987.
National Governors' Association and the National Association of State Budget Officers. The Fiscal Survey of the States. Washington, D.C.: 1994.
Palmer, John L., and Isabel V. Sawhill, eds. The Reagan Experiment. Washington, D.C.: The Urban Institute Press, 1982.
Peterson, George E., et al. The Reagan Block Grants: What Have We Learned? Washington, D.C.: The Urban Institute Press, 1986.
Peterson, Paul E., Barry G. Rabe, and Kenneth K. Wong. When Federalism Works. Washington, D.C.: The Brookings Institution, 1986.
U.S. Advisory Commission on Intergovernmental Relations. Significant Features of Fiscal Federalism. Washington, D.C.: 1994.
Sigurd R. Nilsen, Assistant Director, (202) 512-7003
Jacquelyn B. Werth, Evaluator-in-Charge, (202) 512-7070
Mark Eaton Ward, Senior Evaluator
Joel Marus, Evaluator
David D. Bellis, Senior Evaluator
John Vocino, Senior Evaluator
| GAO provided information on federal block grant programs, focusing on: (1) states' experiences operating block grants; and (2) lessons learned that could be useful to Congress as it considers new block grants. GAO found that: (1) 15 block grants with funding of $32 billion constituted a small portion of the total federal aid to states in fiscal year 1993; (2) in 1981, Congress created 9 block grants from about 50 categorical programs to broaden program flexibility among states; (3) the states' transition to block grants was generally smooth, since the states had existing management and delivery systems for most programs, but they had difficulties in two areas because these categorical programs were entirely federally funded or directed; (4) states reported administrative efficiencies with block grants, but documenting the cost savings was difficult; (5) although the states experienced a 12-percent funding reduction under the block grants, they used various approaches, such as using carry-over funds and additional state revenues, to help them offset the funding reductions; (6) problems with the 1981 block grant included inequitable initial state allocations, the lack of useful information for Congress and program managers to effectively oversee the grants, and reduced state flexibility due to Congress recategorizing some grants; (7) lessons learned from the 1981 experience should focus on accountability for results, equitable funding allocations based on state need, ability to pay, and cost of services; and (8) states could encounter greater transition difficulties with the larger, more complex programs being considered for inclusion in the new block grants. |
Technical testing of Army aviation systems, such as helicopters, and related support equipment is the responsibility of the Test and Evaluation Command (TECOM), under the U.S. Army Materiel Command. Since 1990, TECOM has maintained three principal aviation testing sites. The Aviation Technical Test Center (ATTC) at Fort Rucker is the primary site for testing aviation systems and support equipment. The Airworthiness Qualification Test Directorate at Edwards Air Force Base is the primary site for airworthiness qualification testing. Yuma Proving Ground tests aircraft armaments and sensor systems. The principal customers for TECOM's aviation testing are the aviation program managers who purchase this equipment for the Army and are currently headquartered at the Aviation and Troop Command (ATCOM), St. Louis, Missouri.

Significant reductions in funding, personnel, and test workloads in recent years, as well as projections for continued reductions as part of overall defense downsizing, drove TECOM in 1992 to examine options for reducing its testing infrastructure. Internal TECOM studies resulted in a recommendation, ultimately endorsed by the Army's Vice Chief of Staff in late 1993, to consolidate all three Army aviation technical testing organizations at Yuma Proving Ground. TECOM's proposal was reinforced by the results of a separate study sponsored by ATCOM and completed in December 1993.

The 1995 base realignment and closure (BRAC) process also looked at testing facilities from a Defense-wide perspective. That process identified options for consolidating Army testing at a single site as well as an option for eliminating greater excess testing capacity by consolidating aviation testing across service lines. Consolidation or cross-servicing of common support functions such as test and evaluation activities proved very contentious among the services in BRAC 1995 and produced limited results. None of the aviation testing options were adopted as part of the BRAC process. However, Army BRAC officials indicated to our staff in January 1995 that a consolidation of Army aviation testing was planned outside the BRAC process.

In the spring of 1995, while awaiting formal approval of the single-site consolidation at Yuma, the Army Secretary's staff updated TECOM's cost and savings analyses of two options: the single site at Yuma and a dual site at Fort Rucker and Yuma. On June 29, 1995, the Secretary tentatively approved the dual-site option because the analyses showed that greater short-term savings could be achieved with that option.

Because TECOM analysts considered only the impacts on TECOM's budget, they did not fully account for projected savings in operating costs, particularly in the personnel area. Also, some adjustments were needed in the methodology for and calculations of recurring costs and savings involving base operations, real property maintenance, and aircraft maintenance to obtain a more complete picture of relative costs and savings among the competing locations and the time required to offset implementation costs. (See app. II for a discussion of adjustments.)

Table 1 shows the Army's projected one-time implementation costs; annual recurring savings; and the time it takes, from the year consolidation begins, for savings to begin to exceed costs for each consolidation option. Table 2 shows the same information based on our adjustments to the Army's data. As table 2 shows, the adjusted data indicate higher annual recurring operating savings from each option.
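The comparisons in tables 1 through 4 rest on two pieces of arithmetic: a payback period (the year in which cumulative recurring savings first exceed one-time implementation costs) and a discounted 20-year net present value. The sketch below is illustrative only; the dollar figures are hypothetical rather than the Army's estimates, and the two discount rates are the ones discussed in the analysis that follows.

```python
# Illustrative only: hypothetical cost and savings figures.

def payback_year(one_time_cost, annual_savings):
    """First year in which cumulative savings exceed implementation costs."""
    cumulative = 0.0
    year = 0
    while cumulative <= one_time_cost:
        year += 1
        cumulative += annual_savings
    return year

def npv_20_year(one_time_cost, annual_savings, rate, years=20):
    """Present value of 20 years of annual savings, net of the up-front cost."""
    pv_savings = sum(annual_savings / (1 + rate) ** t for t in range(1, years + 1))
    return pv_savings - one_time_cost

# Hypothetical option: $30 million to implement, $4 million saved annually.
print(payback_year(30e6, 4e6))  # 8 years
for rate in (0.0275, 0.0485):   # the two discount rates at issue
    print(f"{rate:.2%}: ${npv_20_year(30e6, 4e6, rate):,.0f}")
```

A higher discount rate shrinks the present value of savings that accrue in later years, so it penalizes options with large up-front costs relative to those that break even quickly.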
Recurring savings remain the greatest from the Yuma single-site option, but the offsetting of implementation costs (including military construction) still takes longer with this option than with the other two options. Like the Army, we projected savings from the consolidation options over a 20-year period, following the approach used by DOD in its base realignment and closure process. The Army discounted long-term savings at a 2.75 percent rate—the same rate it used in conjunction with its 1995 base realignment and closure analysis. However, as noted in our report on the 1995 BRAC process, the Office of Management and Budget's currently approved discount rate of 4.85 percent would have been more appropriate. Table 3 shows the projected net present values of the savings for each option using the Army's cost data and the 2.75 percent discount rate. Table 4 shows our adjustments to the Army's data, including use of the 4.85 percent discount rate.

As tables 3 and 4 show, the Fort Rucker/Yuma dual-site option offers the Army the greatest short-term savings, which the Army considers important in today's constrained budget environment. The adjusted data show that both the Fort Rucker/Yuma dual-site and Yuma single-site options have long-term savings that are much greater than those for the Edwards/Yuma dual-site option. The 20-year cost savings for the Yuma single-site option are at least comparable to, and possibly greater than, those for the Fort Rucker/Yuma dual-site option. Under the lowest savings case shown for those two options, there would be about a $1 million difference in projected long-term savings between the two options—a difference that could be eliminated with a reduction of about $100,000 in annual operating costs for the Yuma single-site option. The costs and savings from the Yuma single-site option are based on the premise that required military construction would be completed before the consolidation. Completing the military construction after the consolidation would result in increased operating costs and reduced savings.

Neither we nor the Army included several factors in the cost and savings calculations because they were not easily quantified and because no consensus could be reached on what those costs and savings should be. According to officials at Edwards, movement of the base's testing operation to Fort Rucker could result in significant recurring costs to transport test aircraft and personnel to distant ranges, such as Yuma, to complete necessary testing operations. An Army aviation official at Edwards estimated these costs could be about $400,000 per year, based on prior tests conducted at Edwards. Another estimate from a Yuma official, based on an evaluation of future testing of the new Comanche aircraft, suggested that additional transportation costs could run as high as $1 million annually. Fort Rucker officials, while acknowledging that transportation costs could increase, believe that the actual costs would not be as high as projected.

A number of factors made it difficult for us to identify the most likely costs. First, prior tests are not necessarily indicative of future testing requirements. Second, Army testers already use multiple sites around the United States for various tests—sites other than the three discussed in this report. Third, Fort Rucker officials indicated they would likely seek testing sites closer to Fort Rucker if the consolidation plan is enacted.
Thus, while we believe that additional transportation costs are likely with the Fort Rucker/Yuma option, it is not clear what those costs would be.

Officials at Fort Rucker noted that it has a contractor-operated mini-depot repair capability to maintain the large number of aircraft associated with its aviation school. Documentation showed that the aviation test center can use this capability, particularly the electronic equipment test facility, to achieve significant savings in time and dollars over the costs of repair at a regular depot facility. Center officials estimated 1-year savings of about $1.9 million through the use of this contract. Army testing officials at Yuma and Edwards agreed that this mini-depot provides an advantage to aviation testing at Fort Rucker. However, our other reviews of depot operations have shown that the services have excess depot capacity, which increases customer costs. At the same time, to the extent that the practices of the mini-depot at Fort Rucker minimize customer costs relative to those at a regular depot, they raise the question of why depot maintenance practices should not be modified more broadly so that such savings would not be limited to Fort Rucker. These variables make it unclear what maintenance savings should be attributed to any testing consolidation involving Fort Rucker.

Officials at each of the locations identified additional benefits and synergies from being collocated with other activities at their respective locations. However, such benefits, while undoubtedly real, were more qualitative in nature and either were not easily quantified from a cost standpoint or had cost advantages insufficient to affect the relative savings associated with a particular consolidation option. Additionally, other issues, such as air space, safety, and weather, were raised by officials at selected locations to suggest the relative merits of one location over another. These also were more qualitative in nature and not easily quantified from a cost standpoint. While various Army officials and Army testing consolidation studies point to Yuma Proving Ground as providing the optimum testing environment for the Army, we found no indication that testing could not be conducted safely at the other locations.

Various studies in recent years, including DOD's 1995 base realignment and closure review, have concluded that there is excess aviation test and evaluation capacity across DOD and have noted the need for reductions in keeping with overall defense downsizing. Likewise, Congress has urged DOD to downsize and consolidate testing activities. However, the services have been unable to agree on how best to achieve such consolidations. During the 1995 BRAC process, a cross-service review group, comprising representatives of each of the services and the Office of the Secretary of Defense, identified several alternatives for the services to consider as they evaluated their bases for potential closure or realignment. One alternative was to shift Army aviation testing from Fort Rucker and Edwards Air Force Base to Yuma. Another option, with greater potential for reducing excess capacity across the services, was to consolidate the test and evaluation of air vehicles at a single DOD center at either the Navy's Patuxent River, Maryland, testing facility or Edwards Air Force Base. Consolidation of Army aviation testing at one of these sites was contingent upon agreement by the Air Force and Navy to consolidation of their aviation testing.
However, the services disagreed greatly over how to reduce their excess testing capacity, and little progress was made, particularly in the area of cross-servicing. Congress has also encouraged downsizing, consolidation, and restructuring of the services' laboratories and test and evaluation infrastructure, including that for rotary wing aircraft. Section 277 of the National Defense Authorization Act for Fiscal Year 1996 (P.L. 104-106) requires that the Secretary of Defense, acting through the Test and Evaluation Executive Agent Board of Directors, develop and report to the congressional defense committees, by May 1, 1996, a plan to consolidate and restructure DOD's laboratories and test and evaluation centers by the year 2005.

Of more immediate concern to DOD was the Army Secretary's June 1995 tentative decision to consolidate Army aviation testing at Fort Rucker/Yuma. The Director, Test Systems Engineering and Evaluation, in the Office of the Under Secretary of Defense for Acquisition and Technology, expressed concern that Fort Rucker was not part of DOD's Major Range and Test Facility Base (MRTFB). He noted in a letter to the Test and Evaluation Executive Agent Board of Directors on September 12, 1995, that there had been a long-standing understanding within the DOD testing community that any consolidation of test and evaluation activities should be at an MRTFB facility unless there was a compelling reason otherwise. He also noted the principle of selecting courses of action that are optimum for DOD rather than for a single program or service. The Army, tasked with responding on behalf of the Board, noted that personnel and budget constraints required it to take immediate action to reduce costs in many areas; the Army added that these economic circumstances, as well as the requirement to achieve short- and medium-term budgetary savings, led to its decision.

Several service officials we met with also questioned the selection of a non-MRTFB facility (Fort Rucker) in light of the future directions of aviation testing. These officials indicated that advanced helicopter systems are increasingly employing integrated electronics and that, as a result, it is important to test the electronics and airworthiness at the same time. Various officials also suggested that it is important to test the aircraft configured with its weapon systems, operating the electronic equipment, and firing the weapons. They also said it is important to do integrated testing to avoid gaps in testing programs. ATCOM's 1993 study of aviation testing noted that as weapons and electronic warfare equipment become a more integral part of the air vehicle, it is increasingly important that the whole system, not merely its parts, be tested. This suggests the importance of locating testing at an MRTFB facility.

There is a continuing need to reduce and consolidate excess infrastructure within DOD, including that which exists within the services' testing community. Also, the Army has a compelling need to consolidate its aviation testing because of reductions in its workload and continuing reductions in authorized personnel. Consequently, we recommend that the Secretary of Defense, in conjunction with the Test and Evaluation Executive Agent Board of Directors, reexamine the Army's aviation consolidation plan within the context of its congressionally mandated plan for consolidating laboratories and test and evaluation facilities.
Such a reexamination should include a timely determination of whether DOD could reduce excess testing capacity and achieve greater long-term savings Defense-wide through consolidation of Army aviation testing on a cross-service basis and, if so, determining the appropriate locations and an action plan for achieving such a consolidation.

In official oral comments, DOD generally concurred with this report and agreed to examine the Army's aviation consolidation plan within the context of its congressionally mandated plan for consolidating laboratories and test and evaluation facilities, due to Congress by May 1, 1996. However, DOD also agreed to the Army's proceeding with its current aviation consolidation plan, but only to the extent that near-term savings can be realized, and to holding in abeyance any actions, such as construction or other investments, that could be lost if far-term consolidation plans differ from the Army's short-term actions. DOD's agreement with the Army's moving forward with its current consolidation plan raises questions about the extent to which the issue of cross-servicing will be dealt with in the near term. We continue to believe that a serious examination of the potential for cross-servicing in the test and evaluation arena is warranted.

DOD also expressed the view that our adjustments to the Army's cost and savings analysis, while not affecting the outcome of our review, resulted in what it considered an inflated estimate of expected annual savings in our report. Our approach, following the methodology employed in the BRAC process, made appropriate and consistent calculations of one-time and long-term costs and savings for each location option; in doing so, we considered costs and savings both to the Army as a whole and to the test and evaluation program. We believe that this is an appropriate approach to fully account for expected costs and savings. Our scope and methodology are discussed in appendix I.

Unless you announce its contents earlier, we plan no further distribution of this report until 15 days after its issue date. At that time, we will send copies to the Chairmen, Senate Committee on Armed Services; Subcommittee on Defense, Senate Committee on Appropriations; House Committee on National Security; and Subcommittee on National Security, House Committee on Appropriations; the Director, Office of Management and Budget; and the Secretaries of Defense and the Army. Please contact me at (202) 512-8412 if you or your staff have any questions concerning this report. Major contributors to this report were Barry W. Holman, Assistant Director; Raymond C. Cooksey, Evaluator-in-Charge; and David F. Combs, Senior Evaluator.

We obtained and reviewed various studies completed by the Army's Test and Evaluation Command (TECOM) and others pertaining to the consolidation of aviation test facilities. Discussions were held with pertinent officials at Department of the Army headquarters; TECOM headquarters at Aberdeen Proving Ground, Maryland; and TECOM test sites at Yuma Proving Ground, Arizona; Edwards Air Force Base, California; and Fort Rucker, Alabama. We obtained and analyzed various data at each of these locations to assess the completeness and reasonableness of the data included in the Army's consolidation studies and the data used by the Secretary of the Army in making his June 1995 tentative decision to consolidate testing at two sites.
We did not attempt to develop budget quality data, but focused on the adequacy of data to provide relative comparisons among competing locations. Because we had concerns about the comparability of private sector wage data used by the Army in projecting aircraft maintenance costs, we obtained current Department of Labor wage rate data to provide another basis for comparing potential costs. In assessing projected costs and savings for each consolidation option, we also performed selected sensitivity analyses to determine how changes in some data elements would affect the relative costs and savings of each location. To broaden our perspective on aviation test and evaluation issues and future requirements, we held discussions with key testing officials in the Office of the Secretary of Defense, the Army's Aviation and Troop Command, the Air Force Flight Test Center at Edwards Air Force Base, and the Naval Air Warfare Center at Patuxent River, Maryland. Additionally, we reviewed pertinent documentation and analyses from the 1995 base realignment and closure process. We conducted our work between August 1995 and January 1996 in accordance with generally accepted government auditing standards.

We made adjustments to the Army's cost and savings data to obtain a more complete picture of expected savings from consolidated testing activities. We factored in savings in two areas not fully reflected in the Army's analysis. The first involved the fact that TECOM had claimed only the savings proportional to its direct funding. Approximately 40 percent of TECOM's budget involves direct funding; the remainder is derived from customer billings. We, therefore, adjusted the savings upward to more fully account for total Army savings. The second area involved savings attributable to reductions in military personnel that would occur as a direct result of the consolidations. TECOM's written organizational concept outlining plans for consolidation cited specific expected reductions in military personnel because of consolidation. It had not included these savings in its analysis; we added them in. These changes produced significant increases in projected annual recurring and long-term savings to the Army.

We made some adjustments to the Army's calculations of base operating support and real property maintenance services. Cost comparisons for this area had proven problematic for the Army, since the Aviation Technical Test Center was not billed for these services at Fort Rucker. Therefore, TECOM opted to develop average base operating and real property maintenance costs based on actual costs at Fort Rucker and Edwards Air Force Base and apply that average to all three locations. TECOM officials did not have actual cost data for Yuma. We used the Army's data for Fort Rucker and Edwards to assess the impact on base operating costs for the various consolidation options. The effect was some decrease in projected savings from a consolidation at Edwards Air Force Base and an increase in savings at Fort Rucker. Because comparable base operating cost data were not readily available for Yuma, and assuming that actual base operating costs at Yuma would likely be somewhere between those at Fort Rucker and Edwards, we applied an average cost figure to base operating costs at Yuma. The effect on the Yuma option was negligible. We recognized a concern expressed by the Edwards community that actual Army/TECOM reimbursements to the Air Force for base operations were about $400,000 less than those included in the Army's analysis.
TECOM officials countered that the Aviation Technical Test Center is not directly billed for any base operating support costs at Fort Rucker. Absent time for a more detailed assessment of base operating costs at each of the locations, we considered the Army's methodology, with adjustments as noted above, to represent a reasonable approach for comparing such costs. Nevertheless, we conducted a sensitivity analysis, reducing base operating costs at Edwards by $400,000 to determine the impact on recurring savings at Fort Rucker, and found that the relative cost advantage of each competing location remained unchanged.

In reviewing contracted aircraft maintenance cost estimates, we found broad differences in estimates of labor costs at the three locations. The Army's most recent study had used a wage differential of 5.7 percent between Fort Rucker and Yuma, based on actual experience at the two locations. However, it used a wage difference of 19 percent between Fort Rucker and Edwards Air Force Base, based on federal wage grade tables. The study assumed the work, if moved to Edwards, would be contracted out. The most recent Department of Labor wage rate data for aircraft mechanics showed the differences between Fort Rucker and Yuma and between Fort Rucker and Edwards Air Force Base to be 28.2 percent and 25.8 percent, respectively. While Department of Labor wage rates provide a uniform basis for comparison, various Army officials expressed concern that actual costs at the time a contract would be negotiated would be somewhat less than indicated by the Department of Labor data. For uniformity in comparing differences among the three locations, we chose to adjust the Army's data to reflect current Department of Labor wage differences among the three locations. However, because actual costs could likely fall somewhere between the two approaches, our adjusted data show a range of savings reflecting each approach; the low end of the range, with smaller recurring savings, is based on the Department of Labor wage differentials.

Our adjustments to the Army's data affected various cost and savings data elements. For example, the aircraft maintenance adjustments had the effect of increasing projected annual operating costs at Yuma and Edwards relative to Fort Rucker and reducing projected long-term savings at those locations. Also, while Yuma, as a single-site option, had greater savings in personnel costs, Yuma's aggregate savings were diminished by higher projected contract maintenance costs attributed to differences in area wage rates.
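To make the arithmetic of these adjustments concrete, the sketch below (in Python) applies the two corrections described above to hypothetical dollar amounts. Only the 40 percent direct-funding share and the wage differentials are taken from this report; every dollar figure, and the function itself, is an illustrative assumption rather than data from our analysis.

```python
# Illustrative sketch of the adjustments described above; figures are hypothetical
# except where noted as coming from the report.

DIRECT_FUNDING_SHARE = 0.40  # share of TECOM's budget that is directly funded (from report)

def total_army_savings(tecom_claimed_savings_millions: float) -> float:
    """Scale savings claimed against TECOM's direct funding up to the Army as a whole."""
    return tecom_claimed_savings_millions / DIRECT_FUNDING_SHARE

print(total_army_savings(4.0))  # hypothetical $4.0M claimed -> $10.0M Army-wide

# Range of projected contract maintenance costs relative to a hypothetical
# $10.0M Fort Rucker baseline, under the two wage-differential approaches.
FT_RUCKER_BASELINE = 10.0
ARMY_STUDY = {"Yuma": 0.057, "Edwards": 0.19}  # Army study differentials (from report)
DOL_DATA = {"Yuma": 0.282, "Edwards": 0.258}   # Department of Labor differentials (from report)

for site in ("Yuma", "Edwards"):
    low = FT_RUCKER_BASELINE * (1 + ARMY_STUDY[site])
    high = FT_RUCKER_BASELINE * (1 + DOL_DATA[site])
    print(f"{site}: projected maintenance cost ${low:.2f}M to ${high:.2f}M per year")
```

Because higher assumed wage differentials raise projected operating costs at Yuma and Edwards, they correspondingly shrink the projected long-term savings of those options relative to Fort Rucker, which is the effect described above.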
| Pursuant to a congressional request, GAO reviewed the Secretary of the Army's tentative decision to move aviation testing activities now at Edwards Air Force Base, California, to Fort Rucker, Alabama, and retain Yuma Proving Ground. GAO found that: (1) the Army failed to fully account for savings from consolidating its aviation testing activities; (2) consolidation at Fort Rucker and Yuma Proving Ground will result in the greatest short-term and significant long-term savings; (3) single-site consolidation at Yuma will result in the greatest long-term savings and an optimum testing environment for future testing; (4) the Department of Defense (DOD) and the services have not reached consensus about how best to consolidate and downsize test activities; (5) excess aviation testing capacity within DOD signals that consolidation is necessary to reduce this excess; and (6) the Secretary of Defense will need stronger commitment and leadership to evaluate whether these options or other options will serve DOD best. |
In November 2003, Congress authorized a new performance-based pay system for members of the SES. According to OPM's interim regulations, SES members are no longer to receive annual across-the-board or locality pay adjustments under the new pay system. Agencies are to base pay adjustments for SES members on individual performance and contributions to the agency's performance by considering such things as the unique skills, qualifications, or competencies of the individual and their significance to the agency's mission and performance, as well as the individual's current responsibilities. Specifically, the revised pay system, which took effect in January 2004, replaces the six SES pay levels with a single, open-range pay band and raises the pay cap for all SES members to $145,600 in 2004 (Level III of the Executive Schedule), with a senior executive's total compensation not to exceed $175,700 in 2004 (Level I of the Executive Schedule). If OPM certifies and OMB concurs that the agency's performance management system, as designed and applied, makes meaningful distinctions based on relative performance, an agency can raise the SES pay cap to $158,100 in 2004 (Level II of the Executive Schedule), with a senior executive's total compensation not to exceed $203,000 in 2004 (the total annual compensation payable to the Vice President); see the sketch following this passage.

In an earlier step, to help agencies hold senior executives accountable for organizational results, OPM amended regulations for senior executive performance management in October 2000. These amended regulations governing performance appraisals for senior executives require agencies to establish performance management systems that (1) hold senior executives accountable for their individual and organizational performance by linking performance management with the results-oriented goals of the Government Performance and Results Act of 1993 (GPRA); (2) evaluate senior executive performance using measures that balance organizational results with customer satisfaction, employee perspectives, and any other measures agencies decide are appropriate; and (3) use performance results as a basis for pay, awards, and other personnel decisions. Agencies were to establish these performance management systems by their 2001 senior executive performance appraisal cycles.

High-performing organizations have recognized that their performance management systems are strategic tools to help them manage on a day-to-day basis and achieve organizational goals. While Education, HHS, and NASA have undertaken important and valuable efforts to link their career senior executive performance management systems to their organizations' success, senior executives' perceptions indicate that these three agencies have opportunities to use their career senior executive performance management systems more strategically to strengthen that link. Based on our survey of career senior executives, we estimate that generally less than half of the senior executives at Education, HHS, and NASA feel that their agencies are fully using their performance management systems as a tool to manage the organization or to achieve organizational goals, as shown in figure 1. Further, effective performance management systems are not merely used for once- or twice-yearly individual expectation setting and rating processes. These systems facilitate two-way communication throughout the year so that discussions about individual and organizational performance are integrated and ongoing.
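As a minimal illustration of the 2004 SES pay-cap structure described at the start of this section: the dollar amounts and the certification condition come from OPM's interim regulations as summarized above, while the function itself is a hypothetical sketch, not an agency system.

```python
# Minimal sketch of the 2004 SES pay caps described above.
# Dollar figures are from the report; the function is hypothetical.

def ses_pay_caps_2004(system_certified: bool) -> dict:
    """Return the 2004 basic-pay and total-compensation caps for SES members.

    system_certified: True if OPM has certified, and OMB concurred, that the
    agency's performance management system makes meaningful distinctions
    based on relative performance.
    """
    if system_certified:
        return {"basic_pay_cap": 158_100,    # Level II of the Executive Schedule
                "total_comp_cap": 203_000}   # Vice President's total compensation
    return {"basic_pay_cap": 145_600,        # Level III of the Executive Schedule
            "total_comp_cap": 175_700}       # Level I of the Executive Schedule

print(ses_pay_caps_2004(True))   # certified agency
print(ses_pay_caps_2004(False))  # non-certified agency
```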
Effective performance management systems work to achieve three key objectives: (1) they strive to provide candid and constructive feedback to help individuals maximize their contribution and potential in understanding and realizing the goals and objectives of the organization, (2) they seek to provide management with the objective and fact-based information it needs to reward top performers, and (3) they provide the necessary information and documentation to deal with poor performers. In this regard as well, generally less than half of the senior executives felt that their agencies are fully using their performance management systems to achieve these objectives, as shown in figure 2.

High-performing organizations have recognized that a critical success factor in fostering a results-oriented culture is a performance management system that creates a "line of sight" showing how unit and individual performance can contribute to overall organizational goals and that helps individuals understand the connection between their daily activities and the organization's success. Further, our prior work has identified nine key practices that public sector organizations both here and abroad have used to collectively create this line of sight and develop effective performance management systems. To this end, while Education, HHS, and NASA have begun to apply the key practices to develop effective performance management systems for their career senior executives, they have opportunities to strengthen the link between their senior executives' performance and their organizations' success.

An explicit alignment of daily activities with broader results is one of the defining features of effective performance management systems in high-performing organizations. These organizations use their performance management systems to improve performance by helping individuals see the connection between their daily activities and organizational goals and encouraging individuals to focus on their roles and responsibilities to help achieve these goals. Education, HHS, and NASA require their senior executives to align individual performance with organizational goals in order to hold them accountable for organizational results. Our review of the senior executives' performance plans showed that all the plans at each agency identified individual performance expectations that aligned with organizational goals. In addition, nearly all of the senior executives at each agency reported that they communicate their performance expectations, at least to a small extent, to those whom they supervise. Cascading performance expectations in this way helps individuals understand how they contribute to organizational goals. Still, while most senior executives at each agency indicated that they see a connection between their daily activities and organizational goals to a very great or great extent, fewer of these senior executives felt that their agency's SES performance management system holds them accountable for their contributions to organizational results to a very great or great extent, as shown in figure 3.

These responses are generally consistent with our governmentwide surveys on the implementation of GPRA. In particular, governmentwide, senior executives have consistently reported that they are held accountable for results. Most recently, we reported in March 2004 that 61 percent of senior executives governmentwide feel they are held accountable for achieving their agencies' strategic goals to a very great or great extent.
To reinforce the accountability for achieving results-oriented goals, we have reported that more progress is needed in explicitly linking senior executives' performance expectations to the achievement of these goals. Setting specific levels of performance that are linked to organizational goals can help senior executives see how they directly contribute to organizational results. While most senior executives at HHS have set specific levels of performance in their individual performance plans, few senior executives in Education and NASA have identified specific levels.

HHS requires its senior executives to set measurable performance expectations in their individual performance plans that align with organizational priorities, such as the department's "One-HHS" objectives and strategic goals and their operating divisions' annual performance goals or other priorities. We found that about 80 percent of senior executives' performance plans identified specific levels of performance linked to organizational goals. For example, a senior executive in CDC set an expectation to "reduce the percentage of youth (grade 9-12) who smoke to 26.5%," which contributes to CDC's annual performance goal to "reduce cigarette smoking among youth" and the One-HHS program objective to "emphasize preventive health measures (preventing disease and illness)." However, the extent to which executives specified levels of performance varied across operating divisions. We found that approximately 63 percent of senior executives at FDA, versus 80 percent at CDC, identified specific levels of performance linked to organizational goals in their individual performance plans.

Education requires its senior executives to include critical elements, each with specific performance requirements, in their individual performance plans that align with the department's goals and priorities, including the President's Management Agenda, the Secretary's strategic plan, the Blueprint for Management Excellence, and the Culture of Accountability. We found that approximately 5 percent of senior executives' performance plans identified specific levels of performance linked to organizational goals.

NASA requires its senior executives to include seven critical elements, each with specific performance requirements that focus on the achievement of organizational goals and priorities, in their individual performance plans. For example, senior executives' performance plans include the critical element "meets and advances established agency program objectives and achieves high-quality results," and specifically "meets appropriate GPRA/NASA Strategic Plan goals and objectives." Senior executives may modify the performance requirements by making them more measurable or specific to their jobs; however, only about 23 percent of senior executives added performance requirements that are specific to their positions in their individual performance plans. Also, about 1 percent of senior executives have performance expectations with specific levels of performance that are related to organizational goals in their individual plans.

As public sector organizations shift their focus of accountability from outputs to results, they have recognized that the activities needed to achieve those results often transcend specific organizational boundaries. Consequently, a focus on collaboration, interaction, and teamwork across organizational boundaries is increasingly critical to achieving results.
In a recent GAO forum, participants agreed that delivering high performance and achieving goals requires agencies to establish partnerships with a broad range of federal, state, and local government agencies as well as other relevant organizations. High-performing organizations use their performance management systems to strengthen accountability for results, specifically by placing greater emphasis on collaboration to achieve results. While most senior executives in each agency indicated that they collaborate with others to achieve crosscutting goals, fewer of these senior executives felt that their contributions to crosscutting goals are recognized through their agency's system, as shown in figure 4. We have reported that more progress is needed to foster the necessary collaboration both within and across organizational boundaries to achieve results. As a first step, agencies could have senior executives identify in their individual performance plans specific programmatic crosscutting goals that would require collaboration to achieve. As a next step, agencies could have senior executives name the relevant internal or external organizations with which they would collaborate, to reinforce a focus across organizational boundaries.

HHS, Education, and NASA are connecting performance expectations to crosscutting goals to varying degrees. While HHS does not require executives to identify programmatic crosscutting goals specific to their individual positions in their performance plans, according to an agency official, it holds all senior executives accountable for the crosscutting One-HHS program objectives, such as increasing access to health care. We found that about 67 percent of senior executives' performance plans identified a programmatic crosscutting goal that would require collaboration to achieve, as shown in figure 5. The extent to which the senior executives' performance plans identified crosscutting goals varied across operating divisions. For example, 60 percent of the senior executives' plans in FDA identified crosscutting goals compared with 50 percent of the plans in CDC. Few HHS senior executives clearly identified the specific organization(s), either internal or external, with which they would collaborate.

Positive examples of senior executives' plans at HHS that included crosscutting goals, as well as either the internal or external organizations with which they would collaborate to achieve these goals, include the following: A senior executive in the National Institutes of Health set an expectation to work with FDA and other agencies and organizations to accelerate drug development by specifically working on the National Cancer Institute/FDA task force to eliminate barriers and speed development. A senior executive in the Substance Abuse and Mental Health Services Administration set an expectation to work collaboratively with the Office of National Drug Control Policy, the Department of Energy, and the Office of Juvenile Justice and Delinquency Prevention to increase the use of the National Registry of Effective Programs in other federal agencies to identify and provide for early intervention for persons with or who are at risk for mental health or substance abuse problems.
As required by Education, all senior executives' performance plans included the general performance expectation: "promotes collaboration and teamwork, including effective union-management relations, where appropriate." However, only about 32 percent of senior executives' performance plans identified programmatic crosscutting goals on which they would collaborate, and few executives clearly identified the specific organizations with which they would collaborate, as shown in figure 6.

As required by NASA, all senior executives' performance plans included a general expectation: "integrates One-NASA approach to problem-solving, program/project management, and decision making. Leads by example by reaching out to other organizations and NASA centers to collaborate on work products; seeks input and expertise from a broad spectrum…." This expectation is designed to contribute to achieving NASA's mission. Only about 1 percent of the executives clearly identified specific centers in NASA, and none of the executives clearly identified specific organizations outside of NASA, with which they need to collaborate to achieve crosscutting goals.

High-performing organizations provide objective performance information to executives to show progress in achieving organizational results and other priorities, such as customer satisfaction and employee perspectives, and to help them manage during the year, identify performance gaps, and pinpoint improvement opportunities. We have reported that disaggregating performance information in a useful format could help executives track their performance against organizational goals and compare their performance to that of the organization. HHS, NASA, and Education took different approaches to providing performance information to their senior executives in order to show progress toward organizational goals or priorities. While all three agencies give their components the flexibility to collect and provide performance information to their senior executives, Education also provides performance information agencywide. Of the senior executives in Education, HHS, and NASA who reported that their agency provided performance information to track their work unit's performance, generally less than half found the performance information to be useful for making improvements, available when needed, or both, to a very great or great extent, as shown in figure 7.

Education provides various types of performance information to senior executives intended to help them see how they are meeting the performance expectations in their individual performance plans. A tracking system monitors how Education is making progress toward its annual performance goals and supporting action steps. Each action step has milestones that are tracked and reported each month to the officials who developed and have "ownership" of them. Education also collects performance information on customer service and employee perspectives. For example, Education uses an automated performance feedback process, whereby customers, coworkers, and employees provide feedback at midcycle and at the end of the performance appraisal cycle on how the senior executives are meeting their individual performance expectations and on areas for improvement. HHS conducts an annual departmentwide quality of work life survey and disaggregates the survey results for executives and other employees to use.
HHS compares the overall high or low results of its survey for HHS as a whole to each operating division and to the component organizations within operating divisions. In the 2003 survey, HHS added questions about the President's Management Agenda, and each operating division had the opportunity to add specific questions focusing on issues that it believed were important to its employees, such as flexible work schedules or knowledge management issues. In addition, HHS gives operating divisions the flexibility to use other means of collecting and providing performance information, and in turn, FDA and CDC give their centers and offices the flexibility to collect and provide performance information. For example, according to a CDC official, senior executives receive frequent reports, such as the weekly situation reports, to identify priorities and help communicate these priorities among senior executives. In addition, CDC conducts a "pulse check" survey to gather feedback on employees' satisfaction with the agency and disaggregates the results to the center level. According to an agency official, CDC plans to conduct this survey quarterly.

An official at NASA indicated that while NASA does not systematically provide performance information to its senior executives on a NASA-wide basis, the centers have the flexibility to collect and provide performance information to their senior executives on programs' goals and measures and customer and employee satisfaction. This official indicated that NASA uses the results of the OPM Human Capital survey to help identify areas for improvement throughout NASA and its centers. NASA provides the OPM Human Capital survey data to its centers, showing NASA-wide and center-specific results, to help centers conduct their own analyses and identify areas for improvement and best practices.

High-performing organizations require individuals to take follow-up actions based on the performance information available to them. By requiring and tracking such follow-up actions on performance gaps, these organizations underscore the importance of holding individuals accountable for making progress on their priorities. Within Education, only the senior executives who developed the action steps for the annual performance goals are to incorporate expectations to demonstrate progress toward the goal(s) in their individual plans. HHS and NASA do not require senior executives to take follow-up actions agencywide, but they encourage their components to have executives take follow-up actions to show progress toward organizational priorities. Of the senior executives at each agency who indicated that they took follow-up actions on areas of improvement, generally less than two-thirds felt they were recognized through their performance management systems for such actions, as shown in figure 8.

At Education, senior executives who developed the action steps for Education's annual goals are required to set milestones that are tracked each month using a red, yellow, or green scoring system; assess how they are progressing toward the action steps and annual goals; and revise future milestones, if necessary. According to agency officials, these senior executives are to incorporate these action steps when developing their individual performance plans. Senior executives are also to address the feedback that their supervisors provide about their progress in achieving their performance expectations.
HHS as a whole does not require senior executives to take follow-up actions, for example, on the quality of work life survey results, or to incorporate the performance information results into their individual performance plans. In addition, FDA and CDC do not require their senior executives agencywide to take any type of follow-up action. However, FDA centers have the flexibility to require their senior executives to identify areas for improvement based on the survey results or other types of performance information. Similarly, CDC encourages its executives to incorporate relevant performance measures in their individual performance plans. For example, those senior executives within each CDC center responsible for issues identified at emerging issues meetings are required to identify when the issues will be resolved, identify the steps they will take to resolve the issues in action plans, and give updates at future meetings with the CDC Director and other senior officials.

NASA does not require its senior executives to take follow-up actions agencywide on the OPM Human Capital Survey data or other types of performance information; rather, it encourages its centers to have their executives take follow-up action on any identified areas of improvement. However, an agency official stated that NASA uses the results of the survey to identify areas for improvement and that managers are ultimately accountable for ensuring the implementation of the improvement initiatives.

High-performing organizations use competencies to examine individual contributions to organizational results. Competencies, which define the skills and supporting behaviors that individuals are expected to demonstrate to carry out their work effectively, can provide a fuller picture of individuals' performance in the different areas in which they are appraised, such as organizational results, employee perspectives, and customer satisfaction. We have reported that core competencies applied organizationwide can help reinforce behaviors and actions that support the organization's mission, goals, and values and can provide a consistent message about how employees are expected to achieve results. Education and NASA identified competencies that all senior executives in the agency must include in their performance plans, while HHS gave its operating divisions the flexibility to have senior executives identify competencies in their performance plans. Most of the senior executives in each agency indicated that the competencies they demonstrate help them contribute to the organization's goals to a very great or great extent. However, fewer of these executives felt that they were recognized through their performance management system for demonstrating these competencies, as shown in figure 9.

Education requires all senior executives to include a set of competencies in their individual performance plans. Based on our review of Education's senior executives' performance plans, we found that all of the plans, unless otherwise noted, included the following examples of competencies.

Organizational results—"sets and meets challenging objectives to achieve the Department's strategic goals."

Employee perspectives—"fosters improved workforce productivity and effective development and recognition of employees."

Customer satisfaction—"anticipates and responds to customer needs in a professional, effective, and timely manner."

NASA requires all senior executives to include a set of competency-based critical elements in their individual performance plans.
Based on our review of NASA's senior executives' performance plans, we found all of the plans included the following examples of competencies.

Organizational results—Understands the principles of the President's Management Agenda and actively applies them; capitalizes on opportunities to integrate human capital issues in planning and performance and to expand e-government and competitive sourcing; and pursues other opportunities to reduce costs and improve service to customers.

Employee perspectives—Demonstrates a commitment to equal opportunity and diversity by proactively implementing programs that positively impact the workplace and NASA's external stakeholders and through voluntary compliance with equal opportunity laws, regulations, policies, and practices.

Customer satisfaction—Provides the appropriate level of high-quality support to peers and other organizations to enable the achievement of the NASA mission; results demonstrate support of One-NASA and that stakeholder and customer issues were taken into account.

According to an HHS official, the HHS senior executive performance management system, while not competency based, is becoming more outcome oriented. However, operating divisions may require senior executives to include competencies. For example, senior executives in FDA and CDC include specific competencies related to organizational results, employee perspectives, and customer satisfaction in their individual performance plans. Based on our review of HHS's senior executives' performance plans, we found that all of the plans at FDA and CDC and nearly all across HHS identified competencies that executives are expected to demonstrate.

Organizational results—About 94 percent of HHS senior executives' plans identified a competency related to organizational results. For example, all senior executives' plans in FDA included a competency to "demonstrate prudence and the highest ethical standards when executing fiduciary responsibilities."

Employee perspectives—About 89 percent of HHS senior executives' plans identified a competency related to employee perspectives. For example, senior executives in CDC are required to include a competency to exercise leadership and management actions that reflect the principles of workforce diversity in management and operations in such areas as recruitment and staffing, employee development, and communications.

Customer satisfaction—About 61 percent of HHS senior executives' plans identified a competency related to customer satisfaction. For example, all senior executives' plans in FDA included a competency to "lead in a proactive, customer-responsive manner consistent with agency vision and values, effectively communicating program issues to external audiences."

High-performing organizations seek to create pay, incentive, and reward systems that clearly link employee knowledge, skills, and contributions to organizational results. These organizations recognize that valid, reliable, and transparent performance management systems with reasonable safeguards for employees are the precondition to such an approach. To this end, Education's, HHS's, and NASA's performance management systems are designed to appraise and reward senior executive performance based on each executive's achievement toward organizational goals as outlined in the executive's performance plan.
Overall, the majority of senior executives at each agency either strongly agreed or agreed that they are rewarded for accomplishing the performance expectations in their individual performance plans or for helping their agency accomplish its goals, as shown in figure 10. These responses are similar to those from our governmentwide survey on the implementation of GPRA, in which we reported that about half of senior executives governmentwide perceive, to a very great or great extent, that employees in their agencies received positive recognition for helping their agencies accomplish their strategic goals (GAO-04-38).

Safeguards will become especially important under the new performance-based pay system for the SES. Education, HHS, and NASA have built the following safeguards required by OPM into their senior executive performance management policies. Each agency must establish one or more performance review boards (PRB) to review senior executives' initial summary performance ratings and other relevant documents and to make written recommendations to the agency head on the performance of the agency's senior executives. The PRBs are to have members who are appointed by the agency head in a way that assures consistency, stability, and objectivity in senior executive performance appraisals. For example, HHS specifically states that each operating division will have one or more PRBs with members appointed by the operating division head. HHS's PRB members may include all types of federal executives, including noncareer appointees, military officers, and career appointees from within and outside the department. In addition, NASA's PRB is to evaluate the effectiveness of the senior executive performance management system and report its findings and any appropriate recommendations for process improvement or appropriate policy changes to NASA management. For example, the PRB completed a study on NASA's senior executive bonus system in 2003.

A senior executive may provide a written response to his or her initial summary rating that is provided to the PRB. The PRB is to consider this written response in recommending an annual summary rating to the agency head. A senior executive may also ask for a higher-level review of his or her initial summary rating before the rating is provided to the PRB. The higher-level reviewer cannot change the initial summary rating but may recommend a different rating to the PRB that is shared with the senior executive and the supervisor. Once senior executives receive their annual summary ratings, however, they may not appeal their performance appraisals and ratings.

We have observed that a safeguard for performance management systems is to ensure reasonable transparency and appropriate accountability mechanisms in connection with the performance management process. Agencies can help create transparency in the performance management process by communicating the overall results of the performance appraisal cycle to their senior executives. Education, NASA, and HHS officials indicated that they do not make the aggregate distribution of performance ratings or bonuses available to their senior executives. In addition, agencies can communicate the criteria for making performance-based pay decisions and bonus decisions to their senior executives to enhance the transparency of the system.
Generally, less than half of the senior executives at each agency reported that they understand the criteria used to award bonuses to a very great or great extent, and some senior executives at each agency reported that they do not understand the criteria at all, as shown in figure 11.

High-performing organizations make meaningful distinctions between acceptable and outstanding performance of individuals and appropriately reward those who perform at the highest level. Executive agencies can reward senior executives' performance in a number of ways: through performance awards or bonuses, nominations for Presidential Rank Awards, or other informal or honorary awards. With the new performance-based pay system for senior executives, agencies are required to have OPM certify and OMB concur that their performance management systems are making meaningful distinctions based on relative performance in order to raise the pay for their senior executives to the highest available level. Recently, the Director of OPM stated that agencies' SES performance management systems should rely on credible and rigorous performance measurements to make meaningful distinctions based on relative performance in order for the new SES performance-based pay system to succeed. She also noted that while a growing number of agencies have improved in their distributions of SES ratings and awards based on agencies' fiscal year 2002 rating and bonus data, these data also suggest that more work is needed. Specifically:

Executive branch agencies rated about 75 percent of senior executives at the highest level their systems permit in their performance ratings in fiscal year 2002, the most current year for which data are available from OPM—a decrease from about 84 percent the previous fiscal year.

When disaggregating the data by rating system, approximately 69 percent of senior executives received the highest rating under five-level systems in fiscal year 2002 compared to about 76 percent in fiscal year 2001, and almost 100 percent of senior executives received the highest rating under three-level systems in both fiscal years 2001 and 2002.

Approximately 49 percent of senior executives received bonuses in fiscal year 2002 compared to about 52 percent the previous fiscal year.

At HHS, about 86 percent of senior executives received the highest possible rating in fiscal year 2003 compared with approximately 99 percent in fiscal year 2002. While HHS gives its operating divisions the flexibility to appraise their senior executives' performance using a three-, four-, or five-level performance management system, most of HHS's operating divisions, including FDA and CDC, rate their senior executives under a three-level system. Almost all of HHS's senior executives rated under a three-level system received the highest rating of "fully successful" in fiscal years 2002 and 2003. Approximately 23 percent of senior executives rated under a five-level system received the highest rating of "outstanding" in fiscal year 2003 compared with approximately 94 percent in fiscal year 2002. According to its Chief Human Capital Officer, HHS recognizes that its rating systems do not always allow for distinctions in senior executives' performance, and it has chosen to focus on the bonus process as the method for reflecting performance distinctions.
Senior executive bonuses are to provide a mechanism for distinguishing and rewarding the contributions of top performers, specifically for circumstances in which the individual's work has substantially improved public health and safety or citizen services. Since the fiscal year 2001 performance appraisal cycle, HHS has generally restricted bonuses to no more than one-third of each operating division's senior executives. HHS, including FDA and CDC, is making progress toward distinguishing senior executive performance through bonuses, as compared to the percentage of senior executives governmentwide who received bonuses, as shown in table 1. Additionally, HHS generally limited individual bonus amounts to no more than 12 percent of base pay for top performers in fiscal year 2003. Most of the senior executives who received a bonus were awarded less than a 10 percent bonus in fiscal year 2003, as shown in table 2. Lastly, senior executive responses to our survey indicated that they did not feel HHS was making meaningful distinctions in ratings or bonuses to a very great or great extent. Approximately 31 percent of senior executives felt that their agency makes meaningful distinctions in performance using ratings; approximately 38 percent felt that their agency makes meaningful distinctions in performance using bonuses.

NASA uses a five-level system to appraise senior executive performance. More than three-fourths of the senior executives received the highest rating of "outstanding" for the 2003 performance appraisal cycle (July 2002–June 2003), as shown in figure 12. The distribution of senior executives across the rating levels was similar to the previous performance appraisal cycle. NASA's senior executive bonus recommendations are to be based solely on exceptional performance as specified and documented in senior executives' performance plans. While NASA established a fixed allocation of bonuses for its organizations based on the total number of senior executives, an organization can request an increase to its allocation. Sixty percent of eligible senior executives within an organization's bonus allocation may be recommended for bonuses larger than 5 percent of base pay. For the 2003 appraisal cycle, the percentage of senior executives who received bonuses increased from the previous year, as shown in table 3. An agency official indicated that this increase resulted from a study NASA's PRB conducted on the senior executive bonus system. The PRB reviewed NASA's bonus system in the context of OPM's data on senior executive bonuses across federal agencies and recommended that NASA revise its bonus system to move NASA into the upper half of federal agencies in both the number and the average amount of bonuses given. According to the PRB study, NASA made this change to meet its management's need to reward more senior executives while recognizing that bonus decisions must be based on performance.

The Space Shuttle Columbia accident occurred during NASA's 2003 appraisal cycle. We reviewed the aggregate senior executive performance rating and bonus data for that cycle; however, we did not analyze individual senior executives' performance appraisals or bonus recommendations or determine whether those who received ratings of outstanding, bonuses, or both were involved with the Columbia mission.
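To illustrate how these bonus limits interact, the sketch below applies the rules described above to hypothetical numbers. Only the one-third, 12 percent, 60 percent, and 5 percent figures come from this report; the headcounts, salary, and function names are illustrative assumptions, not agency policy mechanisms.

```python
# Rough sketch of the HHS and NASA bonus limits described above,
# applied to hypothetical headcounts and pay; only the percentages are from the report.

def hhs_max_bonus_recipients(division_executives: int) -> int:
    """HHS: generally no more than one-third of an operating division's
    senior executives may receive bonuses."""
    return division_executives // 3

def hhs_max_individual_bonus(base_pay: float) -> float:
    """HHS: individual bonuses generally limited to 12 percent of base pay."""
    return 0.12 * base_pay

def nasa_large_bonus_slots(bonus_allocation: int) -> int:
    """NASA: up to 60 percent of executives within an organization's bonus
    allocation may be recommended for bonuses above 5 percent of base pay."""
    return int(0.60 * bonus_allocation)

# Hypothetical: an HHS division with 45 executives, a $140,000 base pay,
# and a NASA organization allocated 10 bonuses.
print(hhs_max_bonus_recipients(45))       # 15 executives at most
print(hhs_max_individual_bonus(140_000))  # $16,800 cap
print(nasa_large_bonus_slots(10))         # 6 may exceed 5 percent of pay
```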
Lastly, senior executive responses to our survey indicated that about half of the executives felt NASA was making meaningful distinctions in ratings or bonuses to a very great or great extent. Approximately 46 percent of senior executives felt that their agency makes meaningful distinctions in performance using ratings; approximately 48 percent felt that their agency makes meaningful distinctions in performance using bonuses.

Education uses a three-level rating system. About 98 percent of senior executives received the highest rating of "successful" in the 2003 performance appraisal cycle (July 2002–June 2003), a slight decrease from the previous performance appraisal cycle, when all senior executives received this rating. Education's senior executive bonus recommendations are to be based on senior executives' demonstrated results and accomplishments toward the department's strategic goals and organizational priorities. About 63 percent of senior executives received bonuses in the 2003 appraisal cycle, compared to approximately 60 percent in the previous appraisal cycle. The majority of the senior executives who received bonuses were awarded 5 percent bonuses in the 2003 appraisal cycle, as shown in table 4. Lastly, senior executive responses to our survey indicated that they did not feel Education was making meaningful distinctions in ratings or bonuses to a very great or great extent. Specifically, about 10 percent of senior executives felt that their agency makes meaningful distinctions in performance using ratings; about 33 percent felt that their agency makes meaningful distinctions in performance using bonuses.

High-performing organizations have found that actively involving employees and stakeholders when developing or refining results-oriented performance management systems helps improve employees' confidence and belief in the fairness of the system and increase their understanding and ownership of organizational goals and objectives. Further, to maximize the effectiveness of their performance management systems, these organizations recognize that they must conduct frequent training for staff members at all levels of the organization. Generally, senior executives at Education, HHS, and NASA became involved in refining the performance management system or participated in formal training on those systems when provided the opportunity. Of the senior executives at each agency who reported that they have been given the opportunity to be involved in refining their agency's performance management system to at least a small extent, most said they took advantage of this opportunity, as shown in figure 13. Similarly, while less than three-fourths of the senior executives at each agency said formal training on their agency's performance management system is available to them, most of these senior executives said they participated in the training, as shown in figure 14. At all three agencies, a proportion of senior executives reported that they had no opportunity to become involved with, or be trained on, their performance management systems. At HHS, about 38 percent of senior executives said they did not have the opportunity to be involved in refining their agency's system, while about 24 percent of senior executives said formal training on their agency's system was not available to them, as shown in figure 15.
According to an HHS official, the Office of the Secretary developed the One-HHS objectives, the basis of its senior executive performance management system, with input from the leadership of all HHS staff offices and operating divisions. This official indicated that HHS conducted extensive interviews to develop and validate the goals. All career senior executives were briefed on the goals and offered training on the development of outcome-oriented individual performance objectives derived from those goals. The agency official said that the operating divisions had the flexibility to involve their senior executives in customizing the new individual performance plans for their operating divisions. According to HHS's guidance, the operating divisions are to develop and provide training on the performance management system to their senior executives in areas such as developing performance plans, conducting progress reviews, writing appraisals, and using appraisals as a key factor in making other management decisions. For example, according to an FDA official, the Human Resources Director briefed all of the senior executive directors on how to cascade the FDA Commissioner's performance plan into their fiscal year 2002 individual plans and incorporate the One-HHS objectives. FDA does not provide regular training to the senior executives on the performance management system; rather, the training is provided as needed and usually on a one-on-one basis when a new senior executive joins FDA. The agency official also stated that because few senior executives are joining the agency, regular training on the system is not as necessary.

About half of NASA's senior executives reported that they did not have the opportunity to be involved in refining their agency's system, while about 21 percent of senior executives said formal training on their agency's system was not available to them, as shown in figure 16. According to an agency official, the NASA Administrator worked with the top senior executives to develop a common set of senior executive critical elements and performance requirements that reflect his priorities and are central to ensuring a healthy and effective organization. The Administrator then instructed the senior executives to review the common critical elements and incorporate them into their individual performance plans. When incorporating the elements into their individual plans, the senior executives have the opportunity to modify the performance requirements for each element to more clearly reflect their roles and responsibilities. According to NASA's guidance, the centers and offices are to provide training and information on the performance management system to their senior executives. In addition, an official at NASA said that most centers and offices provide training to new senior executives on aspects of the performance management system, such as developing individual performance plans. Also, NASA provides training courses for all employees on specific aspects of performance management, such as writing performance appraisals and self-assessments.

Approximately half of Education's senior executives reported that they did not have the opportunity to be involved in refining their agency's system, while about one-fourth of the senior executives reported that formal training on their agency's system was not available to them, as shown in figure 17.
An official at Education indicated that senior executives have the opportunity to comment on changes proposed to the performance management system by the Executive Resources Board. In addition, according to Education's guidance, training for all senior executives on the performance management system is to be provided periodically. An agency official said that Education provided training for all managers, including senior executives, on how to conduct performance appraisals and write performance expectations near the end of the performance appraisal cycle last year.

The experience of successful cultural transformations in large public and private organizations suggests that it can often take 5 to 7 years until such initiatives are fully implemented and cultures are transformed in a substantial manner. We reported that among the key practices consistently found at the center of successful transformations is using the performance management system to define responsibility and assure accountability for change. The average tenure of political leadership can have critical implications for the success of those initiatives. Specifically, in the federal government the frequent turnover of the political leadership has often made it difficult to obtain the sustained and inspired attention required to make needed changes. We reported that the average tenure of political appointees governmentwide for the period 1990–2001 was just under 3 years. Performance management systems help provide continuity during these times of transition by maintaining a consistent focus on a set of broad programmatic priorities. Individual performance plans can be used to clearly and concisely outline top leadership priorities during a given year and thereby serve as a convenient vehicle for new leadership to identify and maintain focus on the most pressing issues confronting the organization as it transforms. We have observed that a specific performance expectation in senior executives' performance plans to lead and facilitate change during transitions could be critical as organizations transform themselves to succeed in an environment that is more results oriented, less hierarchical, and more integrated. While many senior executives at each agency reported that their agency's senior executive performance management system helped to maintain a consistent focus on organizational goals during transitions, the majority of senior executives felt this occurred to a moderate extent or less, as shown in figure 18.

According to an agency official, HHS as a whole struggles with transitions between secretaries because each change in leadership brings a change in initiatives. Approximately 25 percent of HHS senior executives' plans identified performance expectations related to leading and facilitating change in the organization. For example, several senior executives' plans identified actions the executives were going to take in terms of succession planning and leadership development for their organizations. Specifically, a senior executive in the National Institutes of Health set the expectation to develop a workforce plan that supports the future needs of the office, including addressing such things as succession and transition planning. About 33 percent of senior executives' plans in FDA and 15 percent in CDC identified performance expectations related to leading and facilitating change.
To help address this issue of continuity in leadership and transitions, HHS identified as part of its One-HHS objectives a goal to "implement strategic workforce plans that improve recruitment, retention, hiring and leadership succession results for mission critical positions." Education requires all senior executives to include a general performance expectation in their performance plans related to change: "initiates new and better ways of doing things; creates real and positive change." Approximately 98 percent of the senior executives' plans included this expectation. Almost none of the NASA senior executives' performance plans identified an expectation related to leading and facilitating change during transitions. An agency official indicated that while NASA did not set a specific expectation for senior executives to include in their individual performance plans, leading and facilitating change is addressed through several of the critical elements. For example, for the "Health of NASA" critical element, senior executives are to demonstrate actions that contribute to safe and successful mission accomplishment and facilitate knowledge sharing within and between programs and projects. We have reported that NASA recognizes the importance of change management through its response to the Columbia Accident Investigation Board's findings. NASA indicated that it would increase its focus on the human element of change management and organizational development, among other things, to improve the agency's culture.

Senior executives need to lead the way for federal agencies to transform their cultures to be more results oriented, customer focused, and collaborative in nature to meet the challenges of the 21st century. Performance management systems can help manage and direct this transformation process. Education, HHS, and NASA have undertaken important and valuable efforts, but these agencies need to continue to make substantial progress in using their senior executive performance management systems to strengthen the linkage between senior executive performance and organizational success through the key practices for effective performance management. Consistent with our findings and OPM's reviews across the executive branch, these agencies must use their career senior executive performance management systems as strategic tools. In addition, as the administration is about to implement a performance-based pay system for the SES, valid, reliable, and transparent performance management systems with reasonable safeguards are critical. The experiences and progress of Education, HHS, and NASA should prove helpful to those agencies as well as provide valuable information to other agencies as they seek to use senior executive performance management as a tool to drive internal change and achieve external results.

Overall, we recommend that the Secretaries of Education and HHS and the Administrator of NASA continue to build their career senior executive performance management systems around the nine key practices for effective performance management. Specifically, we recommend the following.

The Secretary of Education should reinforce these key practices by taking the following seven actions:

Require senior executives to set specific levels of performance that are linked to organizational goals to help them see how they directly contribute to organizational goals.
2. Require senior executives to identify in their individual performance plans programmatic crosscutting goals that would require collaboration to achieve and clearly identify the relevant internal or external organizations with which they would collaborate to achieve these goals.
3. Provide disaggregated performance information from various sources to help facilitate senior executive decision making and progress in achieving organizational results, customer satisfaction, and employee perspectives.
4. Require senior executives to take follow-up actions based on the performance information available to them in order to make programmatic improvements, and formally recognize executives for these actions.
5. Build in additional safeguards when linking pay to performance by communicating the overall results of the performance management decisions.
6. Make meaningful distinctions in senior executive performance through both ratings and bonuses.
7. Involve senior executives in future refinements to the performance management system and offer training on the system, as appropriate.
The Secretary of HHS should reinforce these key practices by taking the following seven actions:
1. Require senior executives to clearly identify in their individual performance plans the relevant internal or external organizations with which they would collaborate to achieve programmatic crosscutting goals.
2. Provide disaggregated performance information from various sources to help facilitate senior executive decision making and progress in achieving organizational results, customer satisfaction, and employee perspectives.
3. Require senior executives to take follow-up actions based on the performance information available to them in order to make programmatic improvements, and formally recognize executives for these actions.
4. Build in additional safeguards when linking pay to performance by communicating the overall results of the performance management decisions.
5. Make meaningful distinctions in senior executive performance through ratings.
6. Involve senior executives in future refinements to the performance management system and offer training on the system, as appropriate.
7. Set specific performance expectations for senior executives related to leading and facilitating change management initiatives during ongoing transitions throughout the organization that executives should include in their individual performance plans.
The Administrator of NASA should reinforce these key practices by taking the following eight actions:
1. Require senior executives to set specific levels of performance that are linked to organizational goals to help them see how they directly contribute to those goals.
2. Require senior executives to identify in their individual performance plans programmatic crosscutting goals that would require collaboration to achieve and clearly identify the relevant internal or external organizations with which they would collaborate to achieve these goals.
3. Provide disaggregated performance information from various sources to help facilitate senior executive decision making and progress in achieving organizational results, customer satisfaction, and employee perspectives.
4. Require senior executives to take follow-up actions based on the performance information available to them in order to make programmatic improvements, and formally recognize executives for these actions.
5. Build in additional safeguards when linking pay to performance by communicating the overall results of the performance management decisions.
6. Make meaningful distinctions in senior executive performance through both ratings and bonuses.
7. Involve senior executives in future refinements to the performance management system and offer training on the system, as appropriate.
8. Set specific performance expectations for senior executives related to leading and facilitating change management initiatives during ongoing transitions throughout the organization that executives should include in their individual performance plans.
We provided a draft of this report to the Secretaries of Education and HHS and the Administrator of NASA for their review and comment. We also provided a draft of the report to the Directors of OPM and OMB for their information. We received written comments from Education, HHS, and NASA, which are presented in appendixes IV, V, and VI. NASA’s Deputy Administrator stated that the draft report is generally positive and that NASA concurs with all the recommendations and plans to implement them in its next SES appraisal cycle, beginning July 1, 2004. HHS’s Acting Principal Deputy Inspector General stated that HHS had no comments upon review of the draft report. In responding to our recommendations, Education’s Assistant Secretary for Management and Chief Information Officer stated that Education plans to revise its existing senior executive performance management system dramatically in light of OPM’s draft regulations for the new SES pay for performance system and described specific actions Education plans to take. These actions are generally consistent with our recommendations, and their successful completion will be important to achieving the intent of our recommendations. However, Education stated that it does not plan to require the specific identification of the internal or external organizations with which executives collaborate, as we recommended. We disagree with Education’s position and continue to believe it should implement this recommendation. Education is taking important steps by requiring senior executives to include a general performance expectation related to collaboration and teamwork in their individual performance plans, but placing greater emphasis on this expectation is especially important for Education. We reported that Education will have to help states and school districts meet the goals of congressional actions such as the No Child Left Behind Act. Consequently, Education should require senior executives to identify the crosscutting goals and the relevant organizations with which they would collaborate to achieve them in order to help reinforce the necessary focus on results. Lastly, Education stated that it has fully implemented our recommendation for providing senior executives disaggregated performance information from various sources to help facilitate decision making and progress in achieving organizational priorities. We do not agree that Education has fully implemented this recommendation. While we recognize Education’s two sources of agencywide performance information for its senior executives, we also reported that only about one-third of the senior executives who reported that the agency provided performance information felt that the performance information was useful for making improvements and available when needed to a very great or great extent.
Consequently, Education should provide all of its senior executives performance information from various sources, disaggregated in a useful format, to help them track their progress toward achieving organizational results and other priorities, such as customer satisfaction and employee perspectives. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its date. At that time, we will provide copies of this report to other interested congressional parties, the Secretaries of Education and HHS, the Administrator of NASA, and the Directors of OPM and OMB. We will also make this report available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report, please contact me or Lisa Shames at (202) 512-6806 or at mihmj@gao.gov or shamesl@gao.gov. Other contributors are acknowledged in appendix VII. To meet our objective to assess how well selected agencies are creating linkages between senior executive performance and organizational success through their performance management systems, we applied the key practices we previously identified for effective performance management. We focused on agencies’ career Senior Executive Service (SES) members, rather than all senior-level officials, because the Office of Personnel Management (OPM) collects data on senior executives across the government. In addition, career senior executives are common to all three of the selected agencies and typically manage programs and supervise staff. We selected the Department of Education, the Department of Health and Human Services (HHS), and the National Aeronautics and Space Administration (NASA) for our review to reflect variations in mission, size, organizational structure, and use of their performance management systems for career senior executives. Within HHS, we selected two of the operating divisions—the Food and Drug Administration (FDA) and the Centers for Disease Control and Prevention (CDC)—to determine how HHS’s SES performance management system cascades down to the operating division level. We selected these two operating divisions after reviewing HHS’s strategic plan and its operating divisions’ annual performance plans to identify two divisions that contributed to the same HHS strategic goal(s) through their annual performance goals. We then reviewed the SES population data from OPM’s Central Personnel Data File to verify that the two operating divisions each had a relatively large number of senior executives. We collected and analyzed each agency’s senior executive performance management system policy manual; personnel policies and memorandums; strategic plan and annual performance plan; employee and customer satisfaction survey instruments and analyses, as appropriate; and aggregate trend data for senior executive performance ratings and bonus distributions. In addition, we reviewed OPM’s draft proposed regulations prescribing the criteria agencies must meet to obtain certification of their systems, which OPM provided for review and comment to the heads of departments and agencies, including GAO, on April 28, 2004.
We also assessed the reliability of the senior executive performance rating and bonus data provided by Education, HHS, NASA, and OPM to ensure that the data we used for this report were complete and accurate by (1) performing manual and electronic testing of required data elements, (2) comparing the data to published OPM data, when applicable, and (3) interviewing agency officials knowledgeable about the data. We determined that the data provided by the agencies and OPM were sufficiently reliable for the purposes of this report. We also interviewed the chief human capital officers at Education and HHS as well as officials at all three agencies responsible for managing human capital, implementing the strategic and annual performance plans, and administering agencywide employee and customer satisfaction surveys, as appropriate, and other agency officials identified as having particular knowledge about issues related to senior executive performance management. In addition, we met with the President of the Senior Executives Association to obtain her thoughts on the new SES performance-based pay structure and performance management in general. We assessed a probability sample of SES individual performance plans at HHS and NASA and all the SES plans at Education, using a data collection instrument we prepared, to identify how senior executives were addressing certain practices—aligning individual performance expectations with organizational goals, connecting performance expectations to crosscutting goals, using competencies, and maintaining continuity during transitions—through their individual performance plans. To randomly select the plans, we collected a list of all current career senior executives as of August/September 2003 from each agency. Since HHS’s operating divisions develop their own SES performance plans and implement their own performance management systems, we drew the sample such that it would include each operating division and be representative of all of HHS. In addition to the stratified sample for HHS overall, we reviewed all senior executives’ plans at FDA and CDC to ensure that estimates could be produced for these operating divisions. For all three agencies, we reviewed the individual performance plans most recently collected by the human resources offices. We reviewed plans from the performance appraisal cycle for HHS covering fiscal year 2003, for Education covering July 2002–June 2003, and for NASA covering July 2003–June 2004. We selected and reviewed all senior executives’ individual performance plans from Education, a simple random sample from NASA, and a stratified sample from HHS. The sample of SES performance plans allowed us to estimate characteristics of these plans for each of the three agencies. For each agency, the SES population size, the number of SES plans in the sample, and the number of plans reviewed are shown in table 5. We excluded out-of-scope cases from our population and sample; these included senior executives who had retired or resigned, were not career senior executives, or did not have individual performance plans because they were either new executives or on detail to another agency. For HHS, excluding CDC and FDA, we do not know the number of out-of-scope SES plans in the entire senior executive population; however, there were seven out-of-scope SES plans in our sample of performance plans. For this review, we estimate only to the population of in-scope SES plans.
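To make the electronic testing of required data elements concrete, a minimal sketch follows. The record fields, exclusion rules, and reason strings shown are hypothetical illustrations patterned on the out-of-scope categories described above, not GAO's actual test procedures.

```python
# Hypothetical sketch of electronic testing of required data elements and
# out-of-scope screening; field names and rules are illustrative only.
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class SesRecord:
    executive_id: str
    appointment_type: str       # e.g., "career" or "noncareer"
    rating: Optional[str]       # annual summary rating; None if missing
    has_plan: bool              # individual performance plan on file

def screen(records: List[SesRecord]) -> Tuple[List[SesRecord], List[str]]:
    """Return the in-scope records plus a log of exclusion reasons."""
    in_scope: List[SesRecord] = []
    exclusions: List[str] = []
    for r in records:
        if r.appointment_type != "career":
            exclusions.append(f"{r.executive_id}: not a career senior executive")
        elif not r.has_plan:
            exclusions.append(f"{r.executive_id}: no individual performance plan")
        elif r.rating is None:
            exclusions.append(f"{r.executive_id}: required rating element missing")
        else:
            in_scope.append(r)
    return in_scope, exclusions
```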
All population estimates based on this plan review are for the target population, defined as SES performance plans for the most recent year available from each of the three agencies. For Education, we report actual numbers for our review of individual performance plans, since we reviewed all the plans. For HHS and NASA, we produced estimates for the population of all SES performance plans in those agencies for the relevant year. Estimates were produced using methods appropriate for simple random sampling for NASA and for stratified random sampling for HHS. For NASA, and for each stratum for HHS, we formed estimates by weighting the data by the ratio of the population size to the number of plans reviewed. For NASA, we considered the 81 plans obtained and reviewed to be a probability sample. The HHS and NASA performance plan samples are subject to sampling error. There was no sampling error for the census review of senior executives’ performance plans for FDA, CDC, and Education. Sampling errors occur because we use a sample to draw conclusions about a larger population; their effects can be expressed as confidence intervals based on statistical theory. The sample we drew was only one of a large number of samples of performance plans that might have been drawn, and if different samples had been taken, the results might have been different. To recognize the possibility that other samples might have yielded other results, we express our confidence in the precision of our particular sample’s results as a 95 percent confidence interval, which is expected to include the actual result for 95 percent of samples of this type. We calculated confidence intervals for this sample using methods appropriate for the sample design. For HHS estimates in this report, we are 95 percent confident that, when sampling error is considered, the results we obtained are within ±9 percentage points of what we would have obtained if we had reviewed the plans of the entire study population, unless otherwise noted. For NASA, the 95 percent confidence intervals for percentage estimates are no wider than ±6 percentage points, unless otherwise noted.
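As a rough illustration of the estimation approach described above, the sketch below applies the textbook stratified estimator for a proportion, weighting each stratum by its population share and applying a finite population correction. The stratum figures are hypothetical and do not correspond to HHS's actual operating divisions, and the report does not state the exact variance formula used.

```python
# Hypothetical sketch of stratified estimation for a proportion with a
# 95 percent confidence interval; stratum figures are illustrative only.
import math

def stratified_proportion(strata):
    """strata: list of (N_h, n_h, x_h) tuples, where N_h is the stratum
    population size, n_h the number of plans reviewed, and x_h the number
    of reviewed plans exhibiting the characteristic of interest."""
    N = sum(N_h for N_h, _, _ in strata)
    p_hat, variance = 0.0, 0.0
    for N_h, n_h, x_h in strata:
        w_h = N_h / N                  # stratum weight (population share)
        p_h = x_h / n_h                # stratum sample proportion
        fpc = (N_h - n_h) / N_h        # finite population correction (1 - n_h/N_h)
        variance += (w_h ** 2) * fpc * p_h * (1 - p_h) / (n_h - 1)
        p_hat += w_h * p_h
    return p_hat, 1.96 * math.sqrt(variance)   # estimate, 95% half-width

# Three illustrative strata (operating divisions): (N_h, n_h, x_h).
estimate, half_width = stratified_proportion([(60, 30, 18), (120, 40, 22), (90, 35, 14)])
print(f"estimate {estimate:.1%}, 95% CI half-width +/-{half_width:.1%}")
```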
We administered a Web-based questionnaire to the study population of all career senior executives at Education, HHS, and NASA to obtain information on their experiences with and perceptions of their performance management systems. We collected a list of all career senior executives and their e-mail addresses from each agency as of August/September 2003 to identify the respondents for our survey. We structured the questionnaire around the key practices we identified for effective performance management and included some questions about senior executives’ overall perceptions of their performance management systems. The questions were nearly identical across the agencies, though some introductory language and terminology varied. The complete questionnaire and results are shown in appendix II. Although all senior executives were sampled, in implementing the survey we found that some executives were out of scope because they had retired or resigned, were not career senior executives, or otherwise did not respond. Table 6 contains a summary of the survey disposition for the surveyed cases at the three agencies. Table 7 summarizes why individuals originally included in the target population by each agency were removed from the sample. For Education, we surveyed a total of 57 career senior executives and received completed questionnaires from 41, for a response rate of 72 percent. For HHS, we surveyed a total of 317 career senior executives and received completed questionnaires from 213, for a response rate of 67 percent. For NASA, we surveyed a total of 393 career senior executives and received completed questionnaires from 260, for a response rate of 66 percent. We obtained responses from across Education and from all subentities within HHS and NASA and had no reason to expect that the views of nonrespondents would differ from those of respondents. Consequently, our analysis of the survey data treats the respondents as a simple random sample of the population of senior executives at each of the three agencies. We also reviewed whether senior executives who had served less than 1 year at an agency tended to respond differently from those with more than 1 year of experience. We did find some differences on certain questions, for which individuals who had served as senior executives for less than 1 year were more likely to answer “no basis to judge/not applicable,” and noted these differences in the report. The estimated percentage of senior executives responding “no basis to judge/not applicable” to questions ranged from 0 to 24 percent. Since this range is relatively wide, we have reported “no basis to judge/not applicable” as a separate response category for each question in appendix II. The particular sample of senior executives (those who responded to the survey) we obtained from each agency was only one of a large number of such samples that we might have obtained, and each of these different samples might have produced slightly different results. To recognize the possibility that other samples might have yielded other results, we express our confidence in the precision of our particular sample’s results as a 95 percent confidence interval. For Education, unless otherwise noted, the survey responses have a margin of error within ±9 percent at a 95 percent level of confidence. For HHS and NASA, unless otherwise noted, the survey responses have a margin of error within ±4 percent at a 95 percent level of confidence. In addition to sampling error, other potential sources of error associated with surveys, such as question misinterpretation, may be present; nonresponse may also be a source of nonsampling error. We took several steps to reduce these other sources of error. We conducted pretests of the questionnaire both with appropriate senior executives in GAO and with senior executives in the three agencies surveyed to ensure that the questionnaire (1) was clear and unambiguous, (2) did not place undue burden on individuals completing it, and (3) was independent and unbiased. We pretested a paper copy of the survey with three senior executives in GAO who did not work in the human capital area. We then had a human resources professional at each agency review the survey for agency-specific content and language. We conducted six pretests overall with senior executives in the audited agencies—one at Education, three at HHS, and two at NASA. The first four were conducted using a paper version of the questionnaire and the final two were conducted using the Web version.
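The reported margins of error are consistent with a standard worst-case calculation, assuming a proportion of 0.5 and treating each agency's respondents as a simple random sample drawn without replacement from the executive population. The sketch below reproduces the bounds from the population and respondent counts given above, though the exact formula GAO used is not stated in the report.

```python
# Back-of-the-envelope check on the reported margins of error, assuming a
# worst-case proportion of 0.5 and a finite population correction.
import math

def margin_of_error(population: int, respondents: int, p: float = 0.5) -> float:
    """95 percent margin of error for a proportion, treating respondents as a
    simple random sample drawn without replacement from the population."""
    fpc = (population - respondents) / (population - 1)
    return 1.96 * math.sqrt(p * (1 - p) / respondents * fpc)

for agency, N, n in [("Education", 57, 41), ("HHS", 317, 213), ("NASA", 393, 260)]:
    print(f"{agency}: +/-{margin_of_error(N, n):.1%}")
# Prints roughly +/-8.2%, +/-3.9%, and +/-3.5%, within the +/-9 and
# +/-4 percent bounds reported above.
```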
To increase the response rate for each agency, we sent a reminder e-mail about the survey to those senior executives who had not completed the survey in the initial time frame and conducted follow-up telephone calls to those who had not completed the survey following the reminder e-mail. The HHS and NASA surveys were available from October 22, 2003, through January 16, 2004, and the Education survey was available from November 3, 2003, through January 16, 2004. We performed our work in Washington, D.C., from August 2003 through March 2004 in accordance with generally accepted government auditing standards.
We administered a Web-based questionnaire to the study population of all career senior executives at Education, HHS, and NASA to obtain information on their experiences with and perceptions of their performance management systems. We structured the questionnaire around key practices we identified for effective performance management. The response rates and margins of error for each agency are as follows. For Education, we surveyed a total of 57 career senior executives and received completed questionnaires from 41, for a response rate of 72 percent; unless otherwise noted, the survey responses have a margin of error within ±9 percent at a 95 percent level of confidence. For HHS, we surveyed a total of 317 career senior executives and received completed questionnaires from 213, for a response rate of 67 percent; unless otherwise noted, the survey responses have a margin of error within ±4 percent at a 95 percent level of confidence. For NASA, we surveyed a total of 393 career senior executives and received completed questionnaires from 260, for a response rate of 66 percent; unless otherwise noted, the survey responses have a margin of error within ±4 percent at a 95 percent level of confidence.
The questionnaire items are listed below. Executives rated most statements on the scale “to a great extent,” “to a moderate extent,” “to a small extent,” or “to no extent,” with “no basis to judge/not applicable” available for most items; the yes/no collaboration questions also offered “does not apply given my current position”; and the two statements beginning “I am rewarded” were rated “strongly agree,” “agree,” “neither agree nor disagree,” “disagree,” or “strongly disagree.” [Tables omitted: for each item, the appendix reported the percentage of senior executives selecting each response category, by agency.]
- You see a connection between your daily activities and the achievement of organizational goals. (The HHS version read: You see a connection between your daily activities and HHS’s priorities.)
- You communicate your performance expectations to the individuals who report to you to help them understand how they can contribute to organizational goals.
- You collaborate with others to achieve crosscutting goals.
- You identify strategies for collaborating with others to achieve crosscutting goals.
- You are recognized through your performance management system for contributing to crosscutting goals.
- Do you collaborate with other offices within Education (other operating divisions within HHS; other centers within NASA) to achieve crosscutting goals?
- Do you collaborate with other agencies or organizations outside of Education (HHS; NASA) to achieve crosscutting goals?
- Your agency formally provides performance information that allows you to track your work unit’s performance.
- Your agency formally provides performance information that allows you to compare the performance of your work unit to that of other work units.
- Your agency formally provides performance information that allows you to compare the performance of your work unit to that of your agency.
- Your agency formally provides performance information that is available to you when you need it.
- Your agency formally provides performance information that is useful for making improvements in your work unit’s performance.
- You identified areas for improvement based on performance information formally provided by your agency.
- You took action on any identified areas of improvement.
- You documented areas for improvement in your individual performance plan.
- You are recognized through your performance management system for taking follow-up actions.
- The competencies you demonstrate help you contribute to the organization’s goals.
- You are recognized through your performance management system for your demonstration of the competencies.
- I am rewarded for accomplishing the performance expectations identified in my individual performance plan.
- I am rewarded for helping my agency accomplish its goals.
- You understand the criteria used to award bonuses (e.g., cash awards).
- You understand the criteria used to award pay level adjustments (e.g., an increase from SES level 1 to level 2).
- Pay level adjustments are dependent on an individual’s contribution to the organization’s goals.
- Bonuses are dependent on an individual’s contribution to the organization’s goals.
- Your agency’s SES performance management system uses performance ratings to make meaningful distinctions between acceptable and outstanding performers.
- Your agency’s SES performance management system uses bonuses to make meaningful distinctions between acceptable and outstanding performers.
- Your agency uses performance information and documentation to make distinctions in senior executive performance.
- Your agency provides candid and constructive feedback that allows you to maximize your contribution to organizational goals.
- You have been given the opportunity to be involved in refining your agency’s SES performance management system.
- You have been involved in refining your agency’s SES performance management system.
- Formal training on your agency’s SES performance management system is available to you.
- You have participated in formal training on your agency’s SES performance management system.
- Your overall involvement in the SES performance management system has increased your understanding of it.
- Your agency’s SES performance management system is used as a tool to manage the organization.
- Your agency’s SES performance management system is used in achieving organizational goals.
- Your agency’s SES performance management system holds you accountable for your contributions to organizational results.
- Your agency’s SES performance management system facilitates discussions about your performance as it relates to organizational goals during the year.
- Your agency’s SES performance management system helps to maintain a consistent focus on organizational goals during transitions, such as changes in leadership (at any level) and change management initiatives.
Education required all of its senior executives to include three critical elements in their individual performance plans for the 2003 performance appraisal cycle (July 2002–June 2003). The critical elements and examples of the related individual and organizational performance requirements include the following.
Leadership, management, and coaching: Takes leadership in promoting and implementing the department’s mission, values, and goals; develops and communicates a clear, simple, customer-focused vision/direction for the organization and customers that is consistent with the department’s mission and strategic goals; fosters improved workforce productivity and effective development and recognition of employees; and promotes collaboration and teamwork, including effective union-management relations, where appropriate.
Work quality, productivity, and customer service: Produces or assures quality products that are useful and succinct, that identify and address problems or issues, and that reflect appropriate analysis, research, preparation, and sensitivity to department priorities and customer needs; anticipates and responds to customer needs in a professional, effective, and timely manner; initiates new and better ways of doing things; and creates real and positive change.
Job specifics: Senior executives are to include performance expectations that are applicable to their individual positions and support their principal offices’ goals as well as the department’s strategic goals and priorities, including the President’s Management Agenda, the Blueprint for Management Excellence, and the Culture of Accountability.
Education sets guidelines for its offices to follow in appraising performance and recommending senior executives for bonuses. The senior executive performance appraisals are to be based on demonstrated results related to Education’s goals and priorities, including the President’s Management Agenda, the Blueprint for Management Excellence, the Culture of Accountability, and the Secretary’s strategic plan.
In addition, the senior executive’s appraisal is to be based on both individual and organizational performance, taking into account results achieved in accordance with the department’s strategic plan and goals, which are developed in accordance with the Government Performance and Results Act of 1993 (GPRA); the effectiveness, productivity, and performance quality of the employees for whom the senior executive is responsible; and promotion of equal employment opportunity and diversity and compliance with merit system principles. In addition, the responses of customers, coworkers, and employees through the automated performance feedback process are to be considered in determining the senior executive’s performance rating. Senior executives must receive a performance rating of “successful” to be eligible for a bonus. Bonus recommendations are to be based on the senior executive’s demonstrated results and accomplishments toward the department’s strategic goals and organizational priorities. Accomplishments should demonstrate that Education’s achievements would not have been possible without the senior executive’s leadership and contribution.
HHS required its senior executives to set measurable, specific performance expectations in their fiscal year 2003 individual performance plans (or performance contracts) that align with HHS’s strategic goals, the “One-HHS” management and program objectives, and their operating divisions’ annual performance goals. According to agency officials, senior executives are to choose the One-HHS objectives and the strategic and annual performance goals that relate to their job responsibilities and tailor their individual performance expectations to reflect these responsibilities in their performance plans. The One-HHS objectives, which reflect the program and management priorities of the Secretary, include the following.
Management objectives: The purpose of these objectives is to better integrate HHS management functions to ensure coordinated, seamless, and results-oriented management across all operating and staff divisions of the department.
1. Implement results-oriented management.
2. Implement strategic human capital management.
3. Improve grants management operation and oversight.
4. Complete the fiscal year 2003 competitive sourcing program.
5. Improve information technology management.
6. Achieve administrative efficiencies.
7. Continue implementation of the unified financial management system.
8. Consolidate management functions.
9. Achieve efficiencies through HHS-wide procurements.
10. Conduct program evaluations and implement corrective strategies for any deficiencies identified.
Program objectives: The purpose of these objectives is to enhance the health and well-being of Americans by providing for effective health and human services and by fostering strong, sustained advances in the sciences underlying medicine, public health, and social services.
1. Increase access to health care (Closing the Gaps in Health Care).
2. Expand consumer choices in health care and human services.
3. Emphasize preventive health measures (Preventing Disease and Illness).
4. Prepare for and effectively respond to bioterrorism and other public health emergencies (Protecting Our Homeland).
5. Improve health outcomes (Preventing Disease and Illness).
6. Improve the quality of health care (21st Century Health Care).
7. Advance science and medical research (Improving Health Science).
8. Improve the well-being and safety of families and individuals, especially vulnerable populations (Leaving No Child Behind).
9. Strengthen American families (Working Toward Independence).
10. Reduce the regulatory burden on providers, patients, and consumers of HHS’s services.
In addition to the annual performance goals, operating divisions may have their senior executives include specific individual performance expectations in their performance plans. According to an agency official, the senior executives in FDA have set expectations in their plans that are relevant to the work of their centers. For example, the senior executives who work on issues related to mad cow disease in the Center for Veterinary Medicine have included goals related to this work in their individual performance plans. HHS sets general guidance for operating divisions to follow when appraising senior executive performance and recommending senior executives for bonuses and other performance awards, such as the Presidential Rank Awards. Overall, a senior executive’s performance is to be appraised at least annually based on a comparison of actual performance with the expectations in the individual performance plan. The operating divisions are to appraise senior executive performance taking into account such factors as measurable results achieved in accordance with the goals of GPRA; customer satisfaction; the effectiveness, productivity, and performance quality of the employees for whom the executive is responsible; and meeting affirmative action, equal employment opportunity, and diversity goals and complying with merit system principles. In recommending senior executives for bonuses, operating divisions are to consider each senior executive’s performance, including the rating and the extent of the executive’s contributions to meeting organizational goals. Senior executives who receive ratings of “fully successful” are eligible to be considered for bonuses. For fiscal year 2003, bonuses generally were to be recommended for no more than one-third of an operating division’s senior executives and awarded only to exceptional performers. Operating divisions were to consider nominating only one or two of their very highest contributors for the governmentwide Presidential Rank Awards. The greatest consideration for bonuses and Presidential Rank Awards was to be given to executives in frontline management positions with direct responsibility for HHS’s programs.
NASA requires its senior executives to include seven critical elements, which reflect the Administrator’s priorities and NASA’s core values of safety, people, excellence, and integrity, in their individual performance plans for the 2004 performance appraisal cycle (July 2003–June 2004). Senior executives may modify the related performance requirements by making them more specific to their jobs. These seven critical elements and the related performance requirements are as follows.
The President’s Management Agenda: Understands the principles of the President’s Management Agenda and actively applies them; assures maximum organizational efficiency, is customer focused, and incorporates presidential priorities in budget and performance plans; capitalizes on opportunities to integrate human capital issues in planning and performance and to expand electronic government and competitive sourcing; and pursues other opportunities to reduce costs and improve service to customers.
Performance requirement: Applicable provisions of the agency human capital plan are implemented; financial reports are timely and accurate; clear, measurable programmatic goals and outcomes are linked to the agency strategic plan and the GPRA performance plan; and human capital, e-government, and competitive sourcing goals are achieved.
Health of NASA: Actions contribute to safe and successful mission accomplishment and/or strengthen the infrastructure of support functions; increases efficient and effective management of the agency; facilitates knowledge sharing within and between programs and projects; and displays unquestioned personal integrity and commitment to safety.
Performance requirement: Demonstrates that safety is the organization’s number one value; actively participates in safety and health activities, supports the zero lost-time injury goals, and takes action to improve workforce health and safety; meets or exceeds cost and schedule milestones and develops creative mechanisms and/or capitalizes on opportunities to facilitate knowledge sharing; and achieves maximum organizational efficiency through effective resource utilization and management.
Equal opportunity (EO) and diversity: Demonstrates a commitment to EO and diversity by proactively implementing programs that positively affect the workplace and NASA’s external stakeholders and through voluntary compliance with EO laws, regulations, policies, and practices; this includes such actions as ensuring EO in hiring by providing, if needed, reasonable accommodation(s) to an otherwise qualified individual with a disability, or ensuring EO without regard to race, color, national origin, sex, sexual orientation, or religion in all personnel decisions and in the award of grants or other federal funds to stakeholder recipients.
Performance requirement: Actively supports EO/diversity efforts; consistently follows applicable EO laws, regulations, Executive Orders, and administration and NASA policies, and the principles thereof, in decision making with regard to employment actions and the award of federal grants and funds; and cooperates with and provides a timely and complete response to NASA’s Discrimination Complaints Division, the U.S. Equal Employment Opportunity Commission, and the courts during the investigation, resolution, and/or litigation of allegations of illegal discrimination under applicable EO laws and regulations.
Collaboration: Integrates the One-NASA approach to problem solving, program/project management, and decision making; leads by example by reaching out to other organizations and NASA centers to collaborate on work products; seeks input and expertise from a broad spectrum; and demonstrates strong organizational and interpersonal skills.
Performance requirement: Provides the appropriate level of high-quality support to peers and other organizations to enable the achievement of the NASA mission; results demonstrate support of One-NASA and that stakeholder and customer issues were taken into account.
Professional development: Has a breadth of experience in different organizations, agencies, functional areas, and/or geographic locations; demonstrates continual learning in functional and leadership areas, for example, through advanced education/training or participation in seminars; encourages and supports development and training of assigned staff; and, where feasible, seeks, accepts, and encourages opportunities for developmental assignments in other functional areas and elsewhere in NASA, with a focus on broadening agencywide perspective.
Performance requirement: Participates in training/learning experiences appropriate to position responsibilities and to broadening agencywide perspective, and actively plans for and supports the participation of subordinate staff in training and development activities.
Meets program objectives: Meets and advances established agency program objectives and achieves high-quality results; demonstrates the ability to follow through on commitments; and fits into the long-term human capital strategy and could be expected to make future contributions at a higher level or in a different capacity at the same level.
Performance requirement: Meets appropriate GPRA/NASA strategic plan goals and objectives; customers recognize results for their high quality and responsiveness to requirements/agreements.
Implements a fair and equitable performance-based system within the organizational component (applicable only to supervisory positions): Implements/utilizes a fair, equitable, and merit/performance-based process/system for the evaluation of individuals for bonuses, promotions, career advancements, and general recognition.
Performance requirement: The system reflects the key leadership, teamwork, and professional excellence on which decisions are based; results have credibility with supervisors, subordinates, and peers.
NASA provides guidance for the centers and offices to follow in appraising senior executive performance and recommending executives for bonuses or other performance awards, such as Presidential Rank Awards or incentive awards. The senior executive’s performance appraisal is to focus on results toward the performance requirements specified in the individual performance plan, specifically the achievements that address the agency’s goals rather than the quality of effort expended. In addition, senior executive appraisals are to be based on individual and organizational performance, taking into account such factors as results achieved in accordance with the goals of GPRA; the effectiveness, productivity, and performance of assigned employees; meeting safety and diversity goals; complying with merit system principles; a customer perspective, focusing on customer needs and expectations; an employee perspective, focusing on employee needs, such as training, internal processes, and tools to accomplish tasks successfully and efficiently; and a business perspective, focusing on outcomes and the social/political impacts that define the role of the agency and the business processes needed for organizational efficiency and effectiveness. In considering customer, employee, and other stakeholder perspectives for senior executive appraisals, rating officials may use formal mechanisms, such as surveys, or less formal mechanisms, such as unsolicited customer and employee feedback and analysis of personnel data, such as turnover rates, diversity reports, grievances, and workforce awards and recognition.
All senior executives with annual summary ratings of “fully successful” or higher are eligible to be considered for bonuses. Bonus recommendations are to be based solely on exceptional performance as specified and documented in the senior executive’s performance plan. In addition to the individuals named above, Janice Lichty Latimer, Erik Hallgren, Ronald La Due Lake, Mark Ramage, Nyree M. Ryder, and Jerry Sandau made key contributions to this report. | Congress and the administration have established a new performance-based pay system for members of the Senior Executive Service (SES) that is designed to provide a clear and direct linkage between SES performance and pay. Also, GAO previously reported that significant opportunities exist for agencies to hold the SES accountable for improving organizational results. GAO assessed how well selected agencies are creating linkages between SES performance and organizational success by applying nine key practices GAO previously identified for effective performance management. GAO selected the Department of Education, the Department of Health and Human Services (HHS), and the National Aeronautics and Space Administration (NASA). Senior executives need to lead the way to transform their agencies' cultures to be more results-oriented, customer focused, and collaborative in nature. Performance management systems can help manage and direct this process. While Education, HHS, and NASA have undertaken important and valuable efforts to link their career SES performance management systems to their organizations' success, there are opportunities to use their systems more strategically. For example, as indicated by the executives themselves, the agencies can better use their performance management systems as a tool to manage the organization or to achieve organizational goals. As Congress and the administration are reforming SES pay to better link pay to performance, valid, reliable, and transparent performance management systems with reasonable safeguards are critical.
Information on the experiences and knowledge of these agencies should provide valuable insights to other agencies as they seek to drive internal change and achieve external results. |
The Resource Conservation and Recovery Act (RCRA) requires EPA to identify which wastes should be regulated as hazardous waste under subtitle C and to establish regulations to manage them. For example, hazardous waste landfills, such as those used for disposing of ash from hazardous waste incinerators, generally must comply with certain technological requirements. These requirements include having double liners to prevent groundwater contamination as well as groundwater monitoring and leachate collection systems. In 1980 the Congress amended RCRA to, among other things, generally exempt cement kiln dust from regulation under subtitle C, pending EPA’s completion of a report to the Congress and a subsequent determination on whether regulations under subtitle C were warranted. The Congress required that EPA’s report on cement kiln dust include an analysis of (1) the sources and the amounts of cement kiln dust generated annually, (2) present disposal practices, (3) the potential danger the disposal of this dust poses to human health and the environment, (4) documented cases of damage caused by this dust, (5) alternatives to current disposal methods, (6) the costs of alternative disposal methods, (7) the impact these alternatives would have on the use of natural resources, and (8) the current and potential uses of cement kiln dust. As of May 1994, there were about 115 cement kiln facilities operating in 37 states and Puerto Rico. Of these, 24 were authorized to burn hazardous waste to supplement their normal fuel. Even with the 1980 exemption, certain aspects of cement kilns’ operations remain subject to environmental controls. Under the Clean Air Act, EPA requires cement kiln facilities to comply with ambient air quality standards for particulate matter. Under the Clean Water Act, EPA regulates the discharge of wastewater and storm water runoff from cement kiln facilities. Under the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA or Superfund), EPA can require cement kiln facilities to clean up contamination resulting from cement kiln dust. In August 1991, EPA’s regulations for boilers and industrial furnaces that burn hazardous waste took effect. While every cement kiln that burns hazardous waste is subject to these regulations, its dust is not classified as hazardous waste if at least 50 percent (by weight) of the materials the kiln processes are normal cement-production raw materials and the kiln’s owner or operator demonstrates that burning hazardous waste does not significantly affect the toxicity of the dust. EPA Office of Solid Waste officials said they are not aware of any of the 24 cement kilns authorized to burn hazardous waste that is required to manage its dust as a hazardous waste. Despite these existing controls, in making its regulatory determination in February 1995, EPA stated that additional controls over cement kiln dust are warranted under RCRA because of its potential to harm human health and the environment. EPA also determined that existing regulations, such as those under the Clean Air Act, may need to be improved because they are not tailored to cement kiln dust or because their implementation is inconsistent among the states. As partial justification, EPA cited 14 cases in which cement kiln dust has damaged groundwater and/or surface water and 36 cases in which cement kiln dust has damaged the air.
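The subtitle C exemption test for kilns that burn hazardous waste amounts to a two-part decision rule; the sketch below illustrates its logic with hypothetical inputs. In practice the toxicity condition rests on a demonstration to regulators rather than a simple flag, so this is an illustration of the rule's structure only.

```python
# Hypothetical sketch of the two-part exemption test described above;
# inputs and the boolean toxicity flag are illustrative only.
def dust_exempt_from_subtitle_c(raw_material_weight: float,
                                total_feed_weight: float,
                                toxicity_unaffected: bool) -> bool:
    """Dust from a kiln burning hazardous waste avoids hazardous-waste
    classification only if normal cement-production raw materials make up
    at least 50 percent of the feed by weight AND the owner or operator
    demonstrates that burning hazardous waste does not significantly
    affect the toxicity of the dust."""
    raw_fraction = raw_material_weight / total_feed_weight
    return raw_fraction >= 0.5 and toxicity_unaffected

print(dust_exempt_from_subtitle_c(600.0, 1000.0, True))   # True: 60% raw feed
print(dust_exempt_from_subtitle_c(450.0, 1000.0, True))   # False: under 50%
```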
EPA also cited the general lack of groundwater monitoring systems around dust management units at cement kiln facilities and the current lack of federal regulations to protect groundwater from the risks posed by cement kiln dust. Furthermore, after collecting and analyzing site-specific information, EPA concluded that potential risks did exist at some facilities. Although in 1980 the Congress directed EPA to complete its report on cement kiln dust by 1983 and to determine within 6 months thereafter whether regulations were warranted, EPA did not do so. It completed its report in December 1993 and issued its determination in February 1995. EPA officials said that the agency did not meet these statutory deadlines because, at that time, it viewed completing the report on cement kiln dust as a lower priority than other work. According to EPA’s Acting Chief of the Special Wastes Branch, the agency ranked completing its report and determination on cement kiln dust a low priority because cement facilities were considered to pose minimal risk, given the very small proportion of them on EPA’s National Priorities List. In addition, cement kiln dust is generated in smaller volumes than other high-volume wastes that EPA was required to study, such as wastes from mining for ores and minerals and from exploring for oil and gas. EPA wanted to complete studies of these high-volume, temporarily exempt wastes before completing its study of cement kiln dust. For example, EPA estimated that the mining industry generated 1.3 billion metric tons of waste in 1982, and it completed its study on these wastes in 1985. EPA officials said that they also needed to meet other statutory time frames for completing standards for wastes on which the agency placed a higher priority, such as treatment standards for land disposal of hazardous waste. In settlement of a 1989 lawsuit filed against EPA because of its failure to comply with the statutory time frames, EPA entered into a consent decree to publish a report to the Congress on cement kiln dust on or before December 31, 1993. The decree also called for EPA to make a regulatory determination on cement kiln dust by January 31, 1995. RCRA specifically authorizes EPA to modify several requirements that apply to hazardous waste in regulating cement kiln dust. EPA is authorized to modify those requirements that would impose minimum technological standards on new landfills or expansions of existing landfills as well as those that impose corrective action to clean up releases of wastes from units used to dispose of cement kiln dust. EPA is authorized to modify these requirements to accommodate practical difficulties associated with implementing them when disposing of cement kiln dust as well as such site-specific characteristics as the area’s climate, geology, hydrology, and soil chemistry. However, any modifications must ensure the protection of human health and the environment. Although RCRA allows EPA to modify several requirements and thus propose different standards for cement kiln dust than those for hazardous waste, EPA has not yet determined which standards might differ and how. For example, according to Office of Solid Waste officials, it is not clear whether EPA will include a corrective action requirement to clean up releases from cement kiln dust disposal units that is similar to its corrective action requirement for hazardous waste disposal units.
These officials said that EPA will likely focus its management standards on dust generated in the future, as opposed to dust that already exists at cement kiln facilities, because RCRA allows EPA to consider several factors in developing standards for cement kiln dust management, including the impact or cost any management standard may have on the cement kiln industry. Furthermore, these officials said that EPA has to be sensitive to the Congress’s regulatory reform efforts as well as the agency’s goal of taking a more common sense approach to regulating industry. Even though EPA has determined that additional controls are warranted over dust from cement kilns burning hazardous waste as well as dust from those kilns that do not, it has not determined if it will impose the same standards or controls over dust from both types of kilns. EPA’s analysis found that concentrations of 12 metals in dust from both types of cement kilns were at higher than normally occurring levels. Dust from cement kilns burning hazardous waste had concentrations of nine of these metals that were the same as or lower than those in dust from cement kilns that did not burn hazardous waste. Conversely, EPA found that concentrations of three metals—cadmium, chromium, and lead—were higher in dust from cement kilns that burn hazardous waste. (See app. I.) Even though the concentrations of these three metals were higher, EPA found that these increases did not result in discernible differences in risk estimates between dust generated by cement kilns that burn hazardous waste and those that do not. EPA also analyzed the extent to which these metals leached, or washed, out of the dust and found no significant difference between cement kilns that burn hazardous waste and those that do not burn this waste. Although EPA has not yet determined what management standards it will impose on cement kiln dust, Office of Solid Waste officials said that the agency may regulate air emissions from cement kilns burning hazardous waste differently from those that do not burn hazardous waste. According to these officials, because dioxins and furans were found in dust from cement kilns burning hazardous waste, EPA is considering revising its regulations for boilers and industrial furnaces to control their emissions. Even though the levels of these constituents were generally low, EPA believes their presence warrants concern. Even though EPA did not conclude that cement kiln dust should be classified as a hazardous waste, EPA did conclude that some facilities (in addition to those where damage to surface and/or groundwater and the air has been found) do have the potential to pose a threat to human health and the environment. While EPA plans to propose a program to control cement kiln dust within 2 years, if the agency proceeds with developing federal regulations, it could be several more years after that until cement kilns are required to implement these controls. Interim and possible final actions to reduce the current threat that cement kiln dust may pose at some facilities include requiring the cement kiln industry to adopt dust control standards without EPA’s first having to proceed through a lengthy regulatory development process and making greater use of existing regulatory authority to control cement kiln dust. One action EPA is considering to control this dust is the use of a cement kiln industry proposal called an enforceable agreement. 
After drafting the general terms of the agreement, the cement kiln industry has been working with EPA and other interested parties to negotiate what controls would be needed to protect human health and the environment. Some possible industry controls are to require landfills used to dispose of cement kiln dust to have such site-specific features as hydrogeological assessments, groundwater monitoring, surface water management, and measures to control emissions of cement kiln dust. The agreement would also specify that EPA would not impose subtitle C regulations on cement kiln dust. EPA is currently analyzing the agreement’s general terms to determine if it is allowable under RCRA and whether it would sufficiently protect human health and the environment. EPA’s consideration of this enforceable agreement to manage cement kiln dust has triggered a negative response from environmental groups. For example, the Environmental Defense Fund has questioned EPA’s authority to enter into these agreements and their enforceability if EPA does not first develop regulations that contain specific standards. In addition, the Fund questions whether these agreements would provide the same level of protection as federal regulations and whether they would allow for the public involvement that occurs in developing regulations. The Fund also questions how these agreements would affect the citizens’ ability to sue and to obtain information through the Freedom of Information Act and whether these agreements would limit federal and state criminal and civil enforcement authorities. Finally, the Fund questions whether these agreements would limit the development of state programs to control cement kiln dust. According to an Office of Solid Waste official, EPA intends to decide by late September 1995 whether it will pursue developing enforceable agreements to control cement kiln dust. Should this approach be challenged in the courts, however, controls over cement kiln dust could be further delayed. A second action under consideration is for EPA and the states to make greater use of existing regulatory authority to control cement kiln dust. Although EPA has determined that current regulations need to be improved for the proper management of cement kiln dust, in the past EPA regional offices and the states have used existing authorities at some facilities to control surface water runoff, emissions from dust piles, and groundwater contamination (i.e., the damage cases mentioned earlier). For example, according to an environmental inspector in Ohio, the state used an enforcement authority under its Remedial Response Act to better control runoff from waste piles that was contaminating a nearby stream. According to a waste management official in Michigan, the state used enforcement authority under its Air Pollution Control Act to better control emissions from dust piles. EPA has also used the Superfund program to clean up groundwater contamination at two facilities. In the course of completing its regulatory determination, EPA’s Office of Solid Waste collected information on 83 cement kiln facilities and conducted a series of studies on risk-screening and site-specific risk-modeling that could be used to determine whether existing regulatory authority should be used to control cement kiln dust at particular cement kilns. 
On the basis of the information collected and analyzed, EPA projected that several cement kiln facilities may be posing a high risk because of such factors as the amount of metals that may exist in dust disposed at those facilities, the lack of dust management controls at those facilities, and other facility-specific factors, such as proximity to agricultural lands. However, EPA’s Office of Solid Waste has not provided the results of its risk-screening and risk-modeling studies to other EPA offices or the states that are responsible for investigating facilities and taking necessary enforcement actions. (See app. II for additional information on the results of these studies.) According to Office of Solid Waste officials, much of this information is available in the public docket and EPA’s contractor has the computer tapes that were used to develop the risk estimates. However, because they did not believe that most facilities posed the degree of risk that warranted emergency action, they did not provide this information directly to EPA’s Office of Enforcement and Compliance Assurance, its regional officials, or state enforcement officials. EPA’s RCRA officials in four regions with cement kilns whose dust potentially poses a risk to groundwater said they would be interested in having the facility-specific information EPA’s Office of Solid Waste developed to prepare its report and determination. They said that they could provide the information to state environmental officials for the states’ use or could take enforcement action themselves if the regions believed the situation warranted it. In those instances in which EPA or the states lack clear enforcement authority, other actions, such as assessing facilities to better understand the risks and working cooperatively with cement kiln owners/operators to reduce these risks, could be taken. Similarly, EPA air and water officials said they would be interested in having facility-specific information for these purposes. It may be several years before EPA completes its management control program for cement kiln dust regardless of whether it decides to issue new regulations or adopt the use of an enforceable agreement to control this dust. EPA obtained information on 83 cement kiln facilities that it used to conduct a series of risk-screening and site-specific risk-modeling studies. While this information is readily available and much of it is in the public docket, EPA has not distributed it to EPA’s regional or state enforcement officials because the agency did not believe that the estimated risks warranted emergency action. Even so, EPA believes that some facilities, because of the manner in which their cement kiln dust is managed, could pose a risk. EPA regional and state enforcement officials believe that this information could assist them in determining if action should be taken at some facilities prior to EPA’s finalizing its management program to control cement kiln dust. We recommend that the Administrator, EPA, provide to EPA’s regional officials and state enforcement officials the risk-screening and site-specific risk-modeling information developed during its study of cement kiln dust so they can use this information to determine whether interim actions are needed to protect human health and the environment. We provided a draft of this report to EPA for its comments. 
We met with EPA officials, including the Acting Director, Waste Management Division, Office of Solid Waste, who generally concurred with the information presented in this report. They agreed that it would be appropriate for them to provide EPA’s regional officials and state enforcement officials information that may be useful to determine whether action should be taken to reduce the risks posed at cement kiln facilities prior to the agency’s finalizing its management program to control dust from cement kilns. Office of Solid Waste officials also suggested we clarify certain technical points. We have revised the report accordingly. To determine what priorities EPA set for making its regulatory determination on cement kiln dust, we interviewed officials from EPA’s Special Wastes Branch in its Waste Management Division, Office of Solid Waste. To determine if EPA is authorized to modify hazardous waste management requirements in regulating cement kiln dust, we reviewed RCRA and EPA’s regulatory determination on cement kiln dust. To determine whether EPA believes that dust from cement kilns that burn hazardous waste should be regulated the same as dust from those not burning such waste, we reviewed EPA’s Report to Congress on Cement Kiln Dust, its regulatory determination, and public comments received on that report as well as on other documents. We also discussed the basis for EPA’s determination with its Special Wastes Branch officials as well as officials representing the hazardous waste industry, the cement kiln industry, and environmental groups. To determine whether interim actions could be taken to control cement kiln dust while EPA is developing its management control program, we reviewed EPA’s legal authority for taking action at facilities that may pose a threat to human health and the environment, reviewed cases in which EPA or the states have used this authority in the past, and discussed EPA’s risk-screening and risk-modeling results with Office of Solid Waste officials. We also discussed options EPA and the states have with Special Wastes Branch officials in the Office of Solid Waste, Office of Enforcement and Compliance Assurance officials, EPA attorneys, and EPA and state environmental enforcement officials. We conducted our review between March and June 1995 in accordance with generally accepted government auditing standards. As discussed with your office, this report does not address new information that you provided us recently relating to metals in cement kiln dust. We agreed that we will address that information separately. As arranged with your office, unless you publicly announce this report’s contents earlier, we plan no further distribution until 30 days after its publication. At that time, we will send copies of this report to the Administrator of EPA and make copies available to others upon request. Please contact me at (202) 512-6112 if you or your staff have any questions. Major contributors to this report are listed in appendix III. EPA used a model to analyze the effect cement kiln dust could have at 52 facilities if they did not have adequate dust suppression controls for their waste piles. EPA’s model projected that over half of these facilities would exceed EPA’s health standards for fine particulate matter at plant boundaries and, potentially, at nearby residences. 
Although almost all of these facilities have some controls to suppress cement kiln dust, EPA does not have information on the adequacy of these controls, and EPA officials also noted that they saw cement kiln dust blowing during some visits to 20 facilities. EPA used the same model to analyze the effects of water running off of dust piles at 83 of the facilities. The model projected that 25 facilities could pose higher than acceptable cancer risks or noncancer threats to subsistence farmers and fishermen. Seven of these facilities did not have runoff controls. EPA also estimated that 19 facilities could pose a risk because of dioxins and furans. EPA cautioned, however, that these risk results were based on very limited sampling and modeled worst-case scenarios of unusually high dioxin and furan levels. EPA further cautioned that all of the results from its analyses of indirect exposure risks should be carefully interpreted because its model was still under peer review. Even so, Office of Solid Waste officials said that the results of all of EPA’s analyses were cause for concern. EPA’s analysis of the effects of cement kiln dust on groundwater found that about half of the cement kiln facilities were built on bedrock having characteristics that allow for the direct transport of groundwater offsite. In its analysis of 31 of these facilities, EPA found that dust from 13 of them could contaminate groundwater at levels that could exceed health standards. None of these 13 facilities had installed man-made liners under their dust piles, and 11 lacked leachate collection systems. EPA also found that groundwater at three of these facilities was within 10 feet of the bottom of their dust piles; EPA did not have information on the depth to groundwater at the remaining 10 facilities. In addition, some facilities managed cement kiln dust in quarries that could subsequently fill with water; if this occurs, leachate could more readily contaminate groundwater. In addition to the potential risks from the disposal of cement kiln dust, EPA is concerned over the use of this dust as a substitute for lime to fertilize agricultural fields. According to EPA, this use of cement kiln dust could pose cancer risks and noncancer threats for subsistence farmers if that dust contains relatively high levels of metals and dioxins. Major contributors to this report: Richard P. Johnson, Attorney; Gerald E. Killian, Assistant Director; Marcia B. McWreath, Evaluator-in-Charge; Rita F. Oliver, Senior Evaluator; Mary D. Pniewski, Senior Evaluator. 
| Pursuant to a congressional request, GAO reviewed the Environmental Protection Agency's (EPA) decisionmaking process with respect to regulating cement kiln dust, focusing on: (1) EPA priorities in making its kiln dust determination; (2) whether EPA is authorized to modify hazardous waste management requirements in regulating cement kiln dust; (3) whether EPA believes that cement kilns burning hazardous waste should be regulated the same as those not burning hazardous waste; and (4) whether interim actions can be taken to control cement kiln dust. GAO found that EPA: (1) does not give as high a priority to making a cement kiln dust determination as developing standards for other wastes considered to be of higher risk; (2) has the statutory authority to modify its hazardous waste regulations to control cement kiln dust as long as the regulations adequately protect human health and the environment; (3) believes that cement kiln dust from both types of kilns could adversely affect human health and the environment, if improperly managed; (4) has not yet determined whether it will subject the dust from the two types of kilns to the same regulations; and (5) is considering interim actions to control cement kiln dust, such as making greater use of existing regulatory authority to enforce controls over the dust and entering into an agreement with the cement kiln industry to impose additional controls over the dust. |
In 1990, we designated the Medicare program, which is administered by the Centers for Medicare and Medicaid Services (CMS) in HHS, as at high risk for improper payments because of its sheer size and vast range of participants—including about 40 million beneficiaries and nearly 1 million physicians, hospitals, and other providers. The program remains at high risk today. In fiscal year 2001, Medicare outlays totaled over $219 billion, and the HHS/OIG reported that $12.1 billion in fiscal year 2001 Medicare fee-for-service payments did not comply with Medicare laws and regulations. The Congress enacted HIPAA, in part, to respond to the problem of health care fraud and abuse. HIPAA consolidated and strengthened ongoing efforts to combat fraud and abuse in health programs and provided new criminal enforcement tools as well as expanded resources for fighting health care fraud, including $158 million in fiscal year 2000 and $182 million in fiscal year 2001. Under the joint direction of the Attorney General and the Secretary of HHS (acting through the HHS/OIG), the HCFAC program goals are as follows: coordinate federal, state, and local law enforcement efforts to control fraud and abuse associated with health plans; conduct investigations, audits, and other studies of delivery and payment for health care for the United States; facilitate the enforcement of the civil, criminal, and administrative statutes applicable to health care; provide guidance to the health care industry, including the issuance of advisory opinions, safe harbor notices, and special fraud alerts; and establish a national database of adverse actions against health care providers. Funds for the HCFAC program are appropriated from the trust fund to an expenditure account, referred to as the Health Care Fraud and Abuse Control Account, maintained within the trust fund. The Attorney General and the Secretary of HHS jointly certify that the funds transferred to the control account are necessary to finance health care anti–fraud and abuse activities, subject to limits for each fiscal year as specified by HIPAA. HIPAA authorizes annual minimum and maximum amounts earmarked for HHS/OIG activities for the Medicare and Medicaid programs. For example, of the $182 million available in fiscal year 2001, a minimum of $120 million and a maximum of $130 million were earmarked for the HHS/OIG. By earmarking funds specifically for the HHS/OIG, the Congress ensured continued efforts by the HHS/OIG to detect and prevent fraud and abuse in the Medicare and Medicaid programs. CMS performs the accounting for the control account, from which all HCFAC expenditures are made. CMS sets up allotments in its accounting system for each of the HHS and DOJ entities receiving HCFAC funds. The HHS and DOJ entities account for their HCFAC obligations and expenditures in their respective accounting systems and report them to CMS monthly. CMS then records the obligations and expenditures against the appropriate allotments in its accounting system. At DOJ, payroll constituted 78 percent of its total expenditures in fiscal year 2000 and 69 percent in fiscal year 2001. Within DOJ, the Executive Office for the United States Attorneys (EOUSA) receives the largest allotment of HCFAC funds. In EOUSA, each district is allocated a predetermined number of full-time equivalent (FTE) positions based on the historical workload of the district. 
Specific personnel who ordinarily work on health care activities, such as the Health Care Fraud Coordinator, are designated within the DOJ accounting system to have their payroll costs charged to the HCFAC account. In some districts, one FTE could be shared among several individuals, each contributing a portion of time to HCFAC assignments. EOUSA staff track the portion of time devoted to health care activity and other types of cases and investigations in the Monthly Resource Summary System on a daily or monthly basis. DOJ monitors summary information from the Monthly Resource Summary System to determine how staff members’ time is being used. The HHS/OIG expenditures represented over 96 percent of HHS’s total HCFAC expenditures in fiscal years 2000 and 2001. At HHS/OIG, HCFAC expenditures are allocated based on relative proportions of the HCFAC budget authority and the discretionary funding sources. Table 1 below identifies the relative percentages HHS/OIG used in fiscal years 2000 and 2001. HHS/OIG uses these percentages to compute the amounts of payroll and nonpayroll expenditures to be charged to its two funding sources (a brief sketch of this proportional split follows at the end of this passage). HHS/OIG tracks staff time spent on various assignments in separate management information systems (MIS). The information in the MIS is summarized and monitored quarterly to adjust the type of work planned and performed, if necessary, so that the use of the funds is consistent with the funding sources’ intended use. HIPAA also requires that amounts equal to the following types of collections be deposited into the trust fund: criminal fines recovered in cases involving a federal health care offense, including collections pursuant to section 1347 of Title 18, United States Code; civil monetary penalties and assessments imposed in health care fraud cases; amounts resulting from the forfeiture of property by reason of a federal health care offense, including collections under section 982(a)(7) of Title 18, United States Code; penalties and damages obtained and otherwise creditable to miscellaneous receipts of the Treasury’s general fund obtained under the False Claims Act (sections 3729 through 3733 of Title 31, United States Code), in cases involving claims related to the provision of health care items and services (other than funds awarded to a relator, for restitution, or otherwise authorized by law); and unconditional gifts and bequests. Criminal fines resulting from health care fraud cases are collected through the Clerks of the Administrative Office of the United States Courts. Criminal fines collections are reported to the DOJ Financial Litigation Units associated with their districts. Based on cash receipt documentation received from the Clerks, the Financial Litigation Units then post the criminal fines collection to a database. The database generates at least a biannual report of the amount of criminal fines collected, which is sent to the Department of the Treasury. Treasury relies on this report to determine the amount to deposit to the trust fund. Civil monetary penalties for federal health care offenses are imposed by CMS regional offices or the HHS/OIG against skilled nursing facilities or long-term care facilities and doctors. CMS collects civil monetary penalty amounts and reports them to the Department of the Treasury for deposit to the trust fund. 
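The percentage-based split described above is mechanical, and a small sketch may make it concrete. The Python fragment below is illustrative only: the 75/25 shares are hypothetical stand-ins for the actual fiscal year percentages, which appear in Table 1 of the report rather than here.

    # Minimal sketch of HHS/OIG's proportional charging of expenditures to its
    # two funding sources. The shares below are hypothetical; the actual
    # percentages for fiscal years 2000 and 2001 appear in Table 1.
    ALLOCATION = {"HCFAC": 0.75, "discretionary": 0.25}

    def allocate(amount):
        # Split an expenditure amount across funding sources by fixed shares.
        return {source: round(amount * share, 2) for source, share in ALLOCATION.items()}

    payroll, nonpayroll = 1_000_000.00, 250_000.00  # hypothetical quarterly totals
    print(allocate(payroll))     # {'HCFAC': 750000.0, 'discretionary': 250000.0}
    print(allocate(nonpayroll))  # {'HCFAC': 187500.0, 'discretionary': 62500.0}

Under such a scheme, no transaction-level judgment about funding source is needed; the same fixed proportions apply to every dollar of payroll and nonpayroll cost, which is why HHS/OIG monitors the MIS summaries quarterly to keep the mix of work consistent with each funding source's intended use.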
Penalties and multiple damages resulting from health care fraud cases are collected by DOJ’s Civil Division in Washington, D.C., and by Financial Litigation Units in the United States Attorneys’ offices located throughout the country. The Civil Division and United States Attorneys’ offices report collection information to DOJ’s Debt Accounting Operations Group, which reports the amount of penalties and multiple damages to the Department of the Treasury for deposit to the trust fund. HIPAA also allows CMS to accept unconditional gifts and bequests made to the trust fund. The objectives of our review were to identify and assess the propriety of amounts for fiscal years 2000 and 2001 reported as (1) deposits to the trust fund, (2) appropriations from the trust fund for HCFAC activities, (3) expenditures at DOJ for HCFAC activities, (4) expenditures at HHS for HCFAC activities, (5) expenditures for non-Medicare anti–fraud and abuse activities, and (6) savings to the trust fund. To identify and assess the propriety of deposits, we reviewed the joint HCFAC reports, interviewed personnel at various HHS and DOJ entities, obtained electronic data and reports from HHS and DOJ for the various types of deposits, and tested selected transactions to determine whether the proper amounts were deposited to the trust fund. To identify and assess the propriety of amounts appropriated from the trust fund, we reviewed the joint HCFAC reports and reviewed and analyzed documentation to support the allocation and certification of the HCFAC appropriation. To identify and assess the propriety of expenditure amounts at HHS, we interviewed personnel, obtained electronic data and reports supporting nonpayroll transactions, tested selected nonpayroll transactions, reviewed payroll allocation methodologies, and interviewed selected employees to assess the reasonableness of time and attendance charges to the HCFAC appropriation account for payroll expenditures. To identify and assess the propriety of expenditure amounts at DOJ, we interviewed personnel, obtained electronic data and reports supporting nonpayroll transactions, tested selected nonpayroll transactions, performed analytical procedures, and interviewed selected employees to assess the reasonableness of time and attendance charges to the HCFAC appropriation account for payroll expenditures. We were unable to identify and assess the propriety of expenditures for non-Medicare antifraud activities because HHS/OIG and DOJ do not separately account for or monitor such expenditures. To identify and assess the propriety of savings to the trust fund, as well as any other savings, resulting from expenditures from the trust fund for the HCFAC program, we reviewed the joint reports, interviewed personnel, reviewed recommendations and the resulting cost savings as reported in the HHS/OIG’s fiscal years 2000 and 2001 semiannual reports, and tested selected cost savings. We were unable to directly associate the reported cost savings with HCFAC because HHS and DOJ officials do not track them as such due to the nature of health care anti–fraud and abuse activities. 
We interviewed and obtained documentation from officials at CMS in Baltimore, Maryland; HHS headquarters—including the Administration on Aging (AOA), the Assistant Secretary for Budget, Technology and Finance (ASBTF), which was formerly the Assistant Secretary for Management and Budget (ASMB), the OIG, and the Office of General Counsel (OGC)—in Washington, D.C.; HHS’s Program Support Center (PSC) in Rockville, Maryland; and DOJ’s Justice Management Division, EOUSA, Criminal Division, Civil Division, and Civil Rights Division in Washington, D.C. We conducted our work in two phases, from April 2001 through June 2001 focusing primarily on fiscal year 2000 HCFAC activity, and from October 2001 through April 2002 focusing primarily on fiscal year 2001 HCFAC activity, in accordance with generally accepted government auditing standards. A detailed discussion of our objectives, scope, and methodology is contained in appendix I of this report. We requested comments on a draft of this report from the Secretary of HHS and the Attorney General or their designees. We received written comments from the Inspector General of HHS and the Acting Assistant Attorney General for Administration at DOJ. We have reprinted their responses in appendices II and III, respectively. The joint HCFAC reports included deposits of about $210 million in fiscal year 2000 and $464 million in fiscal year 2001, pursuant to HIPAA. As shown in figure 1, the sources of these deposits were primarily penalties and multiple damages. In testing at DOJ, we identified some errors in the recording of HCFAC collections that resulted in an estimated overstatement of $169,765 to the trust fund in fiscal year 2001. These uncorrected errors, which related to criminal fines deposited to the trust fund, were not detected by DOJ officials responsible for submitting collection reports to the Department of the Treasury. Our work did not identify errors in recording collections in any of the other categories for fiscal years 2000 and 2001, nor did we identify errors related to fiscal year 2000 criminal fines. Of the 58 statistically sampled criminal fines transactions we tested, 2 fines reported at $8,693 and $50,007 were supported by documentation for only $6,097 and $25,000, respectively, resulting in overstatements to the trust fund totaling over $27,000. We estimated that the most likely overstatement of collections of criminal fines deposited to the trust fund as a result of transactions incorrectly recorded was $169,765. In both cases, the errors were not detected by DOJ staff responsible for submitting the criminal fines report to the Department of the Treasury. DOJ officials told us that there was a programming mistake in generating the criminal fines report that resulted in these errors. DOJ officials also told us that the mistake has been corrected to address the problem in the future and that they plan to research the impact of the programming oversight to determine what, if any, adjustments or offsets are needed and will make the necessary corrections next quarter. While the total estimated overstatement is relatively insignificant compared to the total amount of $464 million in HCFAC collections that was reported to the trust fund in fiscal year 2001, the control weaknesses that gave rise to these errors could result in more significant misstatements. 
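The observed portion of these figures can be reproduced directly from the two documented errors; the projection to a most likely overstatement of $169,765, by contrast, comes from statistical estimation over the full dollar unit sample, whose mechanics the report does not spell out. A minimal sketch of the observed-error arithmetic, in Python:

    # The two criminal fines with recording errors (amounts from the report).
    reported  = [8_693, 50_007]   # amounts reported for deposit to the trust fund
    supported = [6_097, 25_000]   # amounts actually supported by documentation

    overstatements = [r - s for r, s in zip(reported, supported)]
    print(overstatements)        # [2596, 25007]
    print(sum(overstatements))   # 27603 -- the "over $27,000" cited above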
As reported in the joint HCFAC reports for fiscal years 2000 and 2001, the Attorney General and the Secretary of HHS certified the entire $158.2 million and $181.9 million appropriations, respectively, as necessary to carry out the HCFAC program. Based on our review, the requests for fiscal years 2000 and 2001 HCFAC appropriations were properly supported for valid purposes under HIPAA. Figures 2 and 3 present fiscal years 2000 and 2001 allocations for the HCFAC program, respectively. Based on our review, we found that the planned use of HCFAC appropriations was consistent with the purposes stated in HIPAA. According to the joint HCFAC reports, HCFAC’s increased resources have enabled HHS/OIG to broaden its efforts both to detect fraud and abuse and to help deter the severity and frequency of it. The HHS/OIG reported that HCFAC funding allowed it to open 14 new investigative offices and increase its staff levels by 61 during fiscal year 2000, with the result that OIG is closer to its goal of extending its investigative and audit staff to cover all geographical areas in the country. As shown in figures 2 and 3, we also found that DOJ and other HHS organizations requested and were granted $38.9 million in fiscal year 2000 and $51.9 million in fiscal year 2001. DOJ’s funds were used primarily to continue its efforts to litigate health care fraud cases and provide health care fraud training courses. In fiscal year 2001, $4 million of HHS’s HCFAC allocation was approved by designees of the Attorney General and the Secretary of HHS for reallocation to DOJ to support the federal government’s tobacco litigation activities. In addition, $12 million of fiscal year 2001 HCFAC funds allocated to DOJ’s Civil Division were used to support the federal government’s suit against the major tobacco companies, as allowed under HIPAA. In addition, other HHS organizations used their HCFAC allocations for the following purposes in fiscal years 2000 and 2001: The Office of General Counsel used its funds primarily for litigation activity, both administrative and judicial. CMS, the agency with primary responsibility for administering the Medicare and Medicaid programs, along with the ASMB, used its HCFAC funds allocated in fiscal year 2000 to fund contractual consultant services to establish a formal risk management function within each organization. CMS used its HCFAC funds allocated in fiscal year 2001 to assist states in developing Medicaid payment accuracy measurement methodologies and to conduct pilot studies to measure and reduce state Medicaid payment errors. The AOA was allocated funds to develop and disseminate consumer education information to older Americans and to train staff to recognize and report fraud, waste, and abuse in the Medicare and Medicaid programs. The ASBTF, formerly the ASMB, used its HCFAC funds for consultant services that will help ensure that the new HHS integrated financial management system, of which the CMS Healthcare Integrated General Ledger Accounting System will be a major component, is being developed to meet the department’s financial management goals, which include helping to prevent waste and abuse in HHS health care programs. At DOJ, we identified problems indicating that oversight of HCFAC expenditure transaction processing needs to be reemphasized. 
These problems include charging non-HCFAC transactions to the HCFAC appropriation and the inability to provide us with a detailed list of HCFAC expenditure transactions to support summary totals on its internal financial report in a timely manner. These problems could impede DOJ’s ability to adequately account for growing HCFAC expenditures, which totaled over $23.7 million for fiscal year 2000 and $26.6 million for fiscal year 2001, as shown in figure 4. We found that over $480,000 in interest penalties not related to HCFAC activities were miscoded and inadvertently charged to the HCFAC appropriation. The DOJ officials responsible for recording this transaction told us there was an offsetting error of $482,000 in HCFAC-related expenditures that were not recorded to the HCFAC account. Regardless of whether these errors essentially offset, they are indicative of a weakness in DOJ’s financial processes for recording HCFAC and other expenditures. DOJ was also unable to provide a complete and timely reconciliation of detailed transactions to summary expenditure amounts reported in its internal reports. DOJ made several attempts beginning in January 2002 to provide us with an electronic file that reconciled to its internal expenditure report. As of mid-May 2002, we had not received a reconciled file for fiscal year 2001 HCFAC expenditures. We did, however, receive a reconciled file for fiscal year 2000 HCFAC expenditures on April 23, 2002. To their credit, DOJ officials responsible for maintaining DOJ financial systems identified problems associated with earlier attempts to provide this essential information to support its internal reports. While we were ultimately able to obtain this information for fiscal year 2000, we did not receive it in sufficient time to apply statistical sampling techniques for selecting expenditure transactions for review as we had done at HHS. While we used other procedures to compensate for not obtaining this detailed data file in a timely manner, we cannot project the results of our procedures to the population of DOJ expenditures. Both Office of Management and Budget (OMB) Circular A-127, Financial Management Systems, and the Comptroller General’s Standards for Internal Control in the Federal Government require that all transactions be clearly documented and that documentation be readily available for examination. DOJ’s financial statement auditor noted several problems related to the Department’s internal controls over financial reporting, such as (1) untimely recording of financial transactions, (2) weak general and application controls over financial management systems, and (3) inadequate financial statement preparation controls. The financial statement audit report specifically discusses problems related to untimely recording of financial transactions and inadequate financial statement preparation controls at offices, boards, and divisions that process HCFAC transactions. The financial statement auditor recommended that DOJ monitor compliance with its policies and procedures. Further, the auditor recommended that DOJ consider centralizing information systems that capture redundant financial data, or consider standardizing the accumulation and recording of financial transactions in accordance with the department’s requirements. Overall, we generally found adequate documentation to support $114.9 million in fiscal year 2000 and $129.8 million in fiscal year 2001 HCFAC expenditures, as shown in figure 5. 
However, we found that a purchase for an HHS/OIG employee award in fiscal year 2001 was questionable because it did not have adequate documentation to support that it was a valid HCFAC expenditure. We also found that HHS’s policies and procedures for employee awards did not include specific guidance on documenting the purchase of such nonmonetary awards. As stated before, the Comptroller General’s Standards for Internal Control in the Federal Government call for appropriate control activities to ensure that transactions and internal control policies and procedures are clearly documented. HHS/OIG has since provided us with documentation to support the award as a valid HCFAC transaction and told us that it is revising its current policies and procedures to include nonmonetary employee awards. We were not able to identify HCFAC program trust fund expenditures that were unrelated to Medicare because the HHS/OIG and DOJ do not separately account for or monitor such expenditures. Even though HIPAA requires us to report on expenditures related to non-Medicare activities, it does not specifically require HHS or DOJ to separately track Medicare and non-Medicare expenditures. However, HIPAA does restrict the HHS/OIG’s use of HCFAC funds to the Medicare and Medicaid programs. According to HHS/OIG officials, they use HCFAC funds only for audits, evaluations, or investigations related to Medicare and Medicaid. The officials also stated that while some activities may be limited to either Medicare or Medicaid, most activities are generally related to both programs. Because HIPAA does not preclude the HHS/OIG from using HCFAC funds for Medicaid efforts, HHS/OIG officials have stated they do not believe it is necessary or beneficial to account for such expenditures separately. Similarly, DOJ officials told us that it is not practical or beneficial to account separately for non-Medicare expenditures because of the nature of health care fraud cases. HIPAA permits DOJ to use HCFAC funds for health care fraud activities involving other health programs. According to DOJ officials, health care fraud cases usually involve several health care programs, including Medicare and health care programs administered by other federal agencies, such as the Department of Veterans Affairs, the Department of Defense, and the Office of Personnel Management. Consequently, it is difficult to allocate personnel costs and other litigation expenses to specific parties in health care fraud cases. Also, according to DOJ officials, even if Medicare is not a party in a health care fraud case, the case may provide valuable experience in health care fraud matters, allowing auditors, investigators, and attorneys to become more effective in their efforts to combat Medicare fraud. Since there is no requirement to do so, HHS and DOJ continue to assert that they do not plan to identify these expenditures in the future. Nonetheless, attributing HCFAC activity costs to particular programs would provide helpful information for the Congress and other decision makers to use in determining how to allocate federal resources, authorize and modify programs, and evaluate program performance. The Congress also saw value in having this information when it tasked us with reporting expenditures for HCFAC activities not related to Medicare. We believe that there is intrinsic value in having this information. For example, HCFAC managers face decisions involving alternative actions, such as whether to pursue certain cases. 
Making these decisions should include cost awareness, along with other available information, to assess each case’s potential. Further, having more refined data on HCFAC expenditures is an essential element in developing effective performance measures to assess the program’s effectiveness. In the joint HCFAC reports, HHS/OIG reported approximately $14.1 billion of cost savings during fiscal year 2000 and over $16 billion of cost savings during fiscal year 2001 from implementation of its recommendations and other initiatives. We were unable to directly associate these savings with HCFAC and other program expenditures from the trust fund, as required by HIPAA, because HHS and DOJ officials do not track them as such due to the nature of health care anti–fraud and abuse activities. HIPAA does not specifically require HHS and DOJ to attribute savings to HCFAC expenditures. Of the reported cost savings, $2.1 billion in fiscal year 2000 and $3.1 billion in fiscal year 2001 were reported as related to the Medicaid program, which is funded through the general fund of the Treasury, not the Medicare trust fund. Our analysis indicated that the vast majority of HHS/OIG work related to the reported cost savings of $14 billion and $16 billion was performed prior to the creation of the HCFAC program. Based on our review, we found that amounts reported as cost savings were adequately supported. Cost savings represent funds or resources that will be used more efficiently as a result of documented measures taken by the Congress or management in response to HHS/OIG audits, investigations, and inspections. These savings often stem from changes in program design or control procedures implemented to minimize improper use of program funds. Cost savings are annualized amounts that are determined based on Congressional Budget Office estimates over a 5-year period. HHS and DOJ officials have stated that audits, evaluations, and investigations can take several years to complete. Once they have been completed, it can take several more years before recommendations or initiatives are implemented. Likewise, it is not uncommon for litigation activities to span many years before a settlement is reached. According to DOJ and HHS officials, any savings resulting from health care anti–fraud and abuse activities funded by the HCFAC program in fiscal years 2000 and 2001 will likely not be realized until subsequent years. Because the HCFAC program has been in existence for over 4 years, information may now be available for agencies to determine the cost savings associated with expenditures from the trust fund pursuant to HIPAA. Associating specific cost savings with related HCFAC expenditures is an important step in helping the Congress and other decision makers evaluate the effectiveness of the HCFAC program. Our review of fiscal years 2000 and 2001 HCFAC activities found that appropriations, HHS expenditures, and reported cost savings were adequately supported, but we did identify some errors in the recording of collections and expenditures at DOJ. These errors indicate the need to strengthen controls over DOJ’s processing of HCFAC collections and expenditures to ensure that (1) moneys collected from fraudulent acts against the Medicare program are accurately recorded and (2) expenditures for health care antifraud activities are justified and accurately recorded. Effective internal control procedures and management oversight are critical to supporting management’s fiduciary role and its ability to manage the HCFAC program responsibly. 
Further, separately tracking Medicare and non-Medicare expenditures and cost savings and associating them by program could provide valuable information to assist the Congress, management, and others in making difficult programmatic choices. To improve DOJ’s accountability for the HCFAC program collections, we recommend that the Attorney General fully implement plans to make all necessary correcting adjustments for collections transferred to the trust fund in error and ensure that subsequent collection reports submitted to the Department of the Treasury are accurate. To improve DOJ’s accountability for HCFAC program expenditures, we recommend that the Attorney General make correcting adjustments for expenditures improperly charged and reinforce financial management policies and procedures to minimize errors in recording HCFAC transactions. To facilitate providing the Congress and other decision makers with relevant information on program performance and results, we recommend that the Attorney General and the Secretary of HHS assess the feasibility of tracking cost savings and expenditures attributable to HCFAC activities by the various federal programs affected. A draft of this report was provided to HHS and DOJ for their review and comment. In written comments, HHS concurred with our recommendation to assess the feasibility of tracking cost savings and expenditures attributable to HCFAC activities by the various federal programs affected. In its written comments, DOJ agreed with all but one of our recommendations and expressed concern with some of our findings. The following discussion provides highlights of the agencies’ comments and our evaluation. Letters from HHS and DOJ are reprinted in appendixes II and III. DOJ acknowledged the two errors we found in fiscal year 2001 criminal fine amounts and attributed them to a programming problem. As we discussed in the report, DOJ indicated it had already taken action to address our recommendations by correcting the programming error to address future amounts reported for criminal fines. DOJ also stated that an effort is currently under way to research the impact of the programming error and that it plans to determine what, if any, adjustments or offsets are needed to correct amounts previously reported to the Department of the Treasury. DOJ indicated that it had already discovered and fixed the programming error prior to our review. However, as we reported, DOJ was not aware of the errors we identified, nor did it call our attention to the possibility of errors occurring due to this programming problem. In addition, DOJ acknowledged in its comments that errors have occurred in the recording of valid HCFAC expenditure transactions and stated that corrections have been made to address our related recommendation. Additionally, DOJ incorrectly interpreted our statement that the problems identified in our review could impede its ability to account for growing HCFAC expenditures. In its comments, DOJ construed this to mean that we concluded that program managers lack timely access to financial reports or supporting transactions. That was not our intent nor the focus of our review. As stated in our report, the problems we encountered indicate that additional emphasis should be placed on DOJ’s financial management policies and procedures to minimize errors in recording HCFAC transactions. 
DOJ did state that it will continue its standing practice of continually educating its staff and reinforcing its financial management policies and procedures to minimize errors in recording HCFAC and all other transactions within DOJ. However, based on our findings, this standing practice needs modification in order to bolster its effectiveness. DOJ also stated that our reference to the findings for departmental systems as cited in the Audit Report: U.S. Department of Justice Annual Financial Statement Fiscal Year 2001, Report No. 02-06, was inapplicable. To address DOJ’s concerns, we clarified the report to cite problems that its financial statement auditors found at entities within DOJ that process HCFAC transactions. Finally, regarding our recommendation to both HHS and DOJ to assess the feasibility of tracking cost savings and expenditures attributable to HCFAC activities by the various federal programs affected, HHS/OIG stated in its written comments that it had previously considered alternatives that would allow it to track and attribute cost savings and expenditures but had identified obstacles to doing so. At the same time, HHS/OIG agreed with our recommendation to perform an assessment of tracking cost savings and expenditures by program, which is critical to developing effective performance measures. However, DOJ stated that it is neither practical nor beneficial to track cost savings or non-Medicare expenditures associated with HCFAC enforcement activities. Without capturing such information, the Congress and other decision makers do not have the ability to fully assess the effectiveness of the HCFAC program. Therefore, we continue to believe that, at a minimum, DOJ should study this further, as HHS has agreed to do. We are sending copies of this report to the Secretary of HHS, the Attorney General, and other interested parties. Copies will be made available to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions, please contact me at (202) 512-9508 or by e-mail at calboml@gao.gov or Kay L. Daly, Assistant Director, at (202) 512-9312 or by e-mail at dalykl@gao.gov. Key contributors to this assignment are listed in appendix IV. To accomplish the first objective, identifying and assessing the propriety of amounts reported for deposits in fiscal years 2000 and 2001 as (1) penalties and multiple damages, (2) criminal fines, (3) civil monetary penalties, and (4) gifts and bequests, we did the following: Reviewed the joint HHS and DOJ HCFAC reports for fiscal years 2000 and 2001 to identify amounts deposited to the trust fund. Interviewed personnel at various HHS and DOJ entities to update our understanding of procedures related to collections/deposits. Obtained access to databases and reports from HHS and DOJ for the various collections/deposits as of September 30, 2000, and September 30, 2001. Tested selected transactions to determine whether the proper amounts were deposited to the trust fund. We obtained and recomputed supporting documentation from various sources depending on the type of collection/deposit. We traced amounts reported on the supporting documentation to reports and other records to confirm that proper amounts were appropriately reported. 
To perform these tests, we did the following (a brief sketch of the dollar unit sampling technique follows this passage): Drew dollar unit samples of 60 items from a population of 626 penalties and multiple damages (PMD), totaling $454,615,907, from an electronic database for CMS PMDs and from the FMIS Debt Management Transfer of Funds from the U.S. Department of Justice Via OPAC Report for DOJ PMDs for fiscal year 2001, and 60 items from a population of 479 penalties and multiple damages, totaling $147,268,092, from an electronic database for CMS PMDs and from the FMIS Debt Management Detail Report for DOJ PMDs for fiscal year 2000. Drew dollar unit samples of 58 items from a population of 179 criminal fines, totaling $2,894,234, from the Criminal Fines Report for fiscal year 2001, and 58 items from a population of 178 criminal fines, totaling $57,209,390, from the Criminal Fines Report for fiscal year 2000. Drew dollar unit samples of 29 items from a population of 2,381 civil monetary penalties, totaling $6,060,481, from an electronic database for fiscal year 2001, and 57 items from a population of 1,221 civil monetary penalties, totaling $5,220,177, from an electronic database for fiscal year 2000. Reviewed the entire population of four gifts and bequests, totaling $5,501, for fiscal year 2001. We obtained and analyzed supporting documentation, including the letters and checks retained at CMS. There were no gifts and bequests reported for fiscal year 2000; therefore, none were tested. To accomplish our second objective, identifying and assessing the propriety of amounts reported in fiscal years 2000 and 2001 as appropriations from the trust fund for HCFAC activities, we did the following: Obtained the funding decision memorandum and reallocation documents to verify the HCFAC funds certified by HHS and DOJ officials. Analyzed the reasons for requesting HCFAC funds to determine that amounts appropriated from the trust fund met the purposes stated in HIPAA to, among other things, coordinate federal, state, and local law enforcement efforts; conduct investigations, audits, and studies related to health care; and provide guidance to the health care industry regarding fraudulent practices. Compared allocation amounts reported in the joint HCFAC reports to the approved funding decision memorandum and reallocation documents to verify the accuracy of amounts reported. To accomplish our third objective, identifying and assessing the propriety of amounts for HCFAC expenditures at DOJ for fiscal years 2000 and 2001, we obtained DOJ’s internal financial report, the Expenditure and Allotment Report, EA101, which detailed total expenditure data for each component by subobject class for fiscal year 2000 and fiscal year 2001. To test these expenditures, we further requested that DOJ provide us with a complete detailed population of transactions to support the summary totals on the internal financial report. Because the data were not provided to us on time, nor were they fully reconciled, we could not statistically select a sample and project the results to the population as a whole. We modified our methodology and nonstatistically selected 19 transactions, totaling $2,695,211 in fiscal year 2000, and 38 transactions, totaling $1,362,579 in fiscal year 2001, from DOJ, focusing on large dollar amounts, unusual items, and other transactions that would enhance our understanding of the expenditure process. 
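Dollar unit sampling selects transactions with probability proportional to their dollar value, so high-dollar items are more likely to be examined, and an item larger than the sampling interval is effectively certain to be drawn. The report does not describe the selection mechanics GAO used, but a common implementation is systematic selection over cumulative dollars; the Python sketch below, using hypothetical amounts, illustrates the idea.

    import random

    def dollar_unit_sample(amounts, n, seed=None):
        # Systematic PPS selection: choose n target dollars spaced one sampling
        # interval apart, then pick the item containing each target dollar.
        rng = random.Random(seed)
        interval = sum(amounts) / n                # sampling interval
        start = rng.uniform(0, interval)           # random start in first interval
        targets = [start + i * interval for i in range(n)]
        selected, cumulative, idx = [], 0.0, 0
        for t in targets:
            while cumulative + amounts[idx] < t:   # advance to the covering item
                cumulative += amounts[idx]
                idx += 1
            if not selected or selected[-1] != idx:
                selected.append(idx)               # an item spanning several intervals counts once
        return selected

    # Hypothetical population of fine amounts; larger fines are likelier to be drawn.
    population = [8_693, 50_007, 1_200, 300, 75_000, 4_400, 900, 12_345]
    print(dollar_unit_sample(population, n=3, seed=1))

Because selection probability tracks dollar value, the resulting sample supports dollar-weighted error projection of the kind used to estimate the most likely overstatement discussed earlier, although the specific estimator behind the $169,765 figure is not given in the report.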
To determine whether these transactions were properly classified as HCFAC transactions, we interviewed DOJ officials to obtain an understanding of the source and processing of transactions and reviewed, analyzed, and recomputed supporting documentation, such as purchase orders, invoices, and receipts, to determine the propriety of the expenditures. We performed analytical procedures and tested DOJ payroll at the largest component, the EOUSA offices. To assess the reasonableness of payroll expenses, we performed a high-level analytical review. To enhance our understanding of how personnel record their work activity in the Monthly Resource Summary System, we nonstatistically selected 20 individuals from 10 districts for fiscal years 2000 and 2001. We interviewed these individuals on their method for charging time to the HCFAC program for fiscal years 2000 and 2001 and to verify whether time charged to the Monthly Resource Summary System was accurate. In the interviews, employees were asked whether the time that was recorded in the system was accurate and how and where they received guidance on charging of time. To accomplish our fourth objective, identifying and assessing the propriety of amounts for HCFAC expenditures at HHS for fiscal years 2000 and 2001, we obtained internal reports generated from the agency’s accounting system to identify HCFAC expenditure amounts, obtained detailed records to support HHS payroll and nonpayroll expenditures, and tested selected payroll and nonpayroll transactions to determine whether they were accurately reported. To evaluate payroll charges to the HCFAC appropriation by HHS/OIG employees during fiscal years 2000 and 2001, we performed analytical procedures. We analyzed the methodology used by the HHS/OIG to verify that expenditures were within the predetermined allocation percentages for HCFAC and non-HCFAC expenditures. We also reviewed 10 HHS/OIG employees’ time charges for fiscal years 2000 and 2001. The selected employees were interviewed regarding their time charges for fiscal years 2000 and 2001. In the interviews, employees were asked to verify the time that was recorded by the department’s management information systems or timecards. We also inquired as to how and where employees received guidance on charging their time and whether they understood the various funding sources used to support OIG activities. We verified that the pay rate listed on the employees’ Standard Form 50 Notification of Personnel Action was the same as the amount charged to the Department of Health and Human Services Regional Core Accounting System Data Flowback Name List (CORE - Central Accounting System). We verified that the summary hours as recorded in the U.S. Department of Health & Human Services Employee Data Report (TAIMS - Time and Attendance application) traced to the management information system or time and attendance records. We interviewed the employees to verify that the time charged to the management information system or time and attendance records was accurate. We drew dollar unit samples of 44 items from a population of 36,380 nonpayroll expenditures, totaling $34,156,369, from HHS’s internal accounting records for fiscal year 2001, and 39 items from a population of 27,884 nonpayroll expenditures, totaling $32,914,328, for fiscal year 2000. To assess the propriety of these transactions, we obtained supporting documentation such as invoices, purchase orders, and receipts. We recomputed the documentation as appropriate to the transaction. 
We were unable to accomplish our fifth objective, to identify and assess the propriety of amounts reported as fiscal years 2000 and 2001 expenditures for non-Medicare anti-fraud and abuse activities, because HHS/OIG and DOJ do not separately account for or monitor such expenditures. Even though HIPAA requires that we report on expenditures related to non-Medicare activities, it does not specifically require HHS or DOJ to separately track Medicare and non-Medicare expenditures.

To accomplish our sixth objective, to identify and assess the propriety of amounts reported as savings to the trust fund, we obtained the fiscal years 2000 and 2001 HHS/OIG semiannual reports to identify cost savings as reported in the joint reports and tested selected cost-saving transactions to determine whether the amounts were substantiated. We were unable to attribute the reported cost savings to HCFAC expenditures or to identify any other savings from the trust fund because, according to DOJ and HHS officials, any savings resulting from health care anti-fraud and abuse activities funded by the HCFAC program in fiscal years 2000 and 2001 will likely not be realized until subsequent years.

We interviewed and obtained documentation from officials at CMS in Baltimore, Maryland; HHS headquarters (AOA, ASBTF, OIG, and the OGC) in Washington, D.C.; HHS's Program Support Center in Rockville, Maryland; and DOJ's Justice Management Division, EOUSA, Criminal Division, Civil Division, and Civil Rights Division in Washington, D.C. We conducted our work in two phases, from April 2001 through June 2001, focusing primarily on fiscal year 2000 HCFAC activity, and from October 2001 through April 2002, focusing primarily on fiscal year 2001 HCFAC activity, in accordance with generally accepted government auditing standards. We requested comments on a draft of this report from the Secretary of HHS and the Attorney General. We received written comments from the Inspector General of HHS and the Acting Assistant Attorney General for Administration at DOJ. We have reprinted their responses in appendixes II and III, respectively.

Ronald Bergman, Sharon Byrd, Lisa Crye, Jacquelyn Hamilton, Corinne Robertson, Gina Ross, Sabrina Springfield, and Shawnda Wilson made key contributions to this report.

Civil Fines and Penalties Debt: Review of OSM's Management and Collection Processes. GAO-02-211. Washington, D.C.: December 31, 2001.
Criminal Debt: Oversight and Actions Needed to Address Deficiencies in Collection Processes. GAO-01-664. Washington, D.C.: July 16, 2001.
| The Medicare program is the nation's largest health insurer with almost 40 million beneficiaries and outlays of over $219 billion annually. Because of the susceptibility of the program to fraud and abuse, Congress enacted the Health Care Fraud and Abuse Control (HCFAC) Program as part of the Health Insurance Portability and Accountability Act (HIPAA) of 1996. HCFAC, which is administered by the Department of Health and Human Services' (HHS) Office of Inspector General (OIG) and the Department of Justice (DOJ), established a national framework to coordinate federal, state, and local law enforcement efforts to detect, prevent, and prosecute health care fraud and abuse in the public and private sectors. HIPAA requires HHS and DOJ to issue a joint annual report to Congress no later than January 1 of each year for the preceding fiscal year. The joint HCFAC reports included deposits of $210 million for fiscal year 2000 and $464 million for fiscal year 2001, pursuant to the act. In testing at DOJ, GAO found errors in the recording of criminal fines deposits to the Federal Hospital Insurance Trust Fund in fiscal year 2001 that resulted in an estimated overstatement to the trust fund of $169,765. GAO found that the planned use of HCFAC appropriations was in keeping with the stated purpose in the act. Although GAO found expenditures from the trust fund were generally appropriate at HHS, at DOJ GAO found $480,000 in interest penalties not related to HCFAC activities that were charged to the HCFAC appropriation. GAO was unable to identify expenditures from the HCFAC trust fund for activities unrelated to Medicare because the HHS/OIG and DOJ do not separately account for or monitor these activities. Likewise, GAO was unable to identify savings specifically attributable to activities funded by the HCFAC program.
AMES, Iowa — The simmering rivalry between Tim Pawlenty and Michele Bachmann erupted at Thursday night’s Republican primary debate here, transforming Iowa’s first 2012 forum into a full-blown slugfest.
The Minnesota duo have been in a low-grade tug of war for months over the affections of Iowa conservatives. With a crucial test looming for both at the Ames Straw Poll this Saturday, the Pawlenty-Bachmann rivalry turned so intense that it threatened to crowd out the other candidates completely.
The charges were familiar: Pawlenty once again called Bachmann’s accomplishments “nonexistent.” Bachmann wielded well-worn attacks on Pawlenty's tenure as governor.
But this was their most ferocious exchange to date — with more than a hint of desperation visible for both.
“She speaks of leading these [conservative] efforts in Washington and Minnesota,” Pawlenty lashed out. “Leading and failing is not the objective.”
Bachmann assailed Pawlenty with a litany of alleged deviations from conservative orthodoxy, blasting: “When you were governor, you implemented cap-and-trade in our state and you praised the unconstitutional individual mandate.
“You said the era of small government is over. That sounds a lot more like Barack Obama if you ask me,” Bachmann said.
The caustic exchanges were no accident: A defeat on Saturday could snap Bachmann’s momentum in the race, or seal Pawlenty’s fate as a 2012 also-ran.
For the race beyond the straw poll, however, neither the candidates nor the moderators did much to draw blood from national front-runner Mitt Romney, who sauntered unscathed through his second consecutive debate.
The moderators from Fox News and the Washington Examiner lobbed potentially difficult questions at Romney, asking him to defend his record on taxes in Massachusetts and his near-absence from the recent debate over whether to raise the federal debt ceiling.
On both issues, Romney stuck to narrow talking points, declaring that his support for the conservative Cut, Cap and Balance pledge told voters all they needed to know about his views on the debt ceiling.
Asked about a presentation his administration once gave to Standard & Poor’s, saying that Massachusetts deserved a credit rating upgrade in part because it raised taxes, Romney sidestepped the issue entirely.
Romney was barely challenged on his carefully parsed answers. With Pawlenty and Bachmann focused on each other, and several of the other candidates flailing in their attempts to stand out from the crowd, Romney took little heat from his fellow Republicans.
Indeed, virtually all of the candidates helped confirm — in one form or another — that Romney will likely face a tougher political challenge from a late-announcing candidate like Texas Gov. Rick Perry than from any of his currently declared rivals. |||||
Thursday night’s Republican presidential debate in Ames, Iowa, saw the first face-to-face hostility of the 2012 campaign. After an early round of self-congratulation and obligatory shots at President Obama — “You are finished in 2012 and you will be a one-term president,” Minnesota Rep. Michele Bachmann roared before former Governor Tim Pawlenty promised to cook dinner for anyone who could “find” Obama’s entitlement reform plans — the candidates began to focus their ire on each other in a shared forum for the first time.
Given a second chance to ding Mitt Romney for the similarities between the health reform legislation he championed as governor of Massachusetts and the federal law passed by Democrats in 2010, Pawlenty got in a few whacks on the front-runner, and Bachmann piled on. But that exchange was mild compared to two spats that broke out between the Minnesotans.
Pawlenty tried to draw a sharp distinction between his eight years as governor and Bachmann’s thin record in Congress, criticizing her for failing to stop assorted Democratic evils. “Her record of accomplishment and results is nonexistent,” he said. “That’s not going to be good enough.” He even went as far as to call her a liar: “She’s got a record of misstating and making false statements.”
Bachmann gave as good as she got, presenting a concise invoice of Pawlenty’s transgressions against conservative orthodoxy: establishing cap-and-trade, praising an individual health insurance mandate and so on. “You said the era of small government was over. That sounds a lot more like Barack Obama if you ask me,” she said to approving applause.
Pawlenty responded by once again pointing out the obvious: that Bachmann’s opposition to any number of Democratic initiatives didn’t actually reverse them. “If that’s your record of results, please stop, because you’re killing us,” he said. “Leading and failing is not the objective.” In a later exchange, Bachmann accused Pawlenty of strong-arming her into supporting a Minnesota cigarette tax by tying it to an abortion measure.
Beyond that — and a few prickly back-and-forths between former House Speaker Newt Gingrich and Fox News moderator Chris Wallace — Thursday's debate was fairly tame. If you watched the last one, you wouldn't be surprised to learn that Texas Rep. Ron Paul (R. – El Dorado) answered most questions by patiently explaining how returning to the gold standard would solve America's ills; Gingrich packed more Ronald Reagan references into each sentence than 30 Rock's Jack Donaghy (he made a Six Sigma reference too!); Rick Santorum repeatedly mentioned that he wasn't getting enough airtime; and pizza magnate Herman Cain mellifluously told the audience how great it was that he's never been a politician. Jon Huntsman, Obama's former ambassador to China and a new addition to the dais, added very little. He was eminently staid, moderate and forgettable. None of those three qualities bodes well for his campaign.
It was appropriate then that this debate saw a conflict between Pawlenty and Bachmann, but little more. The Ames straw poll on Saturday will be a crucial test for both those candidates, and the former likely needs a breakout performance to keep his White House prospects from fading altogether. But the broader contours of the race remain unchanged. Romney made no glaring errors and even managed to inspire a few rounds of applause. As frontrunner, the status quo is his best friend. If the Republican presidential contest is due for some radical shakeup, it will have to wait at least another day. On Saturday, when Rick Perry enters the race and Pawlenty, Bachmann et al. face each other at Ames, we'll learn a lot more about the 2012 presidential election than we did from Thursday's display. ||||| The Texas governor wasn't on hand for Thursday's GOP debate. But the weak showing made it clear: the field is wide open. By Michael Tomasky
Hard to say that there was a winner at the GOP’s Iowa debate Thursday night. It was more like Tee-Ball, where everybody gets a trophy, win or lose. First of all, the panel did ask genuine, and genuinely difficult, questions. Byron York of the Washington Examiner forced the night’s most telling moment, when all the candidates (or at least all those asked) said they wouldn’t back a cuts-to-revenue deal even at a 10-to-1 ratio. That’s all you need to know to understand that no matter what they say on other questions, they would all run governments that would rack up massive deficits and force massive expenditures onto old people, and in fact pretty much the rest of us, too.
It was a pretty good show though. Michele Bachmann. I’m sorry, but those eyes are crazy. Something effulges through them from within that most of us don’t have, and it’s not something I saw in my mother’s eyes, let’s just put it that way. And what was that bathroom break, or whatever it was? Rick Santorum always sounds to me in these things like the most textbook conservative Republican on both domestic and foreign policy, but he never seems to gain any ground. He’s like a set of positions attached to the least compelling human being you can imagine. Jon Huntsman was honorable in his bizarre way, but absurd. Tim Pawlenty was other-than-honorable and close to absurd. Mitt Romney just disappeared for 15-minute stretches at a time, but did himself no harm.
Some other people were up there. Oh, Newt Gingrich: If he were in double digits, his throwdowns at Chris Wallace would’ve looked strong, but as it was they looked annoying. And Herman Cain. Yikes. And Ron Paul. He had loads of homers there. But he went way too far on foreign policy for that crowd. I guess the winner was Rick Perry.
||||| Republican presidential candidates participated in the Iowa GOP/Fox News Debate at the CY Stephens Auditorium in Ames, Iowa, Thursday. From left: former Sen. Rick Santorum (R-PA); businessman Herman Cain; Rep. Ron Paul (R-TX); former Massachusetts Gov. Mitt Romney; Rep. Michele Bachmann (R-MN); former Minnesota Gov. Tim Pawlenty; former Utah Gov. Jon Huntsman; and former House Speaker Newt Gingrich. (Charlie Neibergall / AP)
The front-runner for a major party's presidential nomination is always happiest when his intraparty rivals turn their attacks on each other instead of him.
So by that measure, Mitt Romney had to be very pleased indeed because he was left largely unmolested by the seven other Republican candidates contending for the party's presidential nomination at the debate at Iowa State University Thursday evening.
Instead of directing their attacks at Romney, the other candidates on the stage went after each other, with the two Minnesotans, Rep. Michele Bachmann and the state's former governor, Tim Pawlenty, providing much of the night's fireworks.
And when the two Minnesotans weren't cudgeling each other, Rick Santorum, the former U.S. senator from Pennsylvania, was going after Rep. Ron Paul of Texas and Bachmann.
And in some of the most baffling moments of the night, Newt Gingrich, the former House speaker, attacked the media.
For Bachmann and Pawlenty, their fight was mostly about getting the biggest bounce heading out of the debate and into Saturday's Ames Straw Poll.
Hailing from a neighboring state as they both do, they are counting on strong showings in the straw poll over the weekend to sustain their candidacies through the next phase of the campaign.
The importance of such a showing for them only increases in light of the fact that Texas Gov. Rick Perry is about to enter the race Saturday and a recent poll had him virtually tied with Romney.
So the two attacked each other's records repeatedly in some of the most heated political debate moments in recent history. Pawlenty dismissed Bachmann's claims to leadership in the fight against the new health care law and the economic stimulus, saying she had accomplished little to nothing.
"If that's your view of effective leadership with results, please stop. You're killing us."
Pawlenty also got in a haymaker, noting Bachmann's "record of misstatements," a tendency that has been documented by PolitiFact.com, among others.
Bachmann, for her part, ticked off several pieces of Pawlenty's record from his time as governor — that he enacted a cap and trade regime in Minnesota and backed an individual health insurance mandate. She said:
"That sounds a lot like Barack Obama to me."
Meanwhile, Santorum and Paul clashed over foreign policy with the former Pennsylvania senator saying that the U.S. had been in hostilities with Iran since 1979. Paul corrected him, saying the bad blood actually began in 1953 when the CIA engineered the overthrow of Iran's elected leader. Then Paul argued to a seemingly incredulous Santorum that the nuclear bombs of the Soviet Union were a much bigger problem for the U.S. than Iran's getting a bomb would be.
This was all good for Romney since so long as his opponents tried to score points against one another during the two-hour debate, they weren't taking direct aim at him.
The one time a rival was invited to go after the frontrunner, it was Pawlenty who was urged by Fox News Sunday host Chris Wallace to take another whack at Romney over the Massachusetts health care legislation he signed into law as governor.
That law contained an individual mandate requiring most people to have health insurance and was the model for the federal law signed by President Obama. Pawlenty was widely seen as backing away from a fight at an earlier debate after he failed to follow up on a pre-debate attack on what he called Obamneycare:
"I don't want to miss that chance again, Chris. Look, Obamacare was patterned after Mitt's plan in Massachusetts. And for Mitt or anyone else to say there aren't substantial similarities or they are not essentially the same plan — it just isn't credible. So that's why I called it Obamneycare, and I think that's a fair label. I'm happy to call it that again tonight."
Romney parried by repeating his defense that he viewed the Massachusetts law as a solution for his state but not necessarily for others.
Romney, who had already made a memorable comment earlier Thursday with his "corporations are people" remark at the Iowa State Fair, actually delivered another one at the debate.
Asked about the debt-ceiling deal and why he refused to support it, he said:
"I'm not going to eat Barack Obama's dog food. What he served up is not what I would have done if I'm president of the United States."
Romney's comment reflected his message discipline, which was to go after Obama relentlessly and leave his rivals for the nomination to fight with each other.
Or with the media. At one point, Gingrich took to task the journalists posing queries to the candidates for asking "gotcha" questions.
Responding to a question from Wallace about the disarray in his campaign, Gingrich referred to an instruction by moderator and Fox News anchor Bret Baier to the candidates to shelve their canned stump speeches.
"I took seriously Bret's injunction to put aside the talking points. And I wish you would put aside the gotcha questions."
Romney wasn't the only debate winner. Obama was too since, for the most part, any Republican attacks on him were quickly obscured by the fog of war that rose from the GOP candidates' attacks on each other.
Meanwhile, waiting in the wings was Perry, the Texas governor, who didn't attend the debate but just as surely cast a shadow over it, with some of the candidates asked what they made of his johnny-come-lately entry.
All of them responded generously, welcoming him to the race. For Romney, however, Perry's presence is likely to mean the end of his being able to remain above the fray.
With his access to big donors with deep pockets and appeal to the social conservative base of the party, Perry could wind up testing Romney to the limits, setting up a lengthy primary fight resembling what the Democrats experienced in 2008.
The debate's biggest loser had to be Jon Huntsman Jr., a former Utah governor who also served in the Obama administration as ambassador to China.
The Iowa debate was his first, and after a relatively weak performance, as well as reported troubles within his campaign, one question is just how many future debates he will participate in.
Republican voters looking for reasons to support him likely finished the evening still searching. | Last night's GOP debate in Iowa was a testy affair, but Mitt Romney managed to stay above the fray and his front-runner status still isn't in doubt—at least until Rick Perry enters the race, pundits say. "Neither the candidates nor the moderators did much to draw blood" from Romney, who "sauntered unscathed through his second consecutive debate," writes Alexander Burns at Politico. Romney "stuck to narrow talking points" when asked about his taxation record in Massachusetts and the debt ceiling debate, and his answers weren't challenged, Burns notes. Romney "just disappeared for 15-minute stretches at a time, but did himself no harm," while Jon Huntsman was "honorable in his bizarre way, but absurd" and Ron Paul "had plenty of homers, but went way too far on foreign policy," writes Michael Tomasky at the Daily Beast. The real winner, he decides, was the absent Rick Perry. Michele Bachmann and Tim Pawlenty scored a couple of small hits on Romney, but they were pretty mild compared to the fierce hostilities between the two Minnesotans, writes Adam Sorensen at Time. Romney "made no glaring errors and even managed to inspire a few rounds of applause," he writes. Frank James at NPR names another winner: President Obama. "Any Republican attacks on him were quickly obscured by the fog of war that rose from the GOP candidates' attacks on each other," and, in the case of Newt Gingrich, on the media, he writes. |
A California parole board in Coalinga on Wednesday rejected a parole request by the man who assassinated Robert F. Kennedy.
Sirhan Sirhan has spent 42 years behind bars for the assassination in 1968 at the Ambassador Hotel.
This was his 13th parole hearing. The parole board has repeatedly rejected Sirhan's appeals for release for failing to accept responsibility or show remorse for Kennedy's death.
Sirhan's attorney, William F. Pepper, told the Associated Press that his client had no memory of the events and suggested a second gunman was involved in the crime.
Pepper, who is based in New York, gained publicity for his efforts to prove the innocence of James Earl Ray in the assassination of the Rev. Martin Luther King Jr.
Pepper says Ray, who was convicted of killing King two months before Kennedy was slain, was framed by the federal government and that King was killed in a conspiracy involving the FBI, the CIA, the military, the Memphis police and organized crime figures from New Orleans and Memphis.
Ray, who confessed to killing King and then recanted and won the support of King's widow and children, died in 1998.
Sirhan, now 66, shot Kennedy on June 5, 1968, moments after the New York senator had claimed victory in the California presidential primary. Sirhan was convicted and sentenced to death in April 1969. The sentence was commuted to life in prison with the possibility of parole when the death penalty was outlawed in California in 1972 before being re-instituted. ||||| Coalinga, California (CNN) -- A California state panel on Wednesday denied parole for Sirhan B. Sirhan, saying the convicted assassin of Robert F. Kennedy hasn't demonstrated an understanding of the "magnitude" of his crimes.
Commissioner Mike Prizmich of the California Board of Parole Hearings told Sirhan that he failed to meet the state's criteria for suitability for parole by blaming others for his problems, behaving immaturely and not seeking enough self-help programs.
In response, Sirhan sought to interrupt Prizmich, who admonished the inmate. Prizmich, however, said Sirhan would be eligible for parole again in five years.
"At this hearing, you're interrupting me time and time again, demonstrating a lack of control and impulsivity," Prizmich told Sirhan.
Sirhan made his first appearance before a California parole board since 2000, supported by two psychologists' reports saying he no longer poses a threat to society, his attorney said.
Wednesday marked Sirhan's 14th parole hearing, held in the Pleasant Valley State Prison in Coalinga, California, which is 200 miles northwest of downtown Los Angeles.
Members of the Kennedy family and their representatives didn't attend the meeting, nor did they return messages or e-mails seeking a comment prior to the hearing.
Sirhan was convicted of killing Kennedy and wounding five other people in the shooting in the kitchen pantry of the Ambassador Hotel in Los Angeles in 1968. The hotel was later razed and a public school now occupies the site.
In Wednesday's hearing, Prizmich did say that Sirhan's record -- clean of any significant discipline problems -- was encouraging.
Sirhan's overall demeanor, prior to the rejection of his parole request, "does give some evidence and hope you're improving," Prizmich said.
But, Prizmich added, "you have failed in some areas."
Dressed in a prison denim jacket and blue shirt, Sirhan appeared nervous as he entered the hearing room, and he told the panel his breathing was labored because he had been fighting valley fever.
His hair now graying and balding after 43 years in prison, a clean-shaven Sirhan spoke for much of the four-hour hearing, answering questions from the parole board about the crime and what he has done to improve himself.
"It's a horrible nightmare not just for me but for you and the whole country," Sirhan said of the Kennedy assassination and his other convictions for wounding five persons.
On occasion, Sirhan flashed a gap-toothed smile, but as Prizmich announced the parole denial, Sirhan bit his tongue.
Prizmich said that Sirhan needed to reflect more deeply on the 12-step program of Alcoholics Anonymous, which he attended in prison from the mid 1980s to early 1990s. Sirhan said he drank four Tom Collins highballs prior to the Kennedy shooting.
Prizmich also told Sirhan to read books and demonstrate improvement.
"I see," Sirhan said, and he whispered to his attorney, Pepper.
"No, you're not," Prizmich replied. "You're talking to your attorney and smiling."
At another point, Prizmich told Sirhan: "What we didn't see today is an understanding of the magnitude of this loss."
The parole board was also disturbed by how Sirhan described his wounding of five other people in the 1968 shooting as "flesh wounds." In fact, the injuries were more serious, Prizmich said.
"Your lack of insight into this crime is one of great concern to us," Prizmich told Sirhan.
Prizmich urged Sirhan to enter a program for anger management and expressed dismay at his "vaguely made references to conspiracies."
"Conspiracies that law enforcement or the CIA set you up or the district attorney was part of this ... it seems as though everything negative that occurred to you was someone else's fault," Prizmich told Sirhan.
"It does indicate you need more work," Prizmich continued.
Sirhan sought to interrupt Prizmich several times, leading the parole board commissioner to state: "Sir, you're not going to interrupt again."
Prizmich told Sirhan that while he received two "positive" psychological reviews, he had problems with "acting out in an immature and impulsive way," Prizmich said.
After Prizmich and deputy commissioner Randy Kevorkian outlined self-help programs for Sirhan, Prizmich added: "I hope I'm giving you some positive encouragement."
Prizmich also said "I've noticed your improvement. I'm glad you're now cooperating with psychologists. But there were years of not participating in any of that."
Sirhan's attorney, William Pepper, expressed "disappointment" with the parole board's decision and said Sirhan will appeal the matter to the courts.
"Whenever you see a system of justice as we saw this afternoon, one has to be very chagrined," Pepper said.
Pepper said Sirhan has shown remorse and has even avoided inmates' provocations, especially after the September 11 attacks, when other prisoners mistakenly accused Sirhan, a Palestinian Christian, of being Muslim and a terrorist.
The parole board "ignored every thing we had to say, and they went on the emotional kick of a loss of a presidential candidate," Pepper said. "The magnitude of the crime has nothing to do with his suitability of being released from prison after 43 years."
In what attorneys on both sides called an extraordinary appearance, one of the surviving shooting victims in the 1968 assassination, retired TV journalist William Weisel, attended the parole hearing and told the parole panel that he wouldn't object to Sirhan's release if the board okayed it.
Weisel based his statement on the fact that a state psychologist and a private psychologist hired by Sirhan's attorney both agreed that Sirhan wouldn't pose a threat to society if he were paroled.
But after the state panel rejected Sirhan's parole request, Weisel said he wasn't surprised.
"He was argumentative," Weisel said of Sirhan's occasional behavior when the rejection was announced. "He spoke when other people were speaking."
Now 73, Weisel, of Healdsburg, California, was an ABC News associate director at the time of the shooting.
Weisel, who shared with CNN his prepared statement to the parole board, said he was hit by a stray bullet in the abdomen "on that terrible evening" a quarter past midnight on June 5, 1968, after Kennedy had just won the California primary in his bid for the Democratic presidential nomination.
Wednesday's hearing marked the first time that Weisel ever saw Sirhan face-to-face. Weisel said he never saw Sirhan during the 1968 shooting in the Los Angeles hotel where Kennedy was celebrating his California victory.
"I would not be telling the truth if it wasn't something of a shock to see him in person," Weisel told reporters after the parole hearing.
In closing statements prior to the board's ruling, Los Angeles County Deputy District Attorney David Dahle urged the panel to deny parole.
"Certainly no other prisoner's crime presents as dark a moment in American history, in California's history, in Los Angeles County's history as the killing of a presidential candidate following his primary win," Dahle told the board.
Dahle declined to comment on Weisel's statement, but he said Weisel's appearance marked the first time during Sirhan's imprisonment that a surviving witness voiced no objection to his possible parole -- at least since 1970.
Prior to 1970, "there's no record of the proceedings and I don't know if anyone showed up," Dahle said.
"It's fairly unusual. It's not common," Dahle added with respect to victims attending a parole hearing and not objecting to the prisoner being released. "We don't get many, at least in cases in Los Angeles County -- where we get victims or victims' next of kin coming to cases. It's an expensive proposition."
Pepper, an international human rights attorney and a barrister with offices in New York and London, said he and Sirhan were "very grateful" for Weisel's statement.
Sirhan was convicted of first-degree murder and five counts of assault with attempt to commit murder.
Four of the five shooting victims are still alive, including Weisel. The others are Paul Schrade, a Kennedy family friend and former UAW union regional leader; Ira Goldstein, a former radio journalist; and Elizabeth Y. Evans, a friend of the late Pierre Salinger.
A Palestinian Christian who was born in Jerusalem and whose parents brought him and his siblings to America in the 1950s, Sirhan killed Kennedy because of statements the New York senator made about the United States sending fighter jets to aid Israel, prosecutors argued during Sirhan's 1969 trial.
In 1968, Sen. Kennedy, who was a younger brother of assassinated President John F. Kennedy, in whose administration he also served as attorney general, was a leading contender for the Democratic presidential nomination, competing against Vice President Hubert Humphrey and Sen. Eugene McCarthy. Kennedy was shot only minutes after a hotel ballroom speech televised live to American households, in which he claimed victory over McCarthy in the California primary.
The shooting, in the hotel's kitchen pantry, was not captured by any cameras.
Sirhan was the only person arrested in the shooting.
Sirhan has Jordanian citizenship, but never became a U.S. citizen, so if the parole board were to release him, he would be deemed an illegal immigrant and deported to Jordan, where he has extended family, his attorney said.
Sirhan's younger brother, Munir, 63, continues to live in the southern California community where the Sirhan family siblings were raised, Munir Sirhan said.
Sirhan Sirhan "has maintained a good relationship with his brother and he would love to live with this brother in Pasadena, but that's very unlikely because of his immigration status," Pepper said.
Daniel Brown, an associate clinical professor in psychology at Harvard Medical School, submitted a statement to the parole board after interviewing Sirhan for 60 hours over a three-year period, Pepper said in an interview prior to the hearing.
"The report is part of a sealed file, but I can say that Sirhan does not have any violent tendency that should be regarded as a threat to the community," Pepper told CNN before the hearing.
Brown's report "confirms Sirhan's legitimacy of the loss of his memory," including in the pantry during the shooting and in moments of his life in the year prior to the Kennedy slaying, Pepper said.
"Sirhan has at various times taken responsibility (for the Robert Kennedy assassination), but the actual fact is that he doesn't remember what happened in the pantry at all. But because everyone around there told him he did it and he had a pistol and he did fire that pistol, he came to believe that he was actually guilty," Pepper said.
Sirhan shows no sign of mental illness and has demonstrated remorse for the shootings, Pepper said.
"He's said no day of his life goes by where he doesn't have remorse and deep regret that this took place and the role he played in this thing," Pepper said. "He's not schizophrenic or psychotic, and he has not shown any history of violence during incarceration."
Pepper said he became Sirhan's pro bono attorney in the fall of 2007 after he learned of the results of an audio analysis conducted on a sound track of the Kennedy shooting. The audio recording, made 40 feet away from the crime scene by free-lance newspaper reporter Stanislaw Pruszynski, is the only known recording of the gunshots in that June 1968 assassination.
Pepper said he believes the Pruszynski recording is evidence showing that there was a second gun firing in addition to Sirhan's Iver-Johnson handgun. The tape was uncovered in 2004 by CNN's Brad Johnson, who had the recording independently examined by two audio analysts, Spence Whitehead in Atlanta, Georgia, and Philip Van Praag in Tucson, Arizona. Johnson reported on their separate findings for CNN's Backstory in June 2009.
But the parole board didn't hear arguments on the second-gun evidence. Rather, the parole panel focused on Sirhan's suitability for parole.
The Pruszynski recording "clearly showed that 13 shots were fired in the pantry, and Sirhan's gun had only eight shots, so it definitely means there was a second shooter," Pepper said in an interview before the hearing.
But Weisel, joined by authorities who have dismissed the second-gun assertion, said he was convinced that Sirhan was a lone gunman.
"I've seen so many theories after 43 years. Please -- I think you can have a conspiracy in a dictatorship and some countries, but I don't think so in a democracy or our country where there is freedom of speech," Weisel told CNN in an interview before the hearing.
However, another shooting victim sees it differently than Weisel: Schrade. He is a Kennedy friend who was shot in the forehead while standing immediately behind Robert Kennedy in the pantry.
In 2008, Schrade, now 86, told CNN that he believes evidence clearly shows Sirhan was not the only person who fired shots in that assassination. "We have proof that the second shooter was behind us and off to our right. Sirhan was off to the left and in front of us," Schrade told CNN anchor Adrian Finighan.
Schrade declined to comment to CNN this week about Wednesday's scheduled parole hearing for Sirhan.
Pepper said he was chairman of Kennedy's citizens campaign in Westchester County, New York, during his successful 1964 bid for the U.S. Senate, and Pepper's duties included taking Kennedy's sisters and mother to political events. He said he was also a volunteer in the successful 1960 presidential campaign of Kennedy's brother, John Kennedy.
"I knew Bob Kennedy, and I came on this case reluctantly," Pepper said, explaining he became convinced a second gunman was involved.
In 1999, Pepper represented the Rev. Dr. Martin Luther King's family in a wrongful death lawsuit concerning King's April 4, 1968, murder and successfully persuaded a Memphis, Tennessee, jury to find Lloyd Jowers responsible as an accomplice in the King assassination.
Sirhan was initially sentenced to death, but three years later that sentence was commuted by California courts to life imprisonment plus six months to 14 years in prison, to run concurrently. | Robert F. Kennedy's assassin has been denied parole for a 14th time. A California parole board decided that Sirhan Sirhan, who has spent 42 years behind bars for the 1968 shooting, has failed to accept the magnitude of his crime, the Los Angeles Times reports. Board members noted that Sirhan described his wounding of five other people as "flesh wounds" when the injuries were in fact more serious. A parole commissioner told the 66-year-old killer, who was chided several times for interruptions, that he had failed to seek out self-help programs and his behavior was immature, CNN notes. He expressed disappointment as Sirhan "vaguely made references to conspiracies." Sirhan now claims he was brainwashed and doesn't remember killing Kennedy. Click here for that story. |
As early as 1987, we identified the need for FAA to develop criteria for targeting safety inspections to airlines with characteristics that may indicate safety problems and noted that targeting was important because FAA may never have enough resources to inspect all aircraft, facilities, and pilots. FAA employs about 2,500 aviation safety inspectors to oversee about 7,300 scheduled commercial aircraft, more than 11,100 charter aircraft, about 184,400 active general aviation aircraft, about 4,900 repair stations, slightly more than 600 schools for training pilots, almost 200 maintenance schools, and over 665,000 active pilots. Although FAA has taken steps to better target its inspection resources to areas with the greatest safety risks, these efforts are still not complete. SPAS, which FAA began developing in 1991, is intended to analyze data from up to 25 existing databases that contain such information as the results of airline inspections and the number and the nature of aircraft accidents. This system is then expected to produce indicators of an airline’s safety performance, which FAA will use to identify safety-related risks and to establish priorities for FAA’s inspections. FAA completed development and installation of the initial SPAS prototype in 1993. As of April 1996, FAA had installed SPAS in 59 locations but is experiencing some logistical problems in installing SPAS hardware and software. Full deployment of the $32-million SPAS system to all remaining FAA locations nationwide is scheduled to be completed in 1998. In February 1995, we reported that although FAA had done a credible job in analyzing and defining the system’s user requirements, SPAS could potentially misdirect FAA resources away from the higher-risk aviation activities if the quality of its source data is not improved. SPAS program officials have acknowledged that the quality of information in the databases that are linked to SPAS poses a major risk to the system. To improve the quality of data to be used in SPAS analyses, we recommended that FAA develop and implement a comprehensive strategy to improve the quality of all data used in its source databases. FAA concurred with the need for this comprehensive strategy and planned to complete it by the end of 1995. As of April 1996, the strategy drafted by an FAA contractor had not been approved by agency management. Until FAA completes and implements its strategy, the extent and the impact of the problems with the quality of the system’s data will remain unclear. Although we have not determined the full extent of the problems, our recent audit work and recent work by the DOT IG have identified continuing problems with the quality of data entered into various source databases for SPAS. FAA’s Program Tracking and Reporting Subsystem (PTRS), which contains the results of safety inspections, has had continuing problems with the accuracy and consistency of its data. Several FAA inspectors mentioned concerns about the reliability and consistency of data entered into PTRS. According to an inspector who serves on a work group to improve SPAS data inputs, reviews of inspectors’ entries revealed some inaccurate entries and a lack of standardization in the comment section, where inspectors should report any rules, procedures, practices, or regulations that were not followed. He said inspectors continued to comment on things that were not violations while some actual violations went unreported. 
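The report does not describe SPAS's internal design, but the targeting idea it outlines (combining indicators drawn from inspection and accident databases into a ranking that points inspectors toward higher-risk operators) can be sketched roughly as follows. Every field name and weight, and the weighted-sum scoring itself, is a hypothetical illustration, not the actual SPAS algorithm. The sketch also shows why the data-quality problems discussed here matter: a ranking computed from inaccurate inspection records will misdirect inspectors.

```python
from dataclasses import dataclass

@dataclass
class CarrierRecord:
    # Hypothetical indicator fields; SPAS draws on up to 25 source
    # databases, for which these stand in as a simplified few.
    name: str
    inspections: int        # inspections performed
    findings: int           # safety findings from those inspections
    accidents: int          # accidents/incidents on record
    fleet_size: int

def risk_score(rec: CarrierRecord) -> float:
    """Combine indicators into a single score; higher means riskier.
    The weights are illustrative assumptions only."""
    finding_rate = rec.findings / max(rec.inspections, 1)
    accident_rate = rec.accidents / max(rec.fleet_size, 1)
    # Carriers with little inspection history get a coverage penalty,
    # since low coverage means low confidence in the other indicators.
    coverage_gap = 1.0 / (1.0 + rec.inspections)
    return 0.5 * finding_rate + 0.4 * accident_rate + 0.1 * coverage_gap

def prioritize(records):
    """Rank carriers so inspection resources go to the riskiest first."""
    return sorted(records, key=risk_score, reverse=True)

# Example: the lightly inspected carrier with a high finding rate
# rises to the top of the inspection queue.
carriers = [
    CarrierRecord("Carrier A", inspections=120, findings=4, accidents=0, fleet_size=40),
    CarrierRecord("Carrier B", inspections=15, findings=6, accidents=1, fleet_size=12),
]
for rec in prioritize(carriers):
    print(f"{rec.name}: {risk_score(rec):.3f}")
```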
For example, during our ongoing work we recently found a PTRS entry indicating an inspection that never occurred on a type of aircraft that the carrier did not use. The DOT IG also concluded in a November 1995 report that FAA inspectors did not consistently and accurately report their inspection results in PTRS because reporting procedures were not interpreted and applied consistently by FAA inspectors, and management oversight did not identify reporting inconsistencies. The DOT IG recommended that FAA clarify PTRS reporting procedures to ensure consistent and accurate reporting of inspections and to establish controls to ensure supervisors review PTRS reports for reporting inconsistencies and errors. Such problems can jeopardize SPAS's reliability in targeting inspector resources to airlines and aircraft that warrant more intensive oversight than others. Over the last decade, we, the DOT IG, and internal FAA groups have repeatedly identified problems and concerns related to the technical training FAA has provided to its inspectors. For example, both we and the IG have reported that FAA inspectors were inspecting types of aircraft that they had not been trained to inspect or for which their training was not current. In the wake of these findings, FAA has revised its program to train inspectors by (1) developing a process to assess training needs for its inspector workforce, (2) attempting to identify those inspections that require aircraft-specific training and limiting this training to the number of inspectors needed to perform these inspections, and (3) decreasing the requirements for recurrent flight training for some of its inspectors. However, our interviews with 50 inspectors indicate that some inspectors continue to perform inspections for which they are not fully trained, and some inspectors do not believe they are receiving sufficient training. While we cannot determine the extent of these problems from our limited interviews, the training issues reflect persistent concerns on which we and others have reported for many years. For example, we reported in 1989 that airworthiness inspectors received about half of the training planned for them in fiscal year 1988. Furthermore, we reported in 1989 and the DOT IG reported again in 1992 that inspectors who did not have appropriate training or current qualifications were conducting flight checks of pilots. The Director of FAA's Office of Flight Standards Service acknowledged that the adequacy of inspector training remains a major concern of inspectors. Recognizing that some of its employees had received expensive training they did not need to do their jobs while others did not receive essential training, in 1992 FAA developed a centralized process to determine, prioritize, and fund its technical training needs. This centralized process is intended to ensure that funds are first allocated for training that is essential to fulfilling FAA's mission. In accordance with this process, each FAA entity has developed a needs assessment manual tailored to the entity's activities and training needs. For example, the manual for the Flight Standards Service outlines five categories of training. The highest priority is operationally essential training, which is defined as training required to provide the skills needed to carry out FAA's mission.
The other four categories, which are not considered operationally essential, involve training to enhance FAA's ability to respond to changes in workload, to use new technologies, to enhance individual skills, or to provide career development. To identify initial course sequences for new hires and time frames for their completion as well as some continuing development courses that are not aircraft-specific, FAA created profiles for the various types of inspectors. Although each profile notes that additional specialized training may be required according to an inspector's assigned responsibilities and prior experience, the centralized process provides no guidance for analyzing individualized needs. Several inspectors we interviewed who had completed initial training said they were not receiving the specific technical training needed for their assigned responsibilities. The inspectors said that the assessment process does not fully address their advanced training needs and that some inspectors were performing inspections for which they have not received training. For example, one maintenance inspector told us he was responsible for inspecting seven commuter airlines but had never attended maintenance training school for the types of aircraft he inspects. He said that he had requested needed training for 5 years with his supervisor's approval, but his requests were not ranked high enough in the prioritization process to receive funding. Instead, FAA sent the maintenance inspector to training on Boeing 727s and composite materials, which were not related to the aircraft he was responsible for. He said that he did not request these courses and assumed he was sent to fill available training slots. Another maintenance inspector said that although he was trained on modern, computerized Boeing 767s, he was assigned to carriers that fly 727s, 737s, and DC-9s with older mechanical systems. While the Director of the Flight Standards Service said that inspectors could obtain some aircraft-specific training by attending classes given by the airlines they inspect, inspectors with whom we spoke said that supervisors have not allowed them to take courses offered by airlines or manufacturers because their participation could present a potential conflict of interest if the courses were taken for free. Some inspectors we interviewed said that when they could not obtain needed training through FAA, they audited an airline's classes while inspecting its training program. Although the inspectors might acquire some knowledge by auditing an airline's class, they stressed that learning to oversee the repair of complex mechanical and computerized systems and to detect possible safety-related problems requires concentration and hands-on learning, not merely auditing a class. The inspectors said that extensive familiarity with the aircraft and its repair and maintenance enhances their ability to perform thorough inspections and to detect safety-related problems. While technical training is especially important when inspectors assume new responsibilities, other inspectors we interviewed said that they sometimes do not receive this training when needed. For example, although an operations inspector requested Airbus 320 training when a carrier he inspected began using that aircraft, he said that he did not receive the training until 2 years after that carrier went out of business.
Similarly, several inspectors told us that despite their responsibility to approve global positioning system (GPS) receivers, a navigation system increasingly being used in aircraft, they have had no formal training on this equipment. Finally, a maintenance inspector, who was responsible for overseeing air carriers and repair stations that either operate or repair Boeing 737, 757, 767, and McDonnell Douglas MD-80 aircraft, said that the last course he received on maintenance and electronics was 5 years ago for the 737. Although the other three aircraft have replaced mechanical gauges with more sophisticated computer systems and digital displays, the inspector has not received training in these newer technologies. While acknowledging the desirability of updating training for new responsibilities, the Director of the Flight Standards Service said that prioritizing limited training resources may have defined essential training so narrowly that specialized training cannot always be funded. The Acting Manager of FAA’s Flight Standards National Field Office, which oversees inspector training, told us that to improve training programs for inspectors FAA is also providing training through such alternative methods as computer-based instruction, interactive classes televised via satellite, and computer-based training materials obtained from manufacturers. However, the effectiveness of these initiatives depends on how FAA follows through in promoting and using them. For example, while FAA has developed a computer-based course to provide an overview of GPS, the course is not currently listed in the training catalogue for the FAA Academy. We found that several inspectors who had requested GPS training were unaware of this course. According to the Manager of the Regulatory Standards and Compliance Division of the FAA Academy, their lack of awareness may be because the course is sponsored by a different entity of FAA, the Airway Facilities Service. If this GPS course meets inspectors’ needs, they could be informed of its availability through a special notice and by cross-listing it in FAA’s training catalogue. The extent to which inspectors will use distance learning equipment (e.g., computer-based instruction) and course materials depends in great part on their awareness of existing courses and whether the equipment and software are readily available. Because of resource constraints, FAA has reduced the number of inspections for which aircraft-specific training is considered essential and has limited such training to inspectors who perform those inspections. For example, FAA requires inspectors to have pilot credentials (type ratings by aircraft) when they inspect commercial aircraft pilots during flight. FAA has a formula to determine how many inspectors each district office needs to perform inspections requiring aircraft-specific skills. A district office must perform a minimum number of aircraft-specific inspections each year to justify training for that type of aircraft. Offices that perform fewer than the minimum number of inspections that require specialized skills may borrow a “resource inspector” from FAA headquarters or a regional office. According to the Director of the Flight Standards Service, FAA cannot afford to maintain current pilot credentials for all inspectors so they can conduct pilot inspections. However, inspectors interviewed mentioned problems with using resource inspectors, although we have not determined how pervasive these problems are. 
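As a rough illustration of the staffing rule just described, under which a district office justifies aircraft-specific training only if it performs at least a minimum number of such inspections each year and otherwise borrows a resource inspector, consider the sketch below. The threshold value and names are placeholders; the report does not disclose FAA's actual formula.

```python
def training_decision(annual_type_inspections: int,
                      min_required: int = 12) -> str:
    """Decide whether a district office keeps a locally trained
    inspector for an aircraft type or borrows a resource inspector.

    min_required is a placeholder threshold; the actual FAA formula
    is not given in the report."""
    if annual_type_inspections >= min_required:
        return "fund aircraft-specific training locally"
    return "borrow a resource inspector from headquarters or a region"

# e.g., an office doing only 5 inspections a year on a given type
# would rely on a borrowed resource inspector under this rule.
print(training_decision(5))
```

The problems inspectors reported, described below, arise in the borrowed-inspector branch of this rule.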
Some of the inspectors said that they had difficulties obtaining resource inspectors when needed. Additionally, they said that sometimes resource inspectors are not familiar with the operations and manuals of the airline they are asked to inspect and may therefore miss important safety violations of that airline’s policies or procedures. For example, while one inspector, who had primary responsibility for a carrier that was adding a new type of aircraft, had to terminate the inspection because the airline’s crew was not operating in accordance with the carrier’s operations manual, the resource inspector who accompanied him had not detected this problem because he was unfamiliar with that carrier’s specific procedures. In responding to these concerns, the Director of the Flight Standards Service acknowledged that the resource inspector may need to be paired with an inspector familiar with the airline’s manuals. According to the Director of the Flight Standards Service and the Acting Manager of the Evaluations and Analysis Branch, identifying inspections that require aircraft-specific training and limiting training to those who perform such inspections has reduced the number of inspectors who need expensive aircraft-specific flight training. They said this policy also helps to ensure that inspections requiring a type rating are only conducted by inspectors who hold appropriate, current credentials. As we recommended in 1989, reevaluating the responsibilities of inspectors, identifying the number needed to perform flight checks, and providing them with flight training makes sense in an era of limited resources for technical training. The DOT IG’s ongoing work has found differences of opinion and confusion within FAA about which inspections require aircraft-specific training and type ratings. For example, while the Flight Standards Service training needs assessment manual lists 48 inspection activities for which operations inspectors need aircraft-specific training, during the DOT IG’s ongoing audit the Acting Manager of the Evaluations and Analysis Branch listed only 15 inspection activities requiring current type ratings. Until FAA identifies the specific inspection activities that require aircraft-specific training or type ratings, it will remain unclear whether some inspections are being performed by inspectors without appropriate credentials. The DOT IG’s ongoing study is evaluating this issue in more detail. We and the DOT IG have previously reported that FAA inspectors making pilot flight checks either did not have the credentials (type ratings) or were not current in their aircraft qualifications in accordance with FAA requirements. Being current is important because some inspectors may actually have to fly an aircraft in an emergency situation. In May 1993, FAA decreased the frequency of inspector training and more narrowly defined those inspector activities requiring type ratings. Under FAA’s previous policy, inspectors overseeing air carrier operations received actual flight training (aircraft or simulator flying time) every 6 months to maintain their qualifications to conduct flight checks on pilots. FAA now requires recurrent flight training every 12 months and limits this requirement to those inspectors who might actually have to assume the controls (flight crewmember, safety pilot, or airman certification) in aircraft requiring type ratings. 
Because inspectors who ride in the jump seat would not be in a position to assume control of an aircraft, they no longer need to remain current in their type ratings, whereas inspectors of smaller general aviation aircraft, who might actually have to assume the controls, are required to receive flight training. However, this annual requirement for general aviation inspectors has been changed to every 24 months. Inspectors we interviewed opposed the change requiring less frequent flight training. An operations inspector for general aviation aircraft believed training every 2 years was inadequate for inspectors who have to be at the controls every time they conduct a check ride. Another inspector, who is type rated in an advanced transport category aircraft, said he has not received any aircraft flying time and only half the simulator time he needs. According to the Acting Manager of the Evaluations and Analysis Branch, the decision to reduce the requirements for flight training was driven by budget constraints, and FAA has not studied the potential or actual impact of this reduction. Consequently, it is unknown whether the change in inspector flight training frequency is affecting aviation safety. The Director of the Flight Standards Service said that FAA has been placed in a position of having to meet the safety concerns of the aviation industry and the public at a time when air traffic is projected to continue increasing while resources are decreasing. Between fiscal years 1993 and 1996, decreases in FAA's overall budget have significantly reduced the funding available for technical training. FAA's overall training budget has decreased 42 percent, from $147 million to $85 million. FAA has taken a number of steps over the years to make its technical training program more efficient. For example, the prescreening of air traffic controller trainees has improved the percentage of students who successfully complete this training and decreased the number of FAA and contract classes needed. Additionally, in response to our recommendation, FAA has limited expensive flight training to inspectors who require current flight experience. FAA has also realized savings from the increased use of distance learning (e.g., computer-based instruction) and flight simulation in place of more expensive aircraft training time. FAA's reduced funding for technical training has occurred at a time when it has received congressional direction to hire over 230 additional safety inspectors in fiscal year 1996. To achieve this staffing increase, FAA will have to hire about 400 inspectors to overcome attrition. New staff must be provided initial training at the FAA Academy to prepare them to assume their new duties effectively. The cost of this training, combined with overall training budget reductions, constrains FAA's ability to provide its existing inspectors with the training essential to effectively carry out FAA's safety mission. For fiscal year 1996, FAA's training needs assessment process identified a need for $94 million to fund operationally essential technical training. However, due to overall budget reductions, FAA was allocated only $74 million for this purpose. For example, the budget for Regulation and Certification is $5.2 million short of the amount identified for operationally essential training.
Specific effects of this shortfall include delayed training of fourth-quarter inspector new hires until fiscal year 1997; cancellation of 164 flight training, airworthiness, and other classes planned to serve over 1,700 safety inspectors; and delayed recurrent and initial training for test pilots who certify the airworthiness of new aircraft. Based on the fiscal year 1997 request, the gap between FAA’s request and the amount needed to fund operationally essential technical training will be even greater in fiscal year 1997, in part because of training postponed in fiscal year 1996. Regulation and Certification, for example, is projecting an $8.1-million shortfall in operationally essential training. FAA’s Center for Management Development in Palm Coast, Florida, which provides management training in areas such as leadership development, labor-management relations, and facilitator skills, has experienced a 9-percent funding decrease since fiscal year 1993. Although FAA’s overall staffing has decreased from 56,000 in fiscal year 1993 to around 47,600 in fiscal year 1996, this decrease has not been reflected in the center’s costs or level of activity. An FAA contractor study completed in April 1995 showed that co-locating the center with the FAA Academy in Oklahoma City would result in cost savings of a half million dollars or more per year. Specifically, the study estimated that FAA could save between $3.4 million and $6.3 million over the next 10 years by transferring the center functions to the FAA Academy. The study also identified intangibles, such as adverse employment impacts in the Palm Coast area, that could be considered in making a relocation decision. FAA management currently supports retention of the center. In reviewing this study, we have identified potential additional savings that could increase the savings from relocating this facility to as much as $1 million annually. For example, the study estimated that easier commuting access to Oklahoma City would save $2.5 million in staff time over the 10-year period, an amount that was not included in the study’s overall savings estimate. The study also did not consider reducing or eliminating center staff who duplicate functions already available at the FAA Academy, such as course registration and evaluation. In an era of constrained budgets where funding shortfalls for essential technical training have become a reality, FAA must find ways to make the best use of all available training resources. Moving the center’s functions to the FAA Academy should be seriously considered, particularly since FAA’s 10-year lease on the center facility expires in August 1997. Mr. Chairman, this concludes our statement. We would be pleased to respond to questions at this time.
Related GAO Products:
Aviation Safety: Data Problems Threaten FAA Strides on Safety Analysis System (GAO/AIMD-95-27, Feb. 8, 1995).
FAA Technical Training (GAO/RCED-94-296R, Sept. 26, 1994).
Aircraft Certification: New FAA Approach Needed to Meet Challenges of Advanced Technology (GAO/RCED-93-155, Sept. 16, 1993).
FAA Budget: Important Challenges Affecting Aviation Safety, Capacity, and Efficiency (GAO/T-RCED-93-33, Apr. 26, 1993).
Aviation Safety: Progress on FAA Safety Indicators Program Slow and Challenges Remain (GAO/IMTEC-92-57, Aug. 31, 1992).
Aviation Safety: Commuter Airline Safety Would Be Enhanced With Better FAA Oversight (GAO/T-RCED-92-40, Mar. 17, 1992).
Aviation Safety: FAA Needs to More Aggressively Manage Its Inspection Program (GAO/T-RCED-92-25, Feb. 6, 1992).
Aviation Safety: Problems Persist in FAA’s Inspection Program (GAO/RCED-92-14, Nov. 20, 1991).
Serious Shortcomings in FAA’s Training Program Must Be Remedied (GAO/T-RCED-90-91, June 21, 1990, and GAO/T-RCED-90-88, June 6, 1990).
Staffing, Training, and Funding Issues for FAA’s Major Work Forces (GAO/T-RCED-90-42, Mar. 14, 1990).
Aviation Safety: FAA’s Safety Inspection Management System Lacks Adequate Oversight (GAO/RCED-90-36, Nov. 13, 1989).
Aviation Training: FAA Aviation Safety Inspectors Are Not Receiving Needed Training (GAO/RCED-89-168, Sept. 14, 1989).
FAA Staffing: Recruitment, Hiring, and Initial Training of Safety-Related Personnel (GAO/RCED-88-189, Sept. 2, 1988).
Aviation Safety: Measuring How Safely Individual Airlines Operate (GAO/RCED-88-61, Mar. 18, 1988).
Aviation Safety: Needed Improvements in FAA’s Airline Inspection Program Are Underway (GAO/RCED-87-62, May 19, 1987).
FAA Work Force Issues (GAO/T-RCED-87-25, May 7, 1987).
Department of Transportation: Enhancing Policy and Program Effectiveness Through Improved Management (GAO/RCED-87-3, Apr. 13, 1987). | GAO discussed the Federal Aviation Administration's (FAA) safety inspection program. GAO noted that: (1) in 1991, FAA created its Safety Performance Analysis System (SPAS) to focus its inspection resources on the pilots, aircraft, and facilities that pose the greatest risk; (2) poor data quality jeopardizes the success of SPAS; (3) FAA officials have not fully responded to prior recommendations of adopting a strategy to improve data quality by the end of 1995; (4) FAA inspectors have performed inspections without the appropriate or up-to-date credentials; (5) FAA has had trouble training its inspectors because it does not offer the necessary courses and has limited aircraft-specific training and decreased the frequency of inspector flight training; (6) between fiscal year (FY) 1993 and FY 1996, funding for technical training decreased 42 percent; and (7) FAA expects a $20-million shortfall for technical training it identified as essential for FY 1996.
The President is responsible for appointing individuals to positions throughout the federal government. In some instances, the President makes these appointments using authorities granted by law to the President alone. Other appointments are made with the advice and consent of the Senate via the nomination and confirmation of appointees. Presidential appointments with Senate confirmation are often referred to with the abbreviation PAS. This report identifies, for the 113th Congress, all nominations to full-time positions requiring Senate confirmation in 40 organizations in the executive branch (27 independent agencies, 6 agencies in the Executive Office of the President [EOP], and 7 multilateral organizations) and 4 agencies in the legislative branch. It excludes appointments to executive departments and to regulatory and other boards and commissions, which are covered in other CRS reports. Information for this report was compiled using the Senate nominations database of the Legislative Information System (LIS) at http://www.lis.gov/nomis/, the Congressional Record (daily edition), the Weekly Compilation of Presidential Documents, telephone discussions with agency officials, agency websites, the United States Code, and the 2012 Plum Book (United States Government Policy and Supporting Positions). Related Congressional Research Service (CRS) reports regarding the presidential appointments process, nomination activity for other executive branch positions, recess appointments, and other appointments-related matters may be found at http://www.crs.gov. During the 113th Congress, President Barack Obama submitted 69 nominations to the Senate for full-time positions in independent agencies, agencies in the EOP, multilateral agencies, and legislative branch agencies. Of these nominations, 34 were confirmed, 34 were returned to the President, and 1 was withdrawn. Table 1 summarizes the appointment activity. The length of time a given nomination may be pending in the Senate varies widely. Some nominations are confirmed within a few days, others are not confirmed for several months, and some are never confirmed. For each nomination covered by this report and confirmed in the 113th Congress, the report provides the number of days between nomination and confirmation ("days to confirm"). The mean (average) number of days elapsed between nomination and confirmation was 123.9. The median number of days elapsed was 104.0. Under Senate Rules, nominations not acted on by the Senate at the end of a session of Congress (or before a recess of 30 days) are returned to the President. The Senate, by unanimous consent, often waives this rule, although not always. This report measures the "days to confirm" from the date of receipt of the resubmitted nomination, not the original. Agency profiles in this report are organized in two parts: (1) a table listing the organization's full-time PAS positions as of the end of the 113th Congress and (2) a table listing appointment action for vacant positions during the 113th Congress. As mentioned earlier, data for these tables were collected from several authoritative sources. As noted, some agencies had no nomination activity during this time. In each agency profile, the first of the two tables identifies, as of the end of the 113th Congress, each full-time PAS position in the organization and its pay level.
For most presidentially appointed positions requiring Senate confirmation, pay levels fall under the Executive Schedule, which, as of January 2014, ranged from level I ($201,700) for Cabinet-level offices to level V ($147,200) for lower-ranked positions. The second table, the appointment action table, provides, in chronological order, information concerning each nomination. It shows the name of the nominee, position involved, date of nomination, date of confirmation, and number of days between receipt of a nomination and confirmation, if confirmed. It also notes actions other than confirmation (i.e., nominations returned to or withdrawn by the President). The appointment action tables for positions with more than one nominee also list statistics on the length of time between nomination and confirmation. Each appointment action table provides the average days to confirm in two ways: mean and median. Although the mean is a more familiar measure, it may be influenced by outliers, or extreme values, in the data. The median, by contrast, does not tend to be influenced by outliers. In other words, a nomination that took an extraordinarily long time might cause a significant change in the mean, but the median would be unaffected. Examining both numbers offers more information with which to assess the central tendency of the data. Appendix A provides two tables. Table A-1 relists all appointment action identified in this report and is organized alphabetically by the appointee's last name. Table entries identify the agency to which each individual was appointed, position title, nomination date, date confirmed or other final action, and duration count for confirmed nominations. In the final two rows, the table includes the mean and median values for the "days to confirm" column. Table A-2 provides summary data on the appointments identified in this report and is organized by agency type, including independent executive agencies, agencies in the EOP, multilateral organizations, and agencies in the legislative branch. The table summarizes the number of positions, nominations submitted, individual nominees, confirmations, nominations returned, and nominations withdrawn for each agency grouping. It also includes mean and median values for the number of days taken to confirm nominations in each category. Appendix B provides a list of department abbreviations.
Appendix A. Summary of All Nominations and Appointments to Independent and Other Agencies
Appendix B. Agency Abbreviations
| The President makes appointments to positions within the federal government, either using the authorities granted by law to the President alone or with the advice and consent of the Senate. This report identifies all nominations that were submitted to the Senate for full-time positions in 40 organizations in the executive branch (27 independent agencies, 6 agencies in the Executive Office of the President [EOP], and 7 multilateral organizations) and 4 agencies in the legislative branch. It excludes appointments to executive departments and to regulatory and other boards and commissions, which are covered in other reports. Information for each agency is presented in tables. The tables include full-time positions confirmed by the Senate, pay levels for these positions, and appointment action within each agency. Additional summary information across all agencies covered in the report appears in the appendix.
During the 113th Congress, the President submitted 69 nominations to the Senate for full-time positions in independent agencies, agencies in the EOP, multilateral agencies, and legislative branch agencies. Of these 69 nominations, 34 were confirmed, 1 was withdrawn, and 34 were returned to him in accordance with Senate rules. For those nominations that were confirmed, a mean (average) of 123.9 days elapsed between nomination and confirmation. The median number of days elapsed was 104.0. Information for this report was compiled using the Senate nominations database of the Legislative Information System (LIS) at http://www.lis.gov/nomis/, the Congressional Record (daily edition), the Weekly Compilation of Presidential Documents, telephone discussions with agency officials, agency websites, the United States Code, and the 2012 Plum Book (United States Government Policy and Supporting Positions). This report will not be updated.
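The report's practice of giving both the mean and the median is worth making concrete. A minimal Python sketch, using hypothetical days-to-confirm figures rather than the report's underlying data, shows how a single long-stalled nomination pulls the mean up sharply while barely moving the median:

```python
# Hypothetical "days to confirm" values; the last nomination is a stalled outlier.
from statistics import mean, median

days_to_confirm = [35, 60, 88, 104, 110, 127, 150, 470]

print(f"mean:   {mean(days_to_confirm):.1f}")    # 143.0, pulled up by the 470-day outlier
print(f"median: {median(days_to_confirm):.1f}")  # 107.0, barely affected

without_outlier = days_to_confirm[:-1]
print(f"mean without outlier:   {mean(without_outlier):.1f}")    # 96.3
print(f"median without outlier: {median(without_outlier):.1f}")  # 104.0
```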
Facebook Inc. will begin fact-checking photographs and videos posted on the social media platform, seeking to close a gap that allowed Russian propagandists to promote false news during the last U.S. presidential election.
The company said Thursday it will use technology and human reviewers to try to stanch what it called in a statement “misinformation in these new visual formats.” Previously, the company’s efforts had been focused on rooting out false articles and links.
... ||||| By Antonia Woodford, Product Manager
We know that people want to see accurate information on Facebook, so for the last two years, we’ve made fighting misinformation a priority. One of the many steps we take to reduce the spread of false news is working with independent, third-party fact-checkers to review and rate the accuracy of content. To date, most of our fact-checking partners have focused on reviewing articles. However, we have also been actively working to build new technology and partnerships so that we can tackle other forms of misinformation. Today, we’re expanding fact-checking for photos and videos to all of our 27 partners in 17 countries around the world (and are regularly on-boarding new fact-checking partners). This will help us identify and take action against more types of misinformation, faster.
How does this work?
Similar to our work for articles, we have built a machine learning model that uses various engagement signals, including feedback from people on Facebook, to identify potentially false content. We then send those photos and videos to fact-checkers for their review, or fact-checkers can surface content on their own. Many of our third-party fact-checking partners have expertise evaluating photos and videos and are trained in visual verification techniques, such as reverse image searching and analyzing image metadata, like when and where the photo or video was taken. Fact-checkers are able to assess the truth or falsity of a photo or video by combining these skills with other journalistic practices, like using research from experts, academics or government agencies.
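As an illustration of the metadata check mentioned above, here is a minimal Python sketch using the Pillow library. The file name is hypothetical, this is only one input to verification, and Facebook's actual fact-checking tools are not public; note also that metadata can itself be stripped or forged, so it is a clue rather than proof.

```python
# Read a photo's EXIF metadata to check when (and, if GPS tags are present,
# roughly where) it was captured. Standard EXIF tag IDs: 306 = DateTime,
# 36867 = DateTimeOriginal, 0x8769 = Exif sub-IFD, 0x8825 = GPS sub-IFD.
from PIL import Image
from PIL.ExifTags import GPSTAGS

def capture_context(path):
    exif = Image.open(path).getexif()
    exif_ifd = exif.get_ifd(0x8769)  # Exif sub-IFD holds DateTimeOriginal
    gps_ifd = exif.get_ifd(0x8825)   # GPS sub-IFD holds location tags
    return {
        "modified": exif.get(306),
        "taken": exif_ifd.get(36867),
        "gps": {GPSTAGS.get(tag, tag): value for tag, value in gps_ifd.items()},
    }

print(capture_context("viral_photo.jpg"))  # hypothetical file name
```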
As we get more ratings from fact-checkers on photos and videos, we will be able to improve the accuracy of our machine learning model. We are also leveraging other technologies to better recognize false or misleading content. For example, we use optical character recognition (OCR) to extract text from photos and compare that text to headlines from fact-checkers’ articles. We are also working on new ways to detect if a photo or video has been manipulated. These technologies will help us identify more potentially deceptive photos and videos to send to fact-checkers for manual review. Learn more about how we approach this work in an interview with Tessa Lyons, Product Manager on News Feed.
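The OCR-and-compare step described above can be sketched in a few lines of Python with the pytesseract wrapper around Tesseract. The debunked-headline list, file name, and similarity threshold here are hypothetical illustrations, not Facebook's actual data or matching logic:

```python
# Extract text from an image and fuzzy-match it against headlines that
# fact-checkers have already rated false.
import difflib

import pytesseract
from PIL import Image

DEBUNKED_HEADLINES = [  # hypothetical examples
    "needle prick can save stroke victims",
    "shark photographed swimming on flooded highway",
]

def matches_debunked(image_path, threshold=0.75):
    text = pytesseract.image_to_string(Image.open(image_path)).lower().strip()
    for headline in DEBUNKED_HEADLINES:
        similarity = difflib.SequenceMatcher(None, text, headline).ratio()
        if headline in text or similarity >= threshold:
            return headline  # matched a debunked claim
    return None

print(matches_debunked("meme.png"))  # hypothetical file name
```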
How do we categorize false photos and videos?
Based on several months of research and testing with a handful of partners since March, we know that misinformation in photos and videos usually falls into three categories: (1) Manipulated or Fabricated, (2) Out of Context, and (3) Text or Audio Claim. These are the kinds of false photos and videos that we see on Facebook and hope to further reduce with the expansion of photo and video fact-checking.
(See more details on these examples from the fact-checkers’ debunking articles: Animal Politico, AFP, France 24, and Boom Live).
What’s different about photos and videos?
People share millions of photos and videos on Facebook every day. We know that this kind of sharing is particularly compelling because it’s visual. That said, it also creates an easy opportunity for manipulation by bad actors. Based on research with people around the world, we know that false news spreads in many different forms, varying from country to country. For example, in the US, people say they see more misinformation in articles, whereas in Indonesia, people say they see more misleading photos. However, these categories are not distinct. The same hoax can travel across different content types, so it’s important to build defenses against misinformation across articles, as well as photos and videos. ||||| Today, Facebook announced that it’s expanding photo and video fact-checking capabilities to all of its third-party fact-checking partners. Product Manager Tessa Lyons sat down with us to explain how the company is using technology, along with human reviewers, to find and take action on misinformation in these new visual formats — and where the company needs to keep investing.
Facebook has been tackling article-based misinformation for a while now, but photos and videos are a relatively new frontier. What’s the difference between those two types of misinformation?
We started our work on misinformation with articles because that’s what people in the US were telling us was the most prevalent form of false news they were seeing, and also because that was the way that financially motivated bad actors were making money off of misinformation. What they’d do is they’d share articles that contain misinformation — and people would be surprised by the headlines because they were false. So they’d click on those articles and land on websites where those bad actors were monetizing their impressions with ads.
So it was spammy, “gotcha” content.
Right, they were financially motivated spammers. So we focused on articles to go after and disrupt those financial incentives and to respond to what we were hearing from people. But we know from our research in countries around the world — and in the US — that misinformation isn’t limited to articles. Misinformation can show up in articles, in photos, in videos. The same false claim can appear as an article headline, as text over a photo or as audio in the background of a video. In order to fight misinformation, we have to be able to fact-check it across all of these different content types.
You mentioned people in the US said they saw more misinformation in articles, rather than in other formats. Is that true around the world?
The degree to which there’s misinformation in articles, photos or videos varies country to country — in part because the amount of photos or videos versus articles in people’s News Feed varies country by country. In some countries, articles make up a greater proportion of people’s News Feed than in others. Visual content might also lean more toward photos than videos in some countries, or vice versa.
Why is that?
Well first and foremost, News Feed is personalized. So what you see in your News Feed is personal to you and you might see more articles, photos or videos based on the people you’re friends with, the Pages you follow, and the way you interact with the stories in your News Feed. But we know there are some things that make the News Feed experience more similar for people in some countries. So for example, in countries where many people are worried about their bandwidth, people might be less inclined to click on videos. So people in those countries might see fewer videos in their News Feed overall — which means less video-based misinformation.
There are other differences in the media landscape and literacy rates that impact how photo and video misinformation is interpreted too. In countries where the media ecosystem is less developed or literacy rates are lower, people might be more likely to see a false headline on a photo, or see a manipulated photo, and interpret it as news, whereas in countries with robust news ecosystems, the concept of “news” is more tied to articles.
Can you use the same technology to catch all those different types of misinformation?
Yes and no. When we fight misinformation, we use a combination of technology and human review by third-party fact-checkers. When it comes to articles, we use technology to, first, predict articles that are likely to contain misinformation and prioritize those for fact-checkers to review. Second, once we have a rating from a fact-checker, we use technology to find duplicates of that content. We’ve been doing this with links for a while; for example, a fact-checker in France debunked the claim that you can save a person having a stroke by using a needle to prick their finger and draw blood. This allowed us to identify over 20 domains and over 1,400 links spreading that same claim. Now, we’ll apply a similar technology to identify duplicates of photos and videos that have been debunked by fact-checkers so that we can make the most of each rating from fact-checking partners.
While there are similarities in the technology, there are also important differences in how we identify articles versus photos and videos.
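One standard way to implement the duplicate-finding Lyons describes for links, sketched here with scikit-learn, is to score candidate headlines against an already-debunked claim by TF-IDF cosine similarity. The claim text, candidates, and threshold are illustrative assumptions; Facebook has not published the exact matching technique it uses:

```python
# Flag texts that closely resemble a claim fact-checkers have rated false.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

debunked = "you can save a stroke victim by pricking their finger with a needle"
candidates = [  # hypothetical scraped headlines
    "pricking a stroke victim's finger with a needle can save their life",
    "local bakery wins regional bread award",
    "doctors stunned: needle prick to the finger stops strokes",
]

vectorizer = TfidfVectorizer().fit([debunked] + candidates)
scores = cosine_similarity(
    vectorizer.transform([debunked]), vectorizer.transform(candidates)
)[0]

for text, score in zip(candidates, scores):
    label = "near-duplicate" if score >= 0.5 else "ok"  # illustrative threshold
    print(f"{score:.2f}  {label:14s}  {text}")
```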
Okay — let’s talk about that first stage: using technology to predict what’s likely to be false.
When we’re predicting articles that are likely to contain misinformation, we use signals like feedback from our community, telling us that a link they’re seeing is false news. We look at whether the comments on the post include phrases that indicate readers don’t believe the content is true. We also look at things like whether the Pages that shared the content have a history of sharing things that have been rated false by fact-checkers. Signals like these apply to articles, photos and videos.
So a lot of the work we’ve done to predict potentially false links also helps us with photos and videos. However, there are also some differences. When we talk about misinformation in photos and videos, we break it into three categories: manipulated, taken out of context, or includes a false text or audio claim. So let’s take each of those. Manipulated: You can have a photo that’s been manipulated to suggest that a person was in the picture when they weren’t. Or if someone is taking an action in a photo — like holding something — you could manipulate it to make it look like they’re holding something else.
So like, when you see a photo of a shark swimming on the highway.
Right, that’s a great example of a manipulated photo. For manipulated videos, the term “deepfakes” is something a lot of people in the misinformation space have been talking about — where, for example, you can make it look like a public figure was saying things that they never actually said, but in the video their mouth would actually be moving, and it would sound like their voice. So the shark example would be what we classify as a manipulated piece of media.
The second category is things taken out of context. So, a photo of a conflict zone that’s being shared in a way that suggests it’s happening in a different time or place. Same with video.
The third category is false audio or text claims. Just as you can have false headlines or text in an article, that claim could be overlaid on a photo or spoken by someone in a video. The same way that someone could make a false claim in an article, they could also make a false claim in the caption on a photo or while speaking in a video.
So those are the three categories that we think about. And there are different ways we can use technology to predict each of them, and our ability to do so is at different levels of development.
So how far along is the technology to predict misinformation in each of those three categories?
We’ve started making progress in identifying things that have been manipulated. But figuring out whether a manipulated photo or video is actually a piece of misinformation is more complicated; just because something is manipulated doesn’t mean it’s bad. After all, we offer filters on Facebook Stories and in some ways that’s a manipulation, but that’s obviously not the kind of stuff we’re going after with our fact-checking work. But we are now able to identify different types of manipulations in photos, which can be a helpful signal that maybe something is worth having fact-checkers take a look at.
Understanding if something has been taken out of context is an area we’re investing in but have a lot of work left to do, because you need to understand the original context of the media, the context in which it’s being presented, and whether there’s a discrepancy between the two. To examine a photo of a war zone, figure out its origin, and then assess whether that context is accurately represented by the caption now circulating with that photo, we still need a degree of human review. And that’s why we rely on fact-checkers to leverage their journalistic expertise and situational understanding.
For photos or videos that make false text or audio claims, we can extract the text using optical character recognition (OCR) or audio transcription, and see if that text includes a claim made that matches something fact-checkers have debunked. If it does, we’ll surface it to fact-checkers so they can verify the claim is a match. At the moment, we’re more advanced with using OCR on photos than we are with using audio transcription on videos.
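The audio path Lyons describes can be approximated with off-the-shelf tools. A minimal sketch using the third-party SpeechRecognition package follows; the file name and claim list are hypothetical, substring matching stands in for real claim matching, and the audio track is assumed to have been extracted from the video beforehand:

```python
# Transcribe a video's audio track and check it against debunked claims.
import speech_recognition as sr

DEBUNKED_CLAIMS = ["a needle prick can save a stroke victim"]  # hypothetical

recognizer = sr.Recognizer()
with sr.AudioFile("video_audio.wav") as source:  # extracted audio, WAV format
    audio = recognizer.record(source)

transcript = recognizer.recognize_google(audio).lower()
matches = [claim for claim in DEBUNKED_CLAIMS if claim in transcript]
if matches:
    print("send to fact-checkers for review:", matches)
```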
So what about the other side, finding duplicates of false claims?
Well, the way you find duplicates of articles is through natural language processing, which is a machine learning technique that can be used to find duplicates of text with slight variations. For photos, we're pretty good at finding exact duplicates, but oftentimes we'll see that someone will add small changes to the photo, which adds a layer of complexity; the more a photo shifts from its original form, the harder it is for us to detect and enforce against. So we need to continue to invest in technology that will help us identify very near duplicates that have been changed in small ways.
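Near-duplicate photo matching of the kind described here is commonly built on perceptual hashing, sketched below with the imagehash library. Unlike a cryptographic hash, a perceptual hash changes only slightly when an image is cropped, recompressed, or overlaid with text, so a small Hamming distance suggests a near-duplicate. The file names and threshold are hypothetical, and this is one plausible approach, not Facebook's published method:

```python
# Compare a new upload against a photo fact-checkers have already rated false.
from PIL import Image
import imagehash

debunked = imagehash.phash(Image.open("debunked_original.jpg"))
candidate = imagehash.phash(Image.open("new_upload.jpg"))

distance = debunked - candidate  # Hamming distance between the two hashes
if distance <= 8:  # hypothetical threshold; 0 means visually identical
    print(f"likely near-duplicate (distance {distance}); apply existing rating")
else:
    print(f"no match (distance {distance})")
```

|||||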
In the quest to weed out fake news on social media, Facebook is expanding its fact-checking efforts in 17 countries to include photos and videos as well as articles, the company said in a blog post Thursday.
In the post, product manager Antonia Woodford said Facebook has built a machine-learning model that flags possibly false content for fact-checkers to look at. And she said all of the company's 27 fact-checking partners in those countries will now be increasing the scope of what they look at.
"Many of our third-party fact-checking partners have expertise evaluating photos and videos and are trained in visual verification techniques, such as reverse image searching and analyzing image metadata, like when and where the photo or video was taken," Woodford said.
This comes as social media giants like Facebook and Twitter have had to grapple with how their networks can be used to spread fake news and misinformation and even influence elections. Last week, Facebook COO Sheryl Sandberg went to Capitol Hill along with Twitter CEO Jack Dorsey to answer questions from lawmakers during a Senate intelligence committee hearing.
On Wednesday, Facebook CEO Mark Zuckerberg said the company has learned a lot since Russian meddling in the 2016 election and that it's "developed sophisticated systems that combine technology and people to prevent election interference on our services."
Zuckerberg kicked off 2018 with an open letter pledging to fix the social network's many problems. In addition to fake news and election interference, reports surfaced in April that digital consultancy Cambridge Analytica had misused the personal information of up to 87 million Facebook users. The scandal touched off a series of apologies, an overhaul of Facebook privacy settings and an expensive investigation into its relationships with app developers.
| Facebook wants you to be able to believe your eyes. To that end, the social media platform will begin fact-checking images and videos posted to the site, the Wall Street Journal reports. Heretofore, Facebook has focused mainly on weeding out articles that include false information. But, Facebook product manager Tessa Lyons says in a statement, "The same false claim can appear … as text over a photo or as audio in the background of a video. In order to fight misinformation, we have to be able to fact-check it across all of these different content types." Social media posts including doctored photos and other images were a big part of Russian agents' attempts to influence the 2016 presidential election, per CNBC. In a blog post, Facebook product manager Antonia Woodford says Facebook has created a machine-learning model to flag possibly false content through the use of "engagement signals, including feedback from people on Facebook." If an image or video doesn't seem right, Facebook sends it to fact-checkers for review, she says. As the midterm elections approach, CNBC reports, Facebook has been bolstering its fact-checking efforts and has detected "coordinated inauthentic behavior." On Wednesday, CEO Mark Zuckerberg said Facebook has "developed sophisticated systems … to prevent election interference" based on lessons learned from Russian meddling in 2016, according to Cnet.
PORTLAND, Ore. (AP) — Someone opened fire on a group of young people outside an alternative high school Friday, sending three people to the hospital in what is believed to be a gang-related attack, Portland police said.
[Associated Press photos: police, students, and parents outside Rosemary Anderson High School in North Portland following the Dec. 12, 2014, shooting; Mayor Charlie Hales arriving near the scene.]
The victims are students at Rosemary Anderson High School or in related job training programs, police Sgt. Pete Simpson said. A 16-year-old girl was in critical condition, and two males — ages 17 and 20 — were in serious condition, police said. A fourth person — a 19-year-old woman — was grazed by a bullet but not hospitalized.
The shooting was reported after noon and happened at a street corner outside the school, Simpson said.
Witnesses told police there may have been a dispute outside the school, but police said they didn't know who was involved.
"We don't know what led up to the shooting," Simpson said. "There was some kind of dispute."
The assailant and two other people fled, and the wounded students went to the school for help, he said.
A nearby high school and community college were put on lockdown.
Preliminary information suggests the shooter has gang ties, Simpson said. The Police Bureau's gang unit was deployed in the investigation.
Sierra Smith, a 17-year-old student, told The Oregonian she saw one of the male victims being helped by a teacher inside the school.
"He was laying on the ground. He had blood coming out of his stomach," she said. "It was scary."
Another student, Oliviann Danley, 16, told the newspaper she saw a boy run into the school and yell, "Oh my god, did I just get shot?"
Rosemary Anderson High School serves at-risk students who were expelled or dropped out, or who are homeless or single parents. According to the school's website, 190 students annually are enrolled at the north Portland location. The school also has a second location in Gresham.
Gang violence in Portland isn't a new phenomenon. Some of the violence occurs between rival gangs, but bystanders have also been hurt.
"We've made a lot of progress in addressing the gang problem, but we haven't eradicated it," Mayor Charlie Hales said after shooting Friday. "Today's really a sad reminder that it's still with us."
Portland police have said they saw a spike in gang crime this summer and have complained they don't have adequate resources to address the problem. Recent violence includes a man killed in a drive-by shooting in June and another man killed in a separate shooting. A 5-year-old boy also was shot in the leg while playing at an apartment complex.
A Multnomah County report on gang activity released in June said crime in the county that includes Portland actually decreased from 2005 to 2012. As inner-city Portland gentrifies, the report said, criminal activity is shifting from northern neighborhoods to areas farther east, including the city of Gresham.
The report identified at least 133 active gangs in the county.
Dani Gonzales, 64, has lived in the neighborhood of Friday's shooting for 25 years and said it's generally safe but there has always been some gang activity.
"Kids just get silly and get crazy ideas. I don't know what goes on in their heads," Gonzales said.
There was another school shooting in the Portland area in June, but it was not gang-related. A freshman killed another boy in a locker room and a bullet grazed a teacher before the shooter went into a bathroom and died from a self-inflicted gunshot, police said.
___
Associated Press writers Terrence Petty, Gosia Wozniacka and Tim Fought contributed to this report. ||||| Rosemary Anderson High School is Portland OIC’s community-based alternative high school, in operation since 1983, and accredited by the NW Accreditation Commission.
Enrolling up to 190 students annually at our North Killingsworth location, our at-risk students have either been expelled or dropped out of public high school and many are homeless. RAHS provides open-door, year-round access to the "last chance" to complete a high school education, and is currently achieving a 90% graduation rate amongst students who enroll and attend classes.
Rosemary Anderson High School East launched in 2012 serving students in the Centennial and Gresham-Barlow school districts. RAHS East can enroll up to 200 students, and offers the same wrap-around care that has been effective at RAHS North over the past 30 years.
||||| Portland Police responded to a shooting Friday afternoon near Rosemary Anderson High School in North Portland. At least three people were taken to Legacy Emanuel Health Center.
Police are searching for suspects in the shooting.
The school is at 717 N. Killingsworth Court.
5:06 p.m.: Gov. John Kitzhaber tweets: I am saddened by the shooting today in Portland. My heart goes out to those involved, their families, and their communities.
3:52 p.m.: Sgt. Pete Simpson said at a 3:30 p.m. briefing that a dispute preceded Friday's shooting, but that it is not known whether it specifically involved the shooter and the victims. Simpson said that the shooters may have gang affiliations, but it's unknown whether that had anything to do with the dispute or the shooting, and it doesn't follow that the victims had anything to do with gangs. "They are victims," Simpson said.
Police say they believe there was one shooter accompanied by two others. The suspects fled on foot, Simpson said, heading north on Borthwick and east on Killingsworth. They left the scene quickly and it's unknown whether they had a car nearby. All are at large.
Simpson said he did not know how many shots were fired. The ATF and FBI have offered their assistance.
3:41 p.m.: From the Portland Police news conference: Portland police Sgt. Pete Simpson says there are four victims:
A 16-year-old female, who is in critical condition.
A 17-year-old male, who is in serious condition.
A 20-year-old male, who is in serious condition.
A 19-year-old female, who was grazed by a bullet and treated at the shooting scene.
3:37 p.m.: Another victim was identified as Taylor Zimmers, a 16-year-old junior at Rosemary Anderson High School. Ryan Zimmers, Taylor's father, said his daughter was in surgery.
Taylor's aunt, Shawn Zimmers, 40, said she heard about the shooting in a phone call from the girl's father and went to Legacy Emanuel Health Center.
"I'm just confused. All of these kids getting shot and killed. It doesn't make any sense," Shawn Zimmers said. "It's not like it was when we were kids."
Ashley Zimmers, 17, and Octavia Heaton, 15, two of Taylor's cousins, also waited outside the hospital. The girls, who do not attend Rosemary Anderson, said they were not allowed to enter the hospital to see their cousin.
The girls said they had heard that Taylor had been shot in the chest and the side.
"This is crazy, but not surprising" given the recent string of shootings nationally, Ashley Zimmers said. "This hits too close to home."
3:20 p.m.: All PPS Connect boys basketball games tonight will have extra security, a spokesman said. It's "just a precaution." Rosemary Anderson is a nonprofit school that PPS Connect contracts with. Of the 290 Rosemary system students, 216 come from PPS Connect. Students in the program can earn a GED or modified diploma. "These are students who have not found success in our traditional high schools," says PPS Connect spokesman Jon Isaacs.
3 p.m.: Students from Rosemary Anderson High School form a large prayer circle at Legacy Emanuel.
Oliviann Danley, 16, said she saw a boy run into the school, holding his coat open, and yell: 'Oh my God, did I just get shot?'
2:51 p.m.: Tanisha Franklin identified one of the shooting victims as Labraye Franklin. She said the 17-year-old is her nephew and a student at Rosemary Anderson High School.
Tanisha Franklin and Karin Williams, another of Labraye's aunts, both identified the teenager as being among the shooting victims. Both had been waiting outside Legacy Emanuel Medical Center on Friday afternoon for word on his condition.
Amid all the confusion, both had heard that Labraye suffered wounds in different parts of his body.
"When I first got the phone call," Tanisha Franklin said, "I just started praying, 'Please let him be all right.'"
"This is just crazy," she said. "I work at a school and with all these shootings, I get more and more scared every day."
2:26 p.m.: Jacal Hill, a 17-year-old senior, said she was among a mass of students lined up waiting for the school doors to unlock for lunch. Once outside, she had just turned the street corner when she said she heard shots behind her. Scared, she hid behind a car. Now safe, she said, her thoughts are of her friends. "I just gotta say: Stop the violence."
At Legacy Emanuel, a spokeswoman said the three victims are being treated, but had no condition updates.
2:19 p.m.: Parents are waiting as students trickle out of Rosemary Anderson. One boy is telling his dad that he heard a "pow ... pow... pow... pow."
2:11 p.m.: The shooting near Rosemary Anderson High School is the latest in Portland that police believe have ties to gangs.
2:10 p.m.: Ralena Gaska, 14, would have attended Reynolds High School this year, but her mom opted against it after the school shooting there in June. Ralena said she was in the cafeteria when she realized something was wrong. At first she thought it was a fight, but teachers quickly ushered students into classrooms. "Everyone was scared," she said.
DeNida Gaska, Ralena's mother, said it is troubling that shootings occur in and around schools. "What happened last year at the end of the school year, I said, 'No.' ... And it happened again."
2:09 p.m.: Derrick Foxworth, former Portland police chief and head of security at Portland Community College, said the school was locked down from 12:24 to 1:29 p.m. and classes continued. The lockdown was done as a precaution and, he said, "we're back to normal operation."
2:06 p.m.: Sierra Smith, 17, said she was in a government history class when the shooting occurred. Later, Smith said she saw one of the victims inside the school being helped by a teacher.
"There was young boy," Smith said. "He was laying on the ground. He had blood coming out of his stomach."
"It was scary," she said
2:03 p.m.: A woman who says she's the aunt of one of the shooting victims says her 17-year-old nephew was shot in the back and his girlfriend was shot in the ankle, reported Everton Bailey Jr. "I'm frustrated. I'm sad. The world today is just out of control," said Karin Williams. She said she and other family members have not been allowed into Legacy Emanuel.
The Portland Community College lockdown alarm has stopped and students are walking around the campus again.
1:54 p.m.: Tracy Mendoza in a nearby business said, "We heard the gunshots. We dove under the desks." She said she heard five gunshots. She went on to say this of the kids who were shot outside Rosemary Anderson High School, "They were just standing on the corner at lunchtime."
1:36 p.m.: Lunch at Rosemary Anderson starts at 12:10 p.m., students said. The first police calls came in at 12:14 p.m. Students who were outside on a warm, sunny day ran into the school. Police Chief Mike Reese is on his way to Legacy Emanuel Medical Center.
Police are investigating the shooting as gang-related.
1:32 p.m.: Portland Police Sgt. Pete Simpson said, "Obviously we're thankful we don't have any loss of life." The three shooting victims ran into the school conscious and breathing. Also, Portland Fire Lt. Rich Tyler said firefighters from the nearby Station 24 were able to provide medical care to the victims inside the school.
At North Kerby and Killingsworth Court, parents are being reunited with their students.
1:18 p.m.: Gresham police said they received a call after the North Portland school shooting of a threat of a shooting at Rosemary Anderson High School East in Gresham, but so far, have not found the threat credible. An investigation is still ongoing.
1:15 p.m.: In July, Maxine Bernstein wrote about a street mural students were painting to promote nonviolence. It is at the corner of Borthwick and Killingsworth Court, the scene of the shooting.
Here is some background on Rosemary Anderson High School: the community-based alternative school has two campuses that serve at-risk students who've been unsuccessful in traditional high schools. The campus on North Killingsworth Court enrolls up to 190 students each year and has been in operation for nearly 31 years.
The school serves multiple districts, and typically students attending it have had disciplinary and educational problems at more than one traditional high school, Portland Public Schools spokesperson Jon Isaacs said. The school is not overseen by Portland Public Schools.
In 2009, Anna Griffin, then a columnist for The Oregonian, wrote about Rosemary Anderson High School and the challenges its students face.
1:11 p.m.: Lockdown lifted at Jefferson High School and Portland Community College. Portland police say the shooting victims are teenagers; they are working to confirm that the victims are students.
At this time, police don't know how the shooter left the scene. Officers have cleared the school and the area is now safe and secure.
Here's what is being said on Twitter: https://twitter.com/Oregonian/lists/npdxshooting-12-12-14
1:05 p.m.: The FBI has put agents at the scene of the shooting as part of its regular task force work, as needed by Portland police, spokeswoman Beth Anne Steele said. The federal agency referred all news information to the police bureau.
1:03 p.m.: Portland police Sgt. Pete Simpson says investigators believe the shooter is affiliated with a Portland gang.
12:58 p.m.: Portland police Sgt. Pete Simpson confirmed that the shooting was outside the school at North Borthwick Avenue and Killingsworth Court. He also said:
The victims were two males and one female. They all ran inside Rosemary Anderson High School immediately after the shooting.
There is currently not an active shooter at the scene.
Police swept through the school to make sure there were no additional victims.
The shooter left the scene.
12:52 p.m.: Tamara King, who lives near the school with her husband and 4-year-old child, was standing on the corner of North Killingsworth Court when she heard five shots in rapid succession. After hearing the shots, she saw at least four children scatter, diving under a car on North Albina Avenue to avoid gunfire. She called 911 at 12:13 p.m., she said. A short time later she saw two kids being loaded into the back of an ambulance.
12:51 p.m.:
I'm on scene of the shooting in North Portland. Neighbor Tamara King heard 5 shots in rapid succession. Called 912 pic.twitter.com/LR5V1evWxP — Andrew Theen (@andrewtheen) December 12, 2014
12:49 p.m.:
Police found bullet casings outside the school ..near N Killingsworth Court and Forthwick/possibly 3 suspects pic.twitter.com/fyKHo1zdbg — Maxine Bernstein (@maxoregonian) December 12, 2014
12:44 p.m.: Portland police say parents of Rosemary Anderson High School students should respond to North Killingsworth Court and Kerby Avenue.
12:36 p.m.: Rosemary Anderson High School serves at-risk students who have been expelled or dropped out of public high school. Rosemary Anderson is an alternative high school with a student body of approximately 130 students, most of whom have struggled to succeed at other Portland high schools. The school is renowned for its commitment to attend to these students until they reach the age of 25. "We pretty much force the kids to go to college," said Joe McFerrin, the school's president.
12:30 p.m.: Both Portland Community College's Cascade campus and Jefferson High School are on lockdown.
-- The Oregonian | A gunman opened fire today on several young people outside an alternative high school in Portland, Oregon, wounding four and fleeing on foot with two other people, police tell the AP. The victims are all students at Rosemary Anderson High School or in affiliated job programs: a 16-year-old girl in critical condition; two males, 17 and 20, in serious condition; and a 19-year-old female who was lightly grazed and treated on the scene. The father of the wounded 16-year-old, Taylor Zimmers, tells the Oregonian that his daughter is in surgery. Police suspect the shooting is gang-related and say a dispute may have erupted before the bullets started flying. Eyewitness accounts are piecemeal so far, with one student saying she saw a male victim lying on the ground with "blood coming out of his stomach. It was scary." Another says a male student ran into the school holding open his coat and yelling, "Oh my god, did I just get shot?" Friends and family waiting outside the Legacy Emanuel Health Center, where victims are in surgery, are talking about the shock of gun violence. "It doesn't make any sense," says the 40-year-old aunt of a victim. "It's not like it was when we were kids." Portland police say gang violence rose over the summer but law enforcement lacks resources to deal with it. Rosemary Anderson High School serves students from troubled backgrounds, including those who are homeless, are single parents, dropped out, or were expelled.
Justice Stevens's position on the death penalty has transformed during his tenure on the Court. Although Stevens initially supported the imposition of the death penalty in accordance with adequately protective state-enacted guidelines, over the next 35 years the Justice has voted to narrow the application of the death penalty as he has become more skeptical of the punishment's underlying rationale and the states' ability to protect the rights of capital defendants. In 2008, Justice Stevens questioned the continuing constitutionality of the death penalty in his concurring opinion in Baze v. Rees. Shortly before Justice Stevens's appointment, the U.S. Supreme Court, in Furman v. Georgia, established what amounted to a moratorium on the imposition of the death penalty. The 1972 decision invalidated the capital punishment systems of Georgia and Texas along with the systems of "no less than 39 states and the District of Columbia," holding that the inherently arbitrary and discriminatory nature of the states' application of the death penalty violated both the prohibition against cruel and unusual punishment of the Eighth Amendment and the due process guarantees of the Fourteenth Amendment. Following the Furman case, capital punishment essentially ceased to exist anywhere in the United States. However, less than six months after Justice Stevens took his seat on the Supreme Court, and four years after Furman, Stevens cast an essential, if not deciding, vote in favor of reviving the death penalty. In the Gregg cases, the Supreme Court, in a series of decisions, upheld a number of re-enacted state capital punishment schemes that had been tailored to remedy the constitutional deficiencies identified in Furman. Gregg, and its companion cases issued that same day, have been characterized as representing the "resurrection" of capital punishment. At the time, Justice Stevens had confidence that the states could indeed provide adequate procedural safeguards (with guidance from a continued re-examination of capital sentencing procedures by the Court) that would successfully eliminate the constitutional concerns associated with the states' earlier use of the death penalty. In the decades following Gregg, Justice Stevens's death penalty jurisprudence was governed by his belief in "fundamental fairness" and the notion that the death penalty, as the ultimate punishment, must be treated differently from any other lesser form of punishment. The "qualitative difference" between death and any other form of punishment, wrote Justice Stevens, leads to a "corresponding difference in the need for reliability in the determination that death is the appropriate punishment in a specific case." In case after case, Stevens has voted to narrow the application of the death penalty by limiting the class of individuals eligible for the punishment or by increasing procedural protections for capital defendants.
Justice Stevens, often in dissent, has voted to prohibit judges from overriding a jury's decision on the imposition of the death penalty; limit the use of emotional victim impact statements; ensure adequate legal representation for capital defendants; prohibit the use of the death penalty on the mentally retarded; prohibit the use of the death penalty on minors; alter the composition of jury pools in capital cases to include those who object on moral grounds to the imposition of the death penalty; prohibit substantially delayed executions; overturn convictions that "present an unacceptable risk that race played a decisive role in [the defendant's] sentencing"; and prohibit states from punishing the crime of rape, or rape of a child, with the death penalty. Perhaps the opinion most representative of Justice Stevens's attempts at narrowing the application of the death penalty, and one in which Stevens was able to draw five other justices to his position, was his majority opinion in the 2002 case Atkins v. Virginia. A landmark case, Atkins v. Virginia highlights two key aspects of Justice Stevens's death penalty jurisprudence. The opinion illustrates the Justice's reliance on state-by-state trends in assessing societal views on what qualifies as "cruel and unusual." Additionally, the opinion discusses Stevens's growing skepticism of the accepted justifications for capital punishment. In Atkins, the Court considered the constitutionality of imposing the death penalty on the mentally retarded. The mentally retarded defendant in the case had been found guilty of abduction, armed robbery, and capital murder and sentenced to death by a jury under Virginia law. Writing for the majority, Justice Stevens overturned the sentence on the grounds that the punishment was disproportionate to the crime and therefore in violation of the prohibition on cruel and unusual punishment of the Eighth Amendment. Justice Stevens began by summarizing the Court's Eighth Amendment jurisprudence, noting that at its core, the amendment's prohibition on excessive punishment demands that "punishment for crime should be graduated and proportioned to the offense." Known as the proportionality principle, whether a punishment is excessive in relation to the crime is judged not by the views that prevailed when the Bill of Rights was adopted, but by the "evolving standards of decency that mark the progress of a maturing society." Stevens went on to cite long-standing Court precedent in concluding that the "clearest and most reliable objective evidence of contemporary values is the legislation enacted by the country's legislatures." The Court, both before and after Atkins, has utilized the actions of the state legislatures as a barometer of society's acceptance of the death penalty in a given scenario. In considering how state legislatures have approached the issue of imposing the death penalty on the mentally retarded, Justice Stevens identified a clear trend towards prohibiting the practice. Although a majority of states had not yet prohibited the use of capital punishment on the mentally retarded, Stevens noted "it is not so much the number of these states that is significant, but the consistency of the direction of change." After showing the clear trend among the states towards prohibiting the practice over the previous decade, Stevens concluded that imposing the death penalty on the mentally retarded had "become truly unusual, and it is fair to say that a national consensus has developed against it."
As a Justice, Stevens typically placed greater weight on current trends, rather than simply deferring to the position adopted by a majority of state legislatures. Stevens's opinion also determined that the social purposes served by the death penalty—retribution and deterrence—were not adequate to justify the execution of a mentally retarded criminal. Retribution, argued Stevens, is directly associated with culpability. As the "severity of the appropriate punishment necessarily depends on the culpability of the defendant," the Court has reserved the imposition of the death penalty for only the most serious crimes and the most culpable defendants. Stevens concluded that the lesser degree of culpability, due to the cognitive and behavioral impairments associated with the mentally retarded defendant, "surely does not merit that form of retribution." As to deterrence, Stevens noted that the diminished ability of the mentally retarded to "engage in logical reasoning, or to control impulses," defeated the purpose of creating a deterrent, as it was less likely that mentally retarded individuals could "process the information of the possibility of execution as a penalty and, as a result, control their conduct based upon that information." Stevens then concisely summarized his "narrowing" view of the death penalty, developed over more than 30 years of experience on the Court, in concluding: "[T]hus pursuant to our narrowing jurisprudence, which seeks to ensure that only the most deserving of execution are put to death, an exclusion for the mentally retarded is appropriate." In 2008, Justice Stevens abandoned his three-decade endeavor of attaining a narrowed, fair, and non-discriminatory capital punishment system. Rather, Stevens's position on the death penalty came full circle in Baze v. Rees as he cited back to the Court's decision in Furman in asserting that capital punishment was "patently excessive and cruel and unusual punishment violative of the Eighth Amendment." In Baze, the Court heard an Eighth Amendment challenge to Kentucky's multi-drug lethal injection procedure. Although a majority of the Court, including Justice Stevens, voted to uphold Kentucky's method of execution, it was Stevens's concurring opinion that drew the most attention. Picking up where he left off in Atkins, Stevens questioned the accepted rationales underlying the death penalty. His opinion openly attacked the legitimacy of the deterrence and retribution rationales, arguing that the value of both had dwindled and must now be "called into question." With respect to deterrence, Stevens noted that "despite 30 years of empirical research in the area, there remains no reliable statistical evidence that capital punishment in fact deters potential offenders." As to retribution, he pointed out that the relatively painless methods of execution required under the Eighth Amendment "actually undermine the very premise on which public approval of the retribution rationale is based"—that the offender suffer a punishment comparable to the suffering experienced by his victim. In confronting the Court's decisions, and his vote in the Gregg cases, Justice Stevens admitted that the Court "relied heavily on our belief that adequate procedures were in place that would avoid the danger of discriminatory application identified … in Furman." Stevens then pointed to three key failures of the Court's death penalty jurisprudence.
First, juries in capital cases do not represent a fair cross section of the community; rather, eliminating jurors who oppose the death penalty on moral grounds "is really a procedure that has the purpose and effect of obtaining a jury that is biased in favor of conviction." Second, the emotional impact of capital offenses and the often disturbing facts associated with those crimes lead to a greater risk of error in capital cases because "the interest in making sure the crime does not go unpunished may overcome residual doubt concerning the identity of the offender." Third, Justice Stevens highlighted the continued risk of a discriminatory application of the death penalty, noting that the Court has continued to allow race to play an "unacceptable role" in capital cases. Given what he considered the now unpersuasive rationales for the death penalty, and the inability of the Court and the states to cooperatively establish adequate procedural protections in capital cases, Justice Stevens, relying on his "own experience" and "extensive exposure" to death penalty cases, quoted the Court's decision in Furman in unequivocally concluding that the death penalty represents "the pointless and needless extinction of life with only marginal contributions to any discernible social or public purposes. A penalty with such negligible returns to the state [is] patently excessive and cruel and unusual punishment violative of the Eighth Amendment." Notwithstanding his clear position that the death penalty itself was unconstitutional, Stevens voted to uphold the Kentucky lethal injection statute. Citing a "respect [for] precedents that remain a part of our law," Stevens deferred to the Court's long-standing framework for evaluating the death penalty in light of the Eighth Amendment, and joined the Court in concluding that, under the existing framework, the evidence failed to show that the Kentucky statute was unconstitutional. From casting a vote that resurrected the use of the death penalty to becoming the only Justice thus far on the Roberts Court to formally question the death penalty's per se constitutionality, Justice Stevens's position on capital punishment has undergone significant changes during his tenure on the Court. However, his jurisprudence has been consistently guided by his belief in "fundamental fairness" and his recognition that, due to its special risks and irrevocable nature, "death is different." Although Stevens initially had hopes that the Court, working in cooperation with the states, could correct the imperfections of capital punishment, after more than 30 years of experience with death penalty cases, and a declining acceptance of the proffered justifications for the death penalty, Justice Stevens ultimately has questioned whether sentencing a defendant to death can ever be consistent with the Eighth Amendment's prohibition on "cruel and unusual" punishment. With his retirement imminent, the Court faces the loss of its most vocal death penalty opponent. | Justice Stevens's position on the death penalty has undergone a thorough transformation during his tenure on the Court. Although Stevens initially supported the imposition of the death penalty in accordance with adequately protective state-enacted guidelines, over the next 35 years the Justice has voted to narrow the application of the death penalty as he has become more skeptical of the punishment's underlying rationale and the states' ability to protect the rights of capital defendants.
In 2008, Justice Stevens's death penalty jurisprudence may have culminated with his concurring opinion in Baze v. Rees, in which the Justice unequivocally expressed his ultimate conclusion that the death penalty is itself unconstitutional. |
In the final three months of the US presidential campaign, the top-performing fake election news stories on Facebook generated more engagement than the top stories from major news outlets such as the New York Times, Washington Post, Huffington Post, NBC News, and others, a BuzzFeed News analysis has found. During these critical months of the campaign, 20 top-performing false election stories from hoax sites and hyperpartisan blogs generated 8,711,000 shares, reactions, and comments on Facebook. Within the same time period, the 20 best-performing election stories from 19 major news websites generated a total of 7,367,000 shares, reactions, and comments on Facebook. (This analysis focused on the top performing link posts for both groups of publishers, and not on total site engagement on Facebook. For details on how we identified and analyzed the content, see the bottom of this post. View our data here.) Up until those last three months of the campaign, the top election content from major outlets had easily outpaced that of fake election news on Facebook. Then, as the election drew closer, engagement for fake content on Facebook skyrocketed and surpassed that of the content from major news outlets.
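(To make the tallying behind these comparisons concrete, here is a minimal Python sketch. The per-story records and the helper function are hypothetical illustrations, not BuzzFeed's actual code or data; only the two aggregate totals are taken from the analysis above.)

```python
# Illustrative only: "engagement" for a Facebook link post is counted here as
# shares + reactions + comments, summed over a group's top-20 stories.

def total_engagement(stories):
    """Sum shares, reactions, and comments across a list of story records."""
    return sum(s["shares"] + s["reactions"] + s["comments"] for s in stories)

# Toy records showing the shape of the per-story data (numbers invented):
toy_stories = [
    {"shares": 500_000, "reactions": 350_000, "comments": 110_000},
    {"shares": 400_000, "reactions": 300_000, "comments": 90_000},
]
assert total_engagement(toy_stories) == 1_750_000

# Aggregate totals reported above for the final three months of the campaign:
fake_top20 = 8_711_000        # top 20 false election stories
mainstream_top20 = 7_367_000  # top 20 stories from 19 major news sites
print(f"Margin for fake news: {fake_top20 - mainstream_top20:,}")  # 1,344,000
```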
"I’m troubled that Facebook is doing so little to combat fake news," said Brendan Nyhan, a professor of political science at Dartmouth College who researches political misinformation and fact-checking. "Even if they did not swing the election, the evidence is clear that bogus stories have incredible reach on the network. Facebook should be fighting misinformation, not amplifying it."
A Facebook spokesman told BuzzFeed News that the top stories don't reflect overall engagement on the platform. "There is a long tail of stories on Facebook," the spokesman said. "It may seem like the top stories get a lot of traction, but they represent a tiny fraction of the total." He also said that native video, live content, and image posts from major news outlets saw significant engagement on Facebook. Of the 20 top-performing false election stories identified in the analysis, all but three were overtly pro-Donald Trump or anti-Hillary Clinton. Two of the biggest false hits were a story claiming Clinton sold weapons to ISIS and a hoax claiming the pope endorsed Trump, which the site removed after publication of this article. The only viral false stories during the final three months that were arguably against Trump's interests were a false quote from Mike Pence about Michelle Obama, a false report that Ireland was accepting American "refugees" fleeing Trump, and a hoax claiming RuPaul said he was groped by Trump.
This new data illustrates the power of fake election news on Facebook, and comes as the social network deals with criticism that it allowed false content to run rampant during the 2016 presidential campaign. CEO Mark Zuckerberg said recently it was "a pretty crazy idea" to suggest that fake news on Facebook helped sway the election. He later published a post saying, "We have already launched work enabling our community to flag hoaxes and fake news, and there is more we can do here."
This week BuzzFeed News reported that a group of Facebook employees have formed a task force to tackle the issue, with one saying that "fake news ran wild on our platform during the entire campaign season." The Wall Street Journal also reported that Google would begin barring fake news websites from its AdSense advertising program. Facebook soon followed suit. These developments follow a study by BuzzFeed News that revealed hyperpartisan Facebook pages and their websites were publishing false or misleading content at an alarming rate — and generating significant Facebook engagement in the process. The same was true for the more than 100 US politics websites BuzzFeed News found being run out of the Former Yugoslav Republic of Macedonia. This new analysis of election content found two false election stories from Macedonian sites that made the top-10 list in terms of Facebook engagement in the final three months. Conservative State published a story that falsely quoted Hillary Clinton as saying, "I would like to see people like Donald Trump run for office; they're honest and can't be bought." The story generated over 481,000 engagements on Facebook. A second false story from a Macedonian site falsely claimed that Clinton was about to be indicted. It received 149,000 engagements on Facebook. All the false news stories identified in BuzzFeed News' analysis came from either fake news websites that only publish hoaxes or from hyperpartisan websites that present themselves as publishing real news. The research turned up only one viral false election story from a hyperpartisan left-wing site. The story from Winning Democrats claimed Ireland was accepting anti-Trump "refugees" from the US. It received over 810,000 Facebook engagements, and was debunked by an Irish publication. (There was also one post from an LGBTQ site that used a false quote from Trump in its headline.) The other false viral election stories from hyperpartisan sites came from right-wing publishers, according to the analysis.
One example is the remarkably successful, utterly untrustworthy site Ending the Fed. It was responsible for four of the top 10 false election stories identified in the analysis: Pope Francis endorsing Donald Trump, Hillary Clinton selling weapons to ISIS, Hillary Clinton being disqualified from holding federal office, and the FBI director receiving millions from the Clinton Foundation. These four stories racked up a total of roughly 2,953,000 Facebook engagements in the three months leading up to Election Day.
Ending the Fed gained notoriety in August when Facebook promoted its story about Megyn Kelly being fired by Fox News as a top trending item. The strong engagement the site has seen on Facebook may help explain how one of its stories was featured in the Trending box. The site, which does not publicly list an owner or editor, did not respond to a request for comment from BuzzFeed News. Like several other hyperpartisan right-wing sites that scored big Facebook hits this election season, Ending the Fed is a relatively new website. The domain endingthefed.com was only registered in March. Yet according to BuzzFeed News' analysis, its top election content received more Facebook engagement than stories from the Washington Post and New York Times. For example, the top four election stories from the Post generated roughly 2,774,000 Facebook engagements — nearly 180,000 fewer than Ending the Fed's top four false posts. Ending the Fed's traffic ranking chart from Alexa also gives an indication of the massive growth the site experienced as the election drew close.
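(A quick arithmetic check on the Post comparison above, using only the rounded totals quoted in this article:)

```python
# Rounded Facebook engagement totals quoted above (both approximate).
ending_the_fed_top4 = 2_953_000   # four false election stories
washington_post_top4 = 2_774_000  # top four Post election stories
print(f"{ending_the_fed_top4 - washington_post_top4:,} fewer")  # 179,000 fewer, i.e. "nearly 180,000"
```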
A similar spike occurred for Conservative State, a site that was only registered in September. It saw its traffic rank on Alexa spike almost instantly.
Alexa estimates that nearly 30% of Conservative State's traffic comes from Facebook, with 10% coming from Google. Along with unreliable hyperpartisan blogs, fake news sites also received a big election traffic bump in line with their Facebook success. The Burrard Street Journal scored nearly 380,000 Facebook engagements for a fake story about Obama saying he will not leave office if Trump is elected. It was published in September, right around the time Alexa notched a noticeable uptick in the site's traffic ranking.
That site was only registered in April of this year. Its publisher disputes the idea that its content is aimed at misleading readers. "The BS Journal is a satire news publication and makes absolutely no secret of that or any attempt to purposely mislead our readers," he told BuzzFeed News. Large news sites also generated strong Facebook engagement for links to their election stories. But to truly find the biggest election hits from these 19 major sites, it's necessary to go back to early 2016. The three biggest election hits for these outlets came back in February, led by a contributor post on the Huffington Post's blog about Donald Trump that received 2,200,000 engagements on Facebook. The top-performing election news story on Facebook for the 19 outlets analyzed was also published that month by CBS News. It generated an impressive 1.7 million shares, reactions, and comments on Facebook. Overall, a significant number of the top-performing posts on Facebook from major outlets were opinion pieces, rather than news stories. The biggest mainstream hit in the three months prior to the election came from the Washington Post and had 876,000 engagements. Yet somehow Ending the Fed — a site launched just months earlier with no history on Facebook and likely a very small group of people running it — managed to get more engagement for a false story during that same period. "People know there are concerned employees who are seeing something here which they consider a big problem," a Facebook manager told BuzzFeed News this week. "And it doesn't feel like the people making decisions are taking the concerns seriously." ||||| SAN FRANCISCO (Reuters) - Alphabet Inc's Google (GOOGL.O) and Facebook Inc (FB.O) on Monday announced measures aimed at halting the spread of "fake news" on the internet by targeting how some purveyors of phony content make money: advertising.
Google said it is working on a policy change to prevent websites that misrepresent content from using its AdSense advertising network, while Facebook updated its advertising policies to spell out that its ban on deceptive and misleading content applies to fake news.
The shifts come as Google, Facebook and Twitter Inc (TWTR.N) face a backlash over the role they played in the U.S. presidential election by allowing the spread of false and often malicious information that might have swayed voters toward Republican candidate Donald Trump.
The issue has provoked a fierce debate within Facebook especially, with Chief Executive Mark Zuckerberg insisting twice in recent days that the site had no role in influencing the election.
Facebook’s steps are limited to its ad policies, and do not target fake news sites shared by users on their news feeds.
“We do not integrate or display ads in apps or sites containing content that is illegal, misleading or deceptive, which includes fake news,” Facebook said in a statement, adding that it will continue to vet publishers to ensure compliance.
Google’s move similarly does not address the issue of fake news or hoaxes appearing in Google search results. That happened in the last few days, when a search for ‘final election count’ for a time took users to a fake news story saying Trump won the popular vote. Votes are still being counted, with Democratic candidate Hillary Clinton showing a slight lead.
Nor does Google suggest that the company has moved to a mechanism for rating the accuracy of particular articles.
Rather, the change is aimed at assuring that publishers on the network are legitimate and eliminating financial incentives that appear to have driven the production of much fake news.
“Moving forward, we will restrict ad serving on pages that misrepresent, misstate, or conceal information about the publisher, the publisher’s content, or the primary purpose of the web property,” Google said in a statement.
The company did not detail how it would implement or enforce the new policy.
MACEDONIA NEWS
AdSense, which allows advertisers to place text ads on the millions of websites that are part of Google’s network, is a major source of money for many publishers.
A report in BuzzFeed News last month showed how tiny publishers in Macedonia were creating websites with fake news - much of it denigrating Clinton - which were widely shared on Facebook.
That sharing in turn led people to click on links which brought them to the Macedonian websites, which could then make money on the traffic via Google’s AdSense.
Facebook has been widely blamed for allowing the spread of online misinformation, most of it pro-Trump, but Zuckerberg has rejected the notion that Facebook influenced the outcome of the election or that fake news is a major problem on the service.
“Of all the content on Facebook, more than 99 percent of what people see is authentic,” he wrote in a blog post on Saturday. “Only a very small amount is fake news and hoaxes.”
Google has long had rules for its AdSense program, barring ads from appearing next to pornography or violent content. Work on the policy update announced on Monday began before the election, a Google spokeswoman said.
The company uses a combination of humans and artificial intelligence to review sites that apply to be a part of AdSense, and sites continue to be monitored after they are accepted, a former Google employee who worked on ad systems said. Google’s artificial intelligence systems learn from sites that have been removed from the program, speeding the removal of similar sites.
The issue of fake news is critical for Google from a business standpoint, as many advertisers do not want their brands to be touted alongside dubious content. Google must constantly hone its systems to try to stay one step ahead of unscrupulous publishers, the former employee said.
Google has not said whether it believes its search algorithms, or its separate system for ranking results in the Google News service, also need to be modified to cope with the fake news issue.
Fil Menczer, a professor of informatics and computing at Indiana University who has studied the spread of misinformation on social media, said Google’s move with AdSense was a positive step.
“One of the incentives for a good portion of fake news is money,” he said. “This could cut the income that creates the incentive to create the fake news sites.”
However, he cautioned that detecting fake news sites was not easy. "What if it is a site with some real information and some fake news? It requires specialized knowledge and having humans (do it) doesn't scale," he said. |||||
What do the Amish lobby, gay wedding vans and the ban of the national anthem have in common? For starters, they’re all make-believe — and invented by the same man.
Paul Horner, the 38-year-old impresario of a Facebook fake-news empire, has made his living off viral news hoaxes for several years. He has twice convinced the Internet that he’s British graffiti artist Banksy; he also published the very viral, very fake news of a Yelp vs. “South Park” lawsuit last year.
But in recent months, Horner has found the fake-news ecosystem growing more crowded, more political and vastly more influential: In March, Donald Trump’s son Eric and his then-campaign manager, Corey Lewandowski, even tweeted links to one of Horner’s faux-articles. His stories have also appeared as news on Google.
In light of concerns that stories like Horner’s may have affected the presidential election, and in the wake of announcements that both Google and Facebook would take action against deceptive outlets, Intersect called Horner to discuss his perspective on fake news. This transcript has been edited for clarity, length and — ahem — bad language.
You’ve been writing fake news for a while now — you’re kind of like the OG Facebook news hoaxer. Well, I’d call it hoaxing or fake news. You’d call it parody or satire. How is that scene different now than it was three or five years ago? Why did something like your story about Obama invalidating the election results (almost 250,000 Facebook shares, as of this writing) go so viral?
Honestly, people are definitely dumber. They just keep passing stuff around. Nobody fact-checks anything anymore — I mean, that’s how Trump got elected. He just said whatever he wanted, and people believed everything, and when the things he said turned out not to be true, people didn’t care because they’d already accepted it. It’s real scary. I’ve never seen anything like it.
You mentioned Trump, and you’ve probably heard the argument, or the concern, that fake news somehow helped him get elected. What do you make of that?
My sites were picked up by Trump supporters all the time. I think Trump is in the White House because of me. His followers don’t fact-check anything — they’ll post everything, believe anything. His campaign manager posted my story about a protester getting paid $3,500 as fact. Like, I made that up. I posted a fake ad on Craigslist.
Why? I mean — why would you even write that?
Just ’cause his supporters were under the belief that people were getting paid to protest at their rallies, and that’s just insane. I’ve gone to Trump protests — trust me, no one needs to get paid to protest Trump. I just wanted to make fun of that insane belief, but it took off. They actually believed it.
I thought they’d fact-check it, and it’d make them look worse. I mean that’s how this always works: Someone posts something I write, then they find out it’s false, then they look like idiots. But Trump supporters — they just keep running with it! They never fact-check anything! Now he’s in the White House. Looking back, instead of hurting the campaign, I think I helped it. And that feels [bad].
You think you personally helped elect Trump?
I don’t know. I don’t know if I did or not. I don’t know. I don’t know.
I guess I’m curious, if you believed you might be having an unfair impact on the election — especially if that impact went against your own political beliefs — why didn’t you stop? Why keep writing?
I didn’t think it was possible for him to get elected president. I thought I was messing with the campaign, maybe I wasn’t messing them up as much as I wanted — but I never thought he’d actually get elected. I didn’t even think about it. In hindsight, everyone should’ve seen this coming — everyone assumed Hillary [Clinton] would just get in. But she didn’t, and Trump is president.
Speaking of Clinton — did you target fake news at her supporters? Or Gary Johnson’s, for that matter? (Horner’s Facebook picture shows him at a rally for Johnson.)
No. I hate Trump.
Is that it? You posted on Facebook a couple weeks ago that you had a lot of ideas for satirizing Clinton and other figures, but that “no joke . . . in doing this for six years, the people who clicked ads the most, like it’s the cure for cancer, is right-wing Republicans.” That makes it sound like you’ve found targeting conservatives is more profitable.
Yeah, it is. They don’t fact-check.
But a Trump presidency is good for you from a business perspective, right?
It’s great for anybody who does anything with satire — there’s nothing you can’t write about now that people won’t believe. I can write the craziest thing about Trump, and people will believe it. I wrote a lot of crazy anti-Muslim stuff — like about Trump wanting to put badges on Muslims, or not allowing them in the airport, or making them stand in their own line — and people went along with it!
Facebook and Google recently announced that they’d no longer let fake-news sites use their advertising platforms. I know you basically make your living from those services. How worried are you about this?
This whole Google AdSense thing is pretty scary. And all this Facebook stuff. I make most of my money from AdSense — like, you wouldn’t believe how much money I make from it. Right now I make like $10,000 a month from AdSense.
I know ways of getting hooked up under different names and sites. So probably if they cracked down, I would try different things. I have at least 10 sites right now. If they crack down on a couple, I'll just use others. They could shut down advertising on all my sites, and I think I'd be okay. Plus, Facebook and AdSense make too much money from [advertising on fake news sites] to just get rid of it. They'd lose a lot of money.
But if it did really go away, that would suck. I don’t know what I would do.
Thinking about this less selfishly, though — it might be good if Facebook and Google took action, right? Because the effects you’re describing are pretty scary.
Yeah, I mean — a lot of the sites people are talking about, they’re just total BS sites. There’s no creativity or purpose behind them. I’m glad they’re getting rid of them. I don’t like getting lumped in with Huzlers. I like getting lumped in with the Onion. The stuff I do — I spend more time on it. There’s purpose and meaning behind it. I don’t just write fake news just to write it.
So, yeah, I see a lot of the sites they’re listing, and I’m like — good. There are so many horrible sites out there. I’m glad they’re getting rid of those sites.
I just hope they don’t get rid of mine, too.
| No, the pope didn't endorse Donald Trump, and, no, Hillary Clinton didn't sell weapons to ISIS. But those fake stories and others like them spread more widely on Facebook than actual news stories before the election, a new BuzzFeed analysis reveals. Specifically, the top 20 fake election stories racked up 8.7 million shares, reactions, and comments in the final three months of the election versus 7.4 million for stories from the likes of the New York Times and the Washington Post. The trend accelerated as Election Day drew near, and all but three of the 20 top performers were pro-Trump or anti-Clinton stories. "I'm troubled that Facebook is doing so little to combat fake news," Dartmouth political science professor Brendan Nyhan tells BuzzFeed. That may be changing. While Mark Zuckerberg initially dismissed the idea that fake news might have played a role in election results, he subsequently acknowledged that Facebook could do more about the problem. Since then, both Facebook and Google have moved to restrict such stories via ads, including Google barring fake websites from using its AdSense advertising program, reports Reuters. The Washington Post, meanwhile, interviews Paul Horner, one of the leading purveyors of fake stories, who says that "people are definitely dumber. They just keep passing stuff around. Nobody fact-checks anything." And he adds this line sure to upset Clinton supporters: "I think Trump is in the White House because of me."
BOSTON (AP) — A six-car train with passengers on board that left a suburban Boston transit station without a driver Thursday and went through four stations without stopping was tampered with, Massachusetts Gov. Charlie Baker said.
None of the approximately 50 passengers was hurt, but the train's operator suffered a minor injury when he was brushed by the train, apparently as it began to move at the Braintree station, a spokesman for the Massachusetts Bay Transportation Authority said.
The above-ground Red Line train departed Braintree Station — the southernmost stop of the line — shortly after 6 a.m. without an operator and traveled north toward Boston, a statement from the MBTA said.
MBTA operations eventually disabled the train and brought it to a stop by cutting off power to the electrified third rail, officials said. An initial investigation indicated that a safety device within the train's cab may have been tampered with.
"This train was tampered with, and it was tampered with by somebody who knew what they were doing," Baker said during an interview on Boston Herald Radio.
Baker called it an "isolated" incident and said MBTA passengers should not be concerned.
Transit personnel boarded the train after it was stopped and drove it north to the JFK/UMass stop, where passengers disembarked. The train was taken out of service and brought to a maintenance facility in Boston, where an investigation is under way, according to Joe Pesaturo, spokesman for the transit agency.
Passengers are among those being interviewed, the T said.
Kristen Setera, a spokeswoman for the Boston office of the FBI, said in an email that the agency was aware of the incident and was in contact with transit police, but provided no other information.
Pesaturo said an initial examination showed no problems with the "functionality" of the train's equipment. ||||| BOSTON (CBS) — MBTA officials say the investigation into the runaway Red Line train Thursday morning is focused primarily on operator error.
“We failed our passengers today,” said Transportation Secretary Stephanie Pollack.
Pollack said the operator was initially unable to start the train at Braintree Station and received clearance to put the train into bypass mode. The operator then exited the train, which began rolling with about 50 passengers on board.
Pollack said a full-service brake and hand brake are required to be engaged before a train goes into bypass mode, and that it was unclear if both had been engaged before the operator left the train.
Pollack said the incident “represents an unacceptable breach of our responsibility to keep our riders safe.”
The 6:08 a.m. inbound train traveled through four stations. The train was brought to a halt just past North Quincy Station, when crews powered down the third rail.
At that point, T employees boarded the train, driving it to the JFK/UMass stop to allow passengers to exit. The train was taken out of service and examined.
Pollack said that it took about nine minutes after the incident was reported to stop the train.
Train operator David Vazquez suffered minor injuries after he exited the train and was struck. He has been placed on administrative leave pending the outcome of the investigation. Vazquez, 51, has been with the MBTA for more than 25 years.
No passengers were injured.
Only one operator is on each train. The Boston Carmen’s Union released a statement saying “Creating extra precautions and having a second employee, such as a train attendant or guard, assigned to these trains could have avoided this incident.”
Pollack says the Red Line previously had two operators on each train, but “if safety procedures are followed properly, there is no safety problem with operating trains with a single operator.”
A person with knowledge of the trains told WBZ-TV’s Lauren Leamanczyk this was a “very dangerous situation for passengers.”
Initial indications were that a safety device inside the train’s cab may have been tampered with.
“At this point we believe this was an isolated incident,” Gov. Charlie Baker said in a press conference in Plymouth.
Baker added that an inspection of the train found the controls had been “manipulated.”
“The discussion that’s going to take place on our end is negligence versus something else,” Baker said.
The FBI confirmed that it is aware of the incident and has been in contact with Transit Police. The Federal Transit Administration is sending an investigator to participate in the investigation being led by the Massachusetts Department of Public Utilities.
Passenger Fernanda Daly told WBZ-TV’s Beth Germano that when the lights went out on the train, riders knocked on the booth but found no conductor inside.
"The whole train started going slow, the lights went off and everything just stopped down between Quincy and JFK and we stayed there for about 30 minutes," she said.
“It was all dark, everything was quiet. It was just us. We had no idea what was going on,” Daly said.
Some people attempted to break windows, while others attempted to pry open the doors, according to Daly.
Similar Incidents Around The Country
So how often does a public transit train take off with no one at the controls?
WBZ sent an inquiry to the Federal Transit Administration. Late Thursday, a spokesman for the agency provided two recent examples.
In September 2013, an unmanned train belonging to the Chicago Transit Authority Heavy Rail left the yard and entered a live track. It ended up colliding with a train carrying passengers. The crash injured 33 passengers along with the operator of the train. The incident was blamed on a maintenance issue.
The other example provided sounds very similar to what MassDOT officials described in Boston.
This past February, an unmanned train for the Sacramento Regional Transit District Light Rail left the yard after a mechanic bypassed the deadman safety control while troubleshooting a problem. The mechanic then stepped off the train, which took off. The train derailed, then reconnected with the tracks before eventually coming to a stop. The incident resulted in $70,000 in property damage.
The big difference between those examples and what happened on the MBTA Red Line train on Thursday: neither of those unmanned trains was carrying passengers.
WBZ-TV’s Ryan Kath contributed to this report. ||||| Globe Staff
The Red Line train out of Braintree Station had already blown through three stops when the lights flickered out and the wheels slowly rolled to a stop.
Yet not a word of explanation had come from the conductor — for a very good reason.
When passengers looking for answers forced open the door to the operator’s cabin, nobody was there.
“We were all kind of, like: ‘What happened? Where is this guy?’ ” said Karrie Mohammed. By that time, she said, one passenger was in tears.
“We’re kind of — at this point — freaking out.”
In another embarrassment for the MBTA, a six-car, 420-foot Red Line train departed Braintree station Thursday morning with no one at the controls. The driver got out to deal with what he said was a signal problem, and his ride took off without him, officials said.
Governor Charlie Baker said the train controls "had been manipulated, which was why the train moved without a person controlling it." What needs to be determined, Baker said, is whether the incident was because of negligence or something else.
Pulling out of Braintree around 6:08 a.m., the runaway train carrying about 50 people passed without stopping through the Quincy Adams, Quincy Center, and Wollaston stations on a 9-minute and more than 5-mile trip, before MBTA officials managed to stop it past North Quincy Station by cutting power to the third rail.
“We failed our passengers today,” Secretary of Transportation Stephanie Pollack said in an afternoon press conference that also failed to explain how, precisely, the MBTA lost its train.
Pollack said an investigation is focused on operator error but declined to give details or to name the driver at the press conference, though the T identified him as a veteran with more than 25 years of service. He suffered minor injuries when brushed by the train, T officials said. Another official with knowledge of the incident said the driver’s name is David Vazquez. No passengers were hurt, the MBTA said.
The runaway train caused significant commuter delays, which the T did not fully explain for several hours.
At 6:22 a.m., the MBTA Twitter account reported Red Line delays “due to a power issue.” No mention that officials had shut off the power to stop a runaway train with no operator. The T then tweeted an update, attributing delays to a “disabled train.”
“We dispatched the best information that we had at the time,” a T spokesman said Thursday night.
Bradley H. Clarke, a transit historian and president of the Boston Street Railway Association, said Red Line trains like the one that went rogue Thursday are operated in the cab by a device called a Cineston controller, which combines the accelerator, brake, and a “dead man’s” safety feature all in one lever.
“It’s called a dead-man’s controller, the idea being that if the operator dies at the wheel, he’ll relax his grip on the control handle, and the handle will pop up and stop the train,” Clarke said.
Though newer Red Line cars have a different system, he said, Cineston controllers have been in continuous use in transit systems for more than 70 years.
“They’re very reliable, very, very safe.”
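(A minimal sketch of the dead-man principle Clarke describes, written in Python for illustration; this is hypothetical logic, not MBTA or Cineston firmware: propulsion is permitted only while the operator actively holds the combined control lever, and a released lever commands a stop.)

```python
# Hypothetical illustration of a dead-man's controller: the train can only
# draw traction power while the handle is actively held down.

class DeadMansController:
    def __init__(self):
        self.throttle = 0.0

    def update(self, handle_held: bool, throttle_request: float) -> str:
        if not handle_held:           # grip relaxed: handle pops up
            self.throttle = 0.0
            return "EMERGENCY_BRAKE"  # brakes apply, train stops
        self.throttle = max(0.0, min(1.0, throttle_request))
        return "TRACTION" if self.throttle > 0 else "HOLD"

ctl = DeadMansController()
assert ctl.update(True, 0.4) == "TRACTION"          # operator holding handle
assert ctl.update(False, 0.4) == "EMERGENCY_BRAKE"  # nobody at the controls
```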
Pollack declined to answer a question about whether the controller had been tampered with.
Many passengers waiting for the Red Line Thursday afternoon said they hadn’t heard about the runaway train, but their eyes grew wide when a reporter explained what had happened.
“Who would do that?” wondered Ronald Rae, waiting for a train at North Quincy station. “That’s pretty insane.”
Rider John Sweeney said the incident suggests something at the MBTA needs to be fixed, either at the highest levels or in the trenches.
“Something’s not working right, and I don’t quite know what it is,” Sweeney said.
Marjorie Donahue, waiting for a train at Wollaston Station, recalled the T’s service meltdown last winter.
“They don’t have very good reputations, and they’re making it worse,” she said. “If they can’t even handle this situation, then how are they going to handle a major blizzard?”
Officials explained Thursday that while the train was parked in Braintree, the driver reported a signal problem and requested permission to put his train into “bypass mode,” which allows the train to move even if it has not received the right signal. Trains occasionally operate in this mode, Pollack said, saying that it is safe under the proper procedures.
To enter bypass mode, the driver had to leave the train to throw a toggle switch, said Jeff Gonneville, the MBTA’s chief operating officer. That was when the train left without him. Gonneville said MBTA procedures require operators to set two brakes before leaving the train. Pollack would not say whether the brakes were set, saying that was a matter for the ongoing investigations.
The MBTA, the Transit Police, the Department of Public Utilities, and the Federal Transit Administration are investigating, she said.
Stuart Spina, a member of the T Riders Union and a transportation researcher, said the bypass is a common move requested by subway operators to avoid having to sit excessively at a station in the event of a signal failure.
“It’s a pretty regular procedure, but this is the first time I’ve ever heard of a mishap like this happening,” he said.
Spina said he was mystified how the train could have accelerated without the operator in the cab, especially on a flat section of track.
“You need a person to push the throttle to move it,” he said. “That’s really the freakiest part of the whole thing, is how on earth could the train start moving?”
Moments after the train made its getaway, the driver reported the incident to an MBTA official at Braintree station, who immediately notified the MBTA’s Operations Control Center.
“We knew within 60 seconds, give or take 10 seconds,” Gonneville told reporters Thursday.
While the train was in bypass mode, the system’s automatic collision avoidance features would not have worked, he said. The T cleared other trains on the track, and then cut the power, he said.
Pollack said that trains in bypass mode are not supposed to be able to exceed 25 miles per hour, and that the investigation will try to determine how fast the train was going.
Toward the rear of the train, Sarah Sweeney, commuting to her job as a dental assistant downtown, had been scrolling through Facebook on her phone when the train stopped. With the lights out and the cold creeping in, the regular riders around her thought it was just another day on the oft-troubled line.
“We were actually joking about wishing we had coffee,” she said. “It just seemed like a normal Red Line problem. Luckily, no one in my car panicked, because I’m a panicker.”
Once the train stopped near North Quincy, T personnel boarded it and brought it to JFK/UMass Station, where passengers got off.
The T workers who entered Sweeney’s car were more upset than she was, she said. One “came crashing through,” asking whether everyone was OK.
The train was taken out of service and brought to a Red Line maintenance facility in South Boston.
The head of the MBTA’s largest labor union said the runaway train could have been stopped sooner if the MBTA had two workers aboard — as was the standard until a few years ago.
“If there was a second employee on the train, they would have . . . been equipped with the knowledge and ability to bring this train to a safe stop,” said James O’Brien, president of the Boston Carmen’s Union.
Waiting for the Red Line Thursday afternoon at Quincy Center, student Alex Feng, 16, put his head in his hand when he learned what had happened on the line that morning.
“I’ve got to tell my friends about that, because it’s absolutely insane,” said Feng, who said he takes the line every day to school.
Feng said he often sees trains zoom past with drivers but no passengers, but never the other way around.
Andy Rosen, Nicole Dungca, Laura Krantz, Matt Rocheleau, and Steve Annear of the Globe staff and Globe correspondent Alexandra Koktsidis contributed to this report. | About 50 Boston commuters got the creepiest ride of their lives Thursday morning when their train took off without a driver. MBTA authorities were eventually able to stop the train, but not before it went through four stations, reports the Boston Globe. None of the passengers were hurt. How it happened is still unclear, but Gov. Charlie Baker says it was no accident: "This train was tampered with, and it was tampered with by somebody who knew what they were doing," he told Boston Herald Radio, per the AP. Things seemed to go awry when the sole operator got out at the Braintree station to check on some kind of signal issue, and the train left the station without him. The 51-year-old operator was brushed by the train at Braintree as it began to move and suffered minor injuries. A woman aboard the train tells WBZ that passengers knocked on the conductor's booth after the lights went out, only to find it empty. "The whole train started going slow," she recalls. "It was all dark, everything was quiet. It was just us. We had no idea what was going on." MBTA officials shut down power to the track's third rail, and the train finally came to a halt outside the North Quincy station after 6am. The six-car train is now out of service, with initial reports suggesting that somebody tampered with a safety device within the driver's cab, allowing it to move forward without an operator. The FBI says only that it is "aware of the incident."
Section 103(a) of the Patent Act provides one of the statutory bars to patentability of inventions: a patent claim will be considered invalid if "the differences between the subject matter sought to be patented and the prior art are such that the subject matter as a whole would have been obvious at the time the invention was made to a person having ordinary skill in the art to which said subject matter pertains." In other words, for the subject matter of an alleged invention or discovery to be patentable, it must be "nonobvious" at the time of its creation. The nonobviousness requirement is met if the subject matter claimed in a patent application is beyond the ordinary abilities of a person of ordinary skill in the art in the appropriate field. In the landmark 1966 case Graham v. John Deere Co. of Kansas City, the Supreme Court established an analytic framework for courts to determine "nonobviousness." The so-called Graham test describes several factors that must be assessed: While the ultimate question of patent validity is one of law ... the § 103 condition, which is but one of three conditions, each of which must be satisfied, lends itself to several basic factual inquiries. Under § 103, the scope and content of the prior art are to be determined; differences between the prior art and the claims at issue are to be ascertained; and the level of ordinary skill in the pertinent art resolved. Against this background, the obviousness or nonobviousness of the subject matter is determined. Such secondary considerations as commercial success, long felt but unsolved needs, failure of others, etc., might be utilized to give light to the circumstances surrounding the origin of the subject matter sought to be patented. As indicia of obviousness or nonobviousness, these inquiries may have relevancy. While a single prior art reference could form the basis of a finding of obviousness, multiple prior art references are often involved in the analysis. In such a situation, the U.S. Court of Appeals for the Federal Circuit (Federal Circuit) had developed an approach in which an invention would be considered obvious only if there was an explicit or implicit "teaching, suggestion, or motivation" that would lead a person of ordinary skill to combine multiple prior art references to produce an invention. Such a "teaching, suggestion, or motivation" (TSM) could have come from either (1) the references themselves, (2) knowledge of those skilled in the art, or (3) the nature of a problem to be solved, leading inventors to look to references relating to possible solutions to that problem. Because § 103 of the Patent Act requires that an invention's obviousness be determined from the standpoint of a person having ordinary skill in the art "at the time the invention was made," the TSM test was designed, in part, to defend against "the subtle but powerful attraction of a hindsight-based obviousness analysis." The patents at issue in KSR International v. Teleflex pertain to an adjustable pedal system (APS) for use with automobiles having electronic throttle-controlled engines. Teleflex Inc. holds an exclusive license for the patent on this device, which allows a driver to adjust the location of a car's gas and brake pedals so that they may reach the driver's foot. KSR International Co. also manufactures an adjustable pedal assembly. Initially, KSR supplied APS for cars with engines that use cable-actuated throttle controls; thus, the APS that KSR manufactured included cable-attachment arms.
In mid-2000, KSR designed its APS to incorporate an electronic pedal position sensor in order for it to work with electronically controlled engines, which are being increasingly used in automobiles. In 2002, Teleflex filed a patent infringement lawsuit against KSR after KSR had refused to enter into a royalty arrangement, asserting that this new design came within the scope of its patent claims. In defense, KSR argued that Teleflex's patents were invalid because they were obvious under § 103(a) of the Patent Act—that someone with ordinary skill in the art of designing pedal systems would have found it obvious to combine an adjustable pedal system with an electronic pedal position sensor for it to work with electronically controlled engines. The U.S. District Court for the Eastern District of Michigan agreed with KSR that the patent was invalid for obviousness, granting summary judgment in favor of KSR. The court determined that there was "little difference between the teachings of the prior art and claims of the patent-in-suit." Furthermore, the court opined that "it was inevitable" that APS would be combined with an electronic device to work with electronically controlled engines. Teleflex appealed the decision to the Federal Circuit. The appellate court vacated the district court's ruling, after finding that the district court had made errors in its obviousness determination. Specifically, the Federal Circuit noted that the district court had improperly applied the TSM test by not adhering to it more strictly—the district court had reached its obviousness ruling "without making findings as to the specific understanding or principle within the knowledge of a skilled artisan that would have motivated one with no knowledge of [the] invention to make the combination in the manner claimed." The Federal Circuit explained that district courts are "required" to make such specific findings pursuant to Federal Circuit case law establishing the TSM standard. In regard to the patent in the case, the appellate court found that the prior art in adjustable pedal design had been focused on solving the "constant ratio problem" (described as when "the force required to depress the pedal remains constant irrespective of the position of the pedal on the assembly"); whereas the motivation behind the patented invention licensed to Teleflex was "to design a smaller, less complex, and less expensive electronic pedal assembly." In the Federal Circuit's view, unless the "prior art references address the precise problem that the patentee was trying to solve," the problem would not motivate a person of ordinary skill in the art to combine the prior art teachings—here, the placement of an electronic sensor on an adjustable pedal. The Supreme Court granted certiorari on June 26, 2006, to review the KSR case, in which the central question before the Court was whether the Federal Circuit had erred in crafting TSM as the sole test for obviousness under § 103(a) of the Patent Act. On April 30, 2007, the Court unanimously reversed the Federal Circuit's judgment, holding that the TSM test for obviousness was incompatible with § 103 and Supreme Court precedents. Associate Justice Anthony Kennedy, delivering the opinion of the Court, explained that the proper framework for a court or patent examiner to employ when determining an invention's obviousness is that set forth in the Court's 1966 opinion Graham v. John Deere Co. of Kansas City. 
That analytical framework provides "an expansive and flexible approach" to the question of obviousness that the "rigid" and "mandatory" TSM formula does not offer. Justice Kennedy observed that the Graham approach, as further developed in three subsequent Supreme Court cases decided within ten years of that case, is based on several instructive principles for determining the validity of a patent based on the combination of elements found in the prior art: When a work is available in one field of endeavor, design incentives and other market forces can prompt variations of it, either in the same field or a different one. If a person of ordinary skill can implement a predictable variation, it is likely obvious under § 103 and unpatentable. If a technique has been used to improve one device, and a person of ordinary skill in the art would recognize that it would improve similar devices in the same way, using the technique is obvious unless its actual application is beyond his or her skill. Justice Kennedy then provided additional guidance for courts in following these principles. To determine whether there was an apparent reason to combine the known elements in the manner claimed by the patent at issue, courts should explicitly engage in an analysis that considers the following elements: the interrelated teachings of multiple patents, the effects of demands known to the design community or present in the marketplace, and the background knowledge possessed by a person having ordinary skill in the art. He further explained that a court should not solely take into account the "precise teachings" of the prior art, but rather can consider the "inferences and creative steps" that a person of ordinary skill in the art would likely use. The Federal Circuit's TSM test, and its mandatory application, is contrary to Graham and its progeny because it limits the obviousness analysis and is too formalistic, Justice Kennedy argued. In addition, he believed that the TSM test hindered the ability of courts and patent examiners to rely upon "common sense." In dicta, the Court's opinion appears to imply that the TSM test could have contributed to issued patents or unsuccessful challenges to the validity of certain patents that do not reflect true innovation: "Granting patent protection to advances that would occur in the ordinary course without real innovation retards progress and may, for patents combining previously known elements, deprive prior inventions of their value or utility." Finally, Justice Kennedy criticized the Federal Circuit for "overemphasizing the importance of published articles and the explicit content of issued patents." However, Justice Kennedy allowed that TSM provides "a helpful insight"—that a patent comprised of several elements is not obvious just because each of those elements was, independently, known in the prior art. This "essence" of the TSM test is not necessarily inconsistent with the Graham analysis, and thus he predicted that the Federal Circuit has likely applied the TSM test on many occasions in ways that accord with the Graham principles. It is the Federal Circuit's rigid application of its TSM rule, however, that the Court deemed was problematic in this case. Justice Kennedy identified four specific legal errors committed by the Federal Circuit. First, the appellate court had held that courts and patent examiners should look only to the problem the patentee was trying to solve, rather than other problems addressed by the patent's subject matter. 
Second, the appellate court had assumed that a person of ordinary skill trying to solve a particular problem will be led only to those elements of prior art designed to solve the same problem; however, "common sense teaches ... that familiar items may have obvious uses beyond their primary purposes, and in many cases a person of ordinary skill will be able to fit the teachings of multiple patents together like pieces of a puzzle." The lower court's third error was its conclusion that a patent claim cannot be proved obvious by showing that the combination of elements was "obvious to try"; instead, Justice Kennedy noted, "the fact that a combination was obvious to try might show that it was obvious under § 103." The final error was the Federal Circuit's adherence to "rigid preventative rules" to avoid the risk of hindsight bias on the part of courts and patent examiners, because such rules "deny factfinders recourse to common sense." As to the specific patent claim at issue in this case, the Court adopted the obviousness analysis of the district court and expressly held that the claim "must be found obvious" in light of the prior art. The KSR decision may generate litigation over the validity of some patents issued and upheld under the Federal Circuit's TSM standard; the uncertainty over the enforceability of certain patents thus has ramifications for lawsuits between alleged patent infringers and patent holders, as well as between patentees and their licensees (for example, a patent licensee may want to challenge the validity of the patent to avoid paying royalties or even the imposition of an injunction). While the KSR Court rejected TSM as the sole test for obviousness, the Court did not expressly invalidate it either. Instead, the Supreme Court explained that courts and patent examiners, in evaluating a patent's claimed subject matter for obviousness under § 103, must use common sense, ordinary skill, and ordinary creativity in applying the Graham factors and principles to the specific facts of the case. | The Patent Act provides protection for processes, machines, manufactures, and compositions of matter that are useful, novel, and nonobvious. Of these three statutory requirements, the nonobviousness of an invention is often the most difficult to establish. To help courts and patent examiners make the determination, the U.S. Court of Appeals for the Federal Circuit developed a test called "teaching, suggestion, or motivation" (TSM). This test provided that a patent claim is only proved obvious if the prior art, the nature of the problem to be solved, or the knowledge of those skilled in the art reveals some motivation or suggestion to combine the prior art teachings. In KSR International Co. v. Teleflex Inc. (550 U.S. ___, No. 04-1350, decided April 30, 2007), the U.S. Supreme Court held that the TSM test, if it is applied by district courts and patent examiners as the sole means to determine the obviousness of an invention, is contrary to Section 103 of the Patent Act and to Supreme Court precedents that call for an expansive and flexible inquiry, including Graham v. John Deere Co. of Kansas City, 383 U.S. 1 (1966).
Randy Morehouse, the maintenance and operations supervisor for the Corning Union Elementary School District, points to one of the bullet holes at the Rancho Tehama Elementary School, Wednesday, Nov. 15, 2017, from gunman Kevin Janson Neal's shooting rampage at Rancho Tehama Reserve, Calif., Tuesday.... (Associated Press)
RANCHO TEHAMA RESERVE, Calif. (AP) — Police on Wednesday called the deadly shooting rampage in California a clear case of "a madman on the loose" while defending their decision not to arrest the man for previously violating a court order prohibiting him from having guns.
At a tense news conference, police conceded that neighbors had repeatedly complained about Kevin Neal firing hundreds of rounds from his house, among other erratic and violent behavior.
Tehama County Assistant Sheriff Phil Johnston said authorities responded to neighbors' calls several times, but the 44-year-old Neal wouldn't open the door, so they left.
"He was not law enforcement friendly. He would not come to the door," Johnston said. "You have to understand we can't anticipate what people are going to do. We don't have a crystal ball."
On Tuesday, Neal shot and killed five people and wounded at least eight others at different locations around the rural community of Rancho Tehama Reserve. Police later shot and killed him.
Asked about Neal's motive, Johnston responded: "Madman on the loose. The case is remarkably clear. We will move forward and we will start the healing process."
The evidence that emerged Wednesday, however, along with residents' statements, raised questions about whether lawlessness was occasionally tolerated.
Neal was also known to have violent squabbles with his neighbors and his wife.
Police found the bullet-riddled body of Neal's wife stuffed under the floorboards of their home. They believe her slaying was the start of the rampage.
"We are confident that he murdered her," Johnson said.
Neal then shot two of his neighbors in an apparent act of revenge before he went looking for random victims at different locations that included the community's elementary school. All those killed were adults but authorities have said that children were among the wounded.
At the time of the attack, Neal was out of custody on bail after being charged in January with stabbing one of the neighbors he later killed in the rampage.
After the January assault, a judge barred Neal from having guns, according to court records.
The records also show that Neal was charged with illegally firing a weapon and possessing an illegal assault rifle on Jan. 31.
He was charged with five felonies and two misdemeanors. As part of a protective order that barred him from "owning, possessing, purchasing or attempting to purchase firearms," Neal was ordered to stay away from the two female neighbors he had threatened.
The neighbor he was accused of stabbing obtained a restraining order against him in February, writing to the court that Neal fired guns to scare people in her house and alleging that he was "very unpredictable and unstable" and that he had "anger issues," according to court documents.
The gunman's sister, Sheridan Orr, said her brother had struggled with mental illness throughout his life and at times had a violent temper.
She said Neal had "no business" owning firearms.
At Wednesday's news conference, Johnston initially said Neal "was not prohibited from owning firearms" but later acknowledged the protective order against him.
Records show Neal certified that he surrendered his weapons in February, but Johnston said Wednesday they had recovered two homemade assault rifles and two handguns registered to someone else.
Laurie Levenson, a Loyola Law School professor, said police officers don't need to be eyewitnesses to take action when a person is suspected of violating a restraining order.
"You can have probable cause even if officers don't see a gun or hear shots," she said. "They do not have to see the suspect with the weapon if all the circumstantial evidence indicates that he is violating the orders."
Levenson said officers don't even need a warrant to search a suspect's home if they believe the caller and the evidence they are hearing and collecting indicate the suspect is firing a gun.
"If an officer believes there is someone with a weapon who is not entitled to have a weapon, the law permits the officer to go in," she said.
Johnston said that during the rampage, which lasted 25 minutes, the gunman spent about six minutes shooting into Rancho Tehama Elementary School before driving off to keep shooting elsewhere.
Police said surveillance video shows the shooter unsuccessfully trying to enter the school after quick-thinking staff members locked the outside doors and barricaded themselves inside when they heard gunshots.
Witnesses reported hearing gunshots and children screaming at the school, which has about 100 students from kindergarten through fifth grade.
A heroic school custodian rushed children inside, yelling "get into the classrooms" before Neal could reach them, said Corning Union Elementary School District maintenance department head Randy Morehouse.
"At that point he was able to get everyone inside so there was no one left on the blacktop," Morehouse said. "He's an absolute hero."
The shooter "tried and tried and tried and tried to get into the kindergarten door," he said.
Six-year-old Alejandro Hernandez was in his classroom when one of Neal's bullets came through the window and hit him in the chest.
His aunt, Rosa A. Monroy, said he was at University of California, Davis, Medical Center in Sacramento awaiting surgery on his foot. It's not clear when they will operate on the more serious wound to his upper chest and right arm, she said.
"We're just hoping to hear for the best," she tearfully told a crowd of dozens of people that gathered for a vigil to honor the victims on Wednesday night. "I just pray that we can all be strong together."
The rampage ended when a patrol car rammed the stolen vehicle Neal was driving and police killed him in a shootout.
Dillon Elliott said he witnessed the rolling gun battle from a coffee shop and parking lot as the gunman and deputies sped by.
"All hell broke loose. I mean I've never heard gunshots like that before," he said.
Elliott's parents have lived in the sparsely populated area of rolling woodlands dotted with grazing cattle about 130 miles (209 kilometers) north of Sacramento since 1986. He moved away in 2001.
"There's hardly any police presence out here," he said. "In all the time we've been out here there has been almost, I would say almost zero police presence. Every so often you'll see them if it's super bad."
He said his father, who was on the homeowners' association board, was threatened in the late '80s and early '90s during a dispute with a neighbor and deputies never responded.
"It's almost like they think we're lawless out here and they just don't care," he said.
___
Gecker reported from San Francisco. Associated Press writers Paul Elias, Janie Har and Olga Rodriguez also contributed from San Francisco. ||||| Kevin Janson Neal’s deadly rampage in Tehama County apparently began Monday night when he shot his wife to death with multiple rounds, then hid her body under the floor of their ramshackle trailer, authorities said.
Armed with two semiautomatic rifles he had made illegally, Neal set out early Tuesday on a shooting spree that left a total of six people dead – including Neal – and eight people injured, seven of them children, Assistant Sheriff Phil Johnston said at a news conference Wednesday.
Calling Neal, 44, “a madman on the loose,” Johnston said the gunman drove the streets of Rancho Tehama firing randomly at homes and structures. Johnston asked residents to check on their neighbors to ensure that they all are safe after the violent outbreak.
“I don’t know what his motive was,” Johnston said. “I think he had a desire to kill as many people as he could, and whether or not he had a desire to die at the hands of police I don’t know.”
Johnston said there was a history of domestic violence calls to the Neal home, as well as calls for deputies to respond to shots being fired. He said Neal was “not law enforcement friendly” and would not come to the door when deputies knocked. At least twice, he said, deputies placed the home under surveillance in hopes that he would emerge, but he never did.
Authorities believe Neal shot his wife to death Monday night, then hid the body under the floor and covered it up. Neal called his mother in North Carolina that night, telling her "it's all over now," the Associated Press reported.
Johnston did not disclose Neal’s wife’s name, but Neal’s sister, Sheridan Orr of Cary, N.C., identified her as the former Barbara Glisan.
Orr said her brother suffered from delusions and other mental health issues for years; his problems had reached the point that the Monday night phone call to his mom hardly raised a red flag with family members back in North Carolina.
“We had calls like that for 20 years,” Orr said. “You’d get immune.”
Johnston said the first call for help came into the sheriff’s dispatch center at 7:54 a.m. By 8:19 a.m. law enforcement officers had confronted Neal and killed him.
“Everything took place between these times,” he said. “It’s not very long, (but) when you’re out in the field and this is going on, this seems like forever.”
Neal was driving and firing randomly as he approached Rancho Tehama Elementary School. The sound of gunshots prompted school officials to lock down the facility before Neal arrived. It was a move that Johnston said likely saved many lives.
“It’s monumental that the school went on lockdown,” Johnston said. “I really, truly believe that we would have had a horrific bloodbath at that school.
“I can’t say how important that is.”
Neal apparently became frustrated at his inability to get into the school, and left after a time, Johnston said.
Tehama County Assistant Sheriff Phil Johnston gives details on the shooting that ended with five deaths, including the suspected shooter. Tuesday, Nov. 14, 2017.
Johnston said authorities found three victims at two separate scenes on Bobcat Lane, one apparently Neal’s wife, and the second a woman who was the victim of an assault by Neal that landed him in jail in January on a charge of assault with a deadly weapon.
Another victim, a woman, was found along a roadway at a separate crime scene, and the fifth shooting victim was the father of a student at the school.
Johnston said four of the seven children injured were hurt at the school, with one of them shot and in critical condition at UC Davis Medical Center. The others were hurt by flying glass or debris, he said.
Neal was not supposed to own weapons. A protective order issued as a result of his January assault arrest required him to turn in any firearms in his possession, Johnston said.
Tehama County Superior Court records show he was charged in the January incident with assault, false imprisonment, battery and other charges in connection with an attack on two women in his neighborhood. He was accused of firing shots at the two women, stabbing one of them, and “holding them hostage for a period of time,” District Attorney Gregg Cohen said Wednesday. Neal was released on $160,000 bail.
Cohen said in a video news release Wednesday that the protective order was issued in late February, after Neal was released on bail and the two women he was accused of assaulting filed a complaint. “The two victims were scared and concerned (about) Neal attacking them,” Cohen said. “Neal harassed them repeatedly since being out on bail by repeatedly calling the California Department of Forestry, or Cal Fire, and claiming that he smelled smells, believing them to be manufacturing methamphetamine.”
Gregg Cohen, District Attorney of Tehama County, gives details on Kevin Janson Neal, the man responsible for the mass shooting that took place in Rancho Tehama on November 14, 2017.
The sheriff said Wednesday he did not yet know whether any firearms were turned in after the protective order was issued, but said the weapons Neal used Tuesday were made illegally at his home and unregistered. Neal used multi-round clips for the semiautomatic rifles, Johnston said, and also was armed with two handguns that were not registered to him.
“A number of magazine firearm clips were found (at the school) so I believe during his assault of the school he lost some of his ammunition,” Johnston said. “I think he became frustrated, ‘I’m here too long.’
“There’s no doubt that he didn’t want to give up. So he elected to find other targets.”
Neal’s shooting spree included firing at homes and passing vehicles. “He chased people with his vehicle shooting at them,” Johnston said.
Among those who were shot was a woman driving her children to school in a Ford F-250 pickup truck. Johnston said she had a concealed carry permit and pulled out her handgun, but was not able to fire before Neal fled after “dumping eight or so rounds into the side panel of her driver’s door.”
Another victim was Jessie Sanders, who said Wednesday that he was at the school when the shooter opened fire.
Sanders said he yelled at the gunman to shoot at him instead of the classrooms, and Neal turned and fired, grazing Sanders’ right forearm. “That man smiled at me and started shooting at me,” Sanders said. “I got shot telling that guy to stop shooting at kids, shoot at me.
“He missed me with a lot of bullets, but one of them got me.”
Sanders estimated that the gunman fired at least 60 rounds at the school.
“I don’t know how come I didn’t get shot with more, but it’s better my life than all those kids he was shooting at,” Sanders said. “Who does that to women and children?”
Sanders was interviewed Wednesday on Bobcat Lane, where he was passing by Neal's home with Hailey Suzanne Poland, whom Neal had stabbed in the confrontation in January as she was out walking the neighborhood with Diana Lee Steel.
“Nice day out, you go for a walk, and he just decided to come out and he shot at us and didn’t hit us,” Poland said. “But he wouldn’t let us go, he held us captive right in front of his house, and he proceeded to go after her. Me protecting her, he stabbed me in the process, almost went through the pancreas.
“This went on for like 15 minutes, fighting this guy, 15 very long minutes.”
Poland said Neal bailed out of jail within two hours of being arrested in that incident.
“He used to be a nice guy, but literally with the snap of a finger he’d go crazy,” she said.
There were no overt signs of the violence Wednesday. Neal’s home – a trailer with junk, cars and tools scattered around the front – sat without any sign of law enforcement or crime scene tape. The same was true at the school, which remained closed.
Cohen, the DA, said Neal had been arrested in North Carolina in 1989 for disorderly conduct and obstructing a peace officer, in 1992 for possession of marijuana with intent to sell, and in 2006 for assault with a deadly weapon. He was also arrested in California in 2013 on a hit-and-run charge.
But in each case, he wasn’t convicted of a crime, Cohen said.
Neal was remembered by his family Wednesday as a decent but troubled man who somehow went astray.
“We’re feeling horrible for those people out there,” said his uncle Ed. “We’re not this kind of people.”
Speaking from the family’s home in Raleigh, N.C., the uncle said the family hadn’t received official word from the Tehama County authorities until Wednesday morning, when a sheriff’s deputy called.
The uncle said Neal, who was raised in North Carolina, was intelligent but struggled all his life with dyslexia. He attended East Carolina University for a while, studying music. He moved to California around a decade ago to work as an airplane mechanic, but the job opportunity apparently didn’t pan out and Neal was drifting through various odd jobs, including fixing up old cars and selling them.
“Delivering pizzas, whatever,” his uncle said.
His sister, Orr, told The Bee that her brother had suffered emotional problems for years. As early as the eighth grade he was placed in a drug rehabilitation facility in North Carolina for two weeks in the belief that his problems were drug related, she said.
As he grew older, she said Neal felt hemmed in by his surroundings in North Carolina and moved to rural California as a way of escaping his old life. “He always had a passion for the mountains and California and the wide open spaces,” she said. “They seemed really happy in the beginning. They had many years there, where they were hiking the mountains.”
But the situation turned dark again when new neighbors moved in. He told family members back in North Carolina that the new neighbors were cooking methamphetamine and he called the police several times to investigate.
“He said the neighbors were cooking meth and the fumes were affecting their health, and their dogs,” Orr said. Neal also shot videos of the neighbors driving by his house, giving him the finger, she said.
“A small thing would set him off exponentially,” she said. “It continued to escalate into this neighborhood feud.”
Orr said family members had begged him for years to get medications for his mental illness, but he steadfastly refused. “He refused to do anything because he didn’t want the government to get his secret information,” she said.
Orr said her mother would commiserate over the phone with Barbara about getting help for Neal and possibly having him committed to an institution.
“Their options were limited,” Orr said. “It’s impossible to commit a grown man for more than 24 or 36 hours and they were afraid that when he got out, he would be more enraged.”
Family members also weren’t sure what to make of his claims about the neighbors’ drug activities. “He did have grandiose ideas and delusions,” Orr said.
His mother, who was identified only as Anne, told the Associated Press that her son was at the end of his rope because of the feud. “Mom, it’s all over now,” he told her in a phone call Monday, according to AP. “I have done everything I could do and I am fighting against everyone who lives in this area.”
Neighbors said they had complained to the Sheriff’s Department that Neal was firing off rounds of ammunition in his neighborhood.
“My understanding is they took all his weapons (after the January arrest),” his uncle Ed said. “Where in the name of Christ did he get all of that stuff?”
Court records show one of the charges from January included the illegal possession “of an assault weapon” described as an AR-15 Bushmaster rifle.
Federal officials in the U.S. Attorney’s office in Sacramento have cracked down in recent years on individuals selling “ghost guns,” typically AR-15 semiautomatic rifles that can be constructed through kits or “blanks” ordered for as little as $100 over the internet and built by drilling them out and assembling the parts.
Building such weapons, which do not carry serial numbers or other identifying features, is legal, but selling or trading them is a felony.
“You can build them in your shop or build them in your garage,” Johnston said.
He added that even with calls from neighbors about Neal firing weapons in the past, deputies received little cooperation when they responded.
“We would receive calls that he was shooting,” Johnston said. “No deputies observed it. This is why they tried to do surveillance to catch him, and that’s all I can say about that.
“We tried to make contact with him using other avenues, but quite honestly the neighbors up there weren’t real forthcoming, either, and they also had firearms and frequently shot, also.” | The man who killed five people in a shooting rampage in California on Tuesday was banned by court order from owning firearms—and police are being criticized for failing to take action after neighbors in Rancho Tehama Reserve complained that he had been firing hundreds of rounds. At a press conference Wednesday, Tehama County Assistant Sheriff Phil Johnston said Kevin Janson Neal refused to cooperate with investigators, the Sacramento Bee reports. "He was not law enforcement friendly. He would not come to the door," Johnston said. "You have to understand, we can't anticipate what people are going to do. We don't have a crystal ball." Neal was out on bail after being charged with assault in January. Johnston said Neal, who was killed in an exchange of gunfire with police, used two unregistered homemade assault rifles and two handguns that were registered to somebody else. He said Neal lost some of his ammunition clips at a local elementary school, where he spent several minutes trying to get into classrooms before leaving to seek other targets. The Rancho Tehama Elementary School was locked down just before Neal arrived. District maintenance department head Randy Morehouse tells the AP that a school custodian managed to rush children inside just in time. "He was able to get everyone inside so there was no one left on the blacktop," Morehouse says. "He's an absolute hero." (Police say Neal's first victim was his wife.) |
ISLAMABAD (AP) — A Saudi-led coalition targeting Shiite rebels in Yemen has asked Pakistan to contribute soldiers, Pakistan's defense minister said Monday, raising the possibility of a ground offensive in the country.
Yemenis stand amid the rubble of houses destroyed by Saudi-led airstrikes in a village near Sanaa, Yemen, Saturday, April 4, 2015. Since their advance began last year, the Shiite rebels, known as Houthis... (Associated Press)
Defense Minister Khawaja Muhammad Asif made the comments as Pakistan's parliament debates whether to contribute militarily to the campaign against the rebels, known as Houthis. Pakistan previously offered verbal support for the mission but has not provided any military assistance.
Days of Saudi-led airstrikes have yet to halt the Houthi advance across Yemen, the Arab world's poorest country, fueling speculation that a ground operation could be launched. Saudi Arabia and other coalition members have not ruled it out.
Saudi Arabia also asked for aircraft and naval ships to aid in the campaign, Asif said. He said Saudi officials made the request during his visit to Jeddah last week.
"I want to reiterate that this is Pakistan's pledge to protect Saudi Arabia's territorial integrity," Asif said. "If there's a need be, God willing, Pakistan will honor its commitment."
The Saudi-led campaign entered its 12th day Monday, targeting the rebels who took over the capital, Sanaa, in September and eventually forced President Abed Rabbo Mansour Hadi to flee. The rebels and allied forces are now making a push for Yemen's second-largest city, Aden, declared a temporary capital by Hadi before he fled abroad.
Muslim-majority Pakistan has close ties to Saudi Arabia, which is home to Islam's two holiest sites, Mecca and Medina. Pakistan also has a sizeable Shiite minority, complicating the debate over engagement in a conflict that is increasingly pitting Sunnis against Shiites.
The debate in parliament will aim to decide whether the country can afford to join the conflict in Yemen when it is already at war with Islamic and sectarian militants allied with groups like al-Qaida and Islamic State. Pakistan already has nearly 300 troops in Saudi Arabia taking part in joint exercises, and most Pakistanis back the idea of protecting Islam's holiest sites from attack.
The Houthis have been backed by security forces loyal to Yemen's ousted President Ali Abdullah Saleh — whose loyalists control elite forces and large combat units in the country's military.
Yemen-based Al-Qaida in the Arabian Peninsula, considered among the most active and dangerous branches of the global militant organization, has benefited from the crisis. The chaos also has disrupted a U.S.-led drone strike program targeting suspected militants there. ||||| Oakland man killed in Yemen, family says
Oakland resident Jamal al-Labani died this week in Yemen, his family said Saturday. Photo: Mohammed Alazzani
Jamal al-Labani, an Oakland resident who was visiting Yemen, became a victim of the violence that has plagued the Mideast country when he was struck and killed by shrapnel while walking home in the port city of Aden, his family said Saturday.
Al-Labani, an American citizen in his 40s, and his nephew were killed by rebel tank fire Tuesday, said his cousin, Mohammed Alazzani, 27, who lives in San Leandro.
His family described al-Labani as a quiet but caring man who was part-owner of an Oakland gas station and had lived in the city for more than a decade.
“He was very kind and he was a really hard-working guy,” Alazzani said. Alazzani was notified of the death by family members in Yemen.
The country has been gripped by violence as Shiite Houthi rebels battle government forces that are being backed by a Saudi-led air strike campaign. The United Nations reported that more than 500 people have died in the past two weeks, including many civilians and children.
Cease fire sought
On Saturday, the Red Cross called for an immediate 24-hour cease-fire.
The Oakland resident traveled to the country in February to visit his wife and 2-year-old daughter in hopes of bringing both of them back to the United States. In recent weeks, al-Labani had unsuccessfully attempted to leave the country. He has two teenage boys from a previous marriage who live in Fresno.
Alazzani said the U.S. government could have done more to aid al-Labani’s attempt to leave Yemen.
“If the U.S. government acted somehow last week, we could have saved this life,” he said.
In a press briefing Friday, Marie Harf, a State Department spokeswoman, said the U.S. does not have plans to evacuate American citizens now in the country. Given the unpredictable nature of the situation in Yemen, civilian lives could be put at greater risk if military assets were sent to attempt an evacuation, she added.
The U.S. Embassy in Sanaa, the country’s capital, was closed in February and Americans were urged to avoid traveling to the country and to leave when it was safe to do so.
Harf said the government is not abandoning American citizens, citing 10 years of travel warnings against visiting the country. “But you have to balance what options we have for a possible evacuation against the security situation, against what is feasible, against what kind of assets could do this and what the risk is to those assets,” she said.
Evacuations urged
The Council on American-Islamic Relations and the Asian Americans Advancing Justice-Asian Law Caucus have recently called for the evacuation of American citizens in Yemen.
“With many other nations mounting efforts to evacuate their citizens, it is unclear why the United States chooses to leave its citizens to their own devices in an increasingly deadly combat situation,” Zahra Billoo, executive director of the Bay Area chapter of the Council on American-Islamic Relations, said in a statement.
A Chinese warship recently helped evacuate more than 200 foreigners, including German and Canadian citizens, whose governments had reportedly requested help from China to get them out of Yemen.
Alazzani said he hopes the U.S. government will now turn its attention to the remaining American citizens stuck in the country.
“The main message for us is we need our government to react immediately — the longer we wait, the worse it gets. If we lost one person,” he said, “at least we can save others.”
Hamed Aleaziz is a San Francisco Chronicle staff writer. E-mail: haleaziz@sfchronicle.com Twitter: @haleaziz ||||| The Bay Area’s Yemeni community is calling on the American government to do more to stem the violence in their country after an Oakland man was killed during a recent wave of violence in Yemen. Nannette Miranda reports from Hayward. (Published Sunday, April 5, 2015)
The Bay Area’s Yemeni community is calling on the American government to do more to stem the violence in their country after an Oakland man was killed amid escalating tensions in the Middle Eastern nation.
Jamal al-Labani, who lived in Oakland for about 15 years, was looking forward to bringing his new family to the Bay Area when that dream was shattered.
In a Hayward hall on Mission Boulevard Saturday, friends and family members mourned the loss of al-Labani, an American citizen who went to Yemen in February to try and bring his pregnant wife and 2-year-old daughter to the United States. Al-Labani has two teenage sons from a previous marriage living in Fresno.
On Tuesday, as al-Labani was trying to make it home to safety in the port town of Aden, he and a nephew were killed by shrapnel during heavy rebel tank fire, his family says.
“He had been trying to leave the country the past three weeks, and things are getting worse and worse. Airports are pretty much closed. There’s no way for him to escape,” said his cousin Mohammed Alazzani.
Al-Labani, who co-owned a Westco gas station on MacArthur Boulevard in Oakland, was known for his great smile and his kindness.
“Even his customers actually cried. You see tears in his customers," Alazzani said. "He’s really generous. Even if customers are short money, he will let them go."
Now, members of the Yemeni community in Oakland and San Francisco are worried about their relatives in their homeland.
A Saudi-led coalition wants the return of Yemen’s president, who fled the country last week. But Houthi rebels have overrun much of the country. The Council on American Islamic Relations is calling on Washington to remove U.S. citizens.
“Our big focus right now is getting Americans out of Yemen and seeking the government’s assistance to do so,” said Council on American-Islamic Relations Executive Director Zahra Billoo.
But the State Department says it has no plans to intervene, warning that civilian lives could be at greater risk if the military were sent.
Alazzani thinks his cousin would be alive today if the U.S. had acted earlier.
"If we acted or did something last week, we could have probably saved him,” he said.
Other countries have been pulling their citizens out of Yemen. The American Red Cross on Saturday called for a 24-hour cease-fire. ||||| (CNN) Jamal al-Labani had hoped to bring his pregnant wife and 2-year-daughter back to the United States from war-torn Yemen.
But the gas station owner never made it on a flight back to his home in Hayward, California.
Family members have identified him as a victim killed in a mortar strike last week in the southern Yemeni city of Aden.
He is believed to be the first U.S. citizen killed in the current violence in Yemen.
Early Tuesday evening, the 45-year-old al-Labani was on his way back from mosque prayers when he was hit in the back by shrapnel from a mortar shell, his family said. He died minutes later.
'Things got worse and worse'
Violence quickly escalated in Yemen soon after al-Labani arrived in February.
"When he got (to Aden), after a few weeks he noticed things were starting to get bad and then the (U.S.) Embassy closed ," his cousin Mohammed Alazzani told CNN.
For the past three weeks, al-Labani had told family members he was concerned about not being able to evacuate as the situation deteriorated in the country, according to his cousin.
More than 200 people have been killed in Aden in the past 11 days, according to Naef Al Bakri, Aden's deputy governor.
Two days before al-Labani was killed, he told his family the last option was to try to cross the border into Oman and fly to Egypt, but he never made it.
"The airports got closed and things got worse and worse," Alazzani told CNN by phone. "People were hoping things would get better, but they only got worse and worse."
Advocacy group: Trapped Americans need help
Yemen has been rocked by violence and political turmoil for months. Houthi rebels -- minority Shiites who have long complained of being marginalized in the majority Sunni country -- forced Yemeni President Abdu Rabu Mansour Hadi from power in January, placing him under house arrest and taking over Sanaa, the country's capital.
Hadi escaped in February, fled to the southern city of Aden and said he remained President. He fled to Saudi Arabia last month as the rebels and their military allies advanced on Aden.
Now the violence is intensifying as Saudi Arabia and other Arab nations target the rebels in Yemen with airstrikes.
Yemeni-Americans are trapped in the conflict, but haven't gotten enough help from the U.S. government, the Council on American-Islamic Relations told CNN Sunday.
Zahra Billoo, a spokeswoman for the advocacy group, said it's helping al-Labani's family and the families of other Yemeni-Americans.
"All of these other governments, Russia, China, Ethiopia, India ... they have all been evacuating their citizens. So to say that it's impossible for the U.S. to evacuate their citizens is difficult to grasp," Billoo said.
Responding to the criticism, the U.S. State Department told CNN that there are no current plans to evacuate private U.S. citizens from Yemen.
"We encourage all U.S. citizens to shelter in a secure location until they are able to depart safely. U.S. citizens wishing to depart should do so via commercial transportation options when they are available," a spokesman for the State Department told CNN in a statement. "Additionally, some foreign governments may arrange transportation for their nationals and may be willing to offer assistance to others."
Yemeni-American advocates think more could be done.
"There have been travel warnings to Yemen for a few years now. What's not clear is, are they saying 'Be cautious' or 'Don't go at all'?" Billoo asked. "It still it doesn't sit well with many of us civil rights lawyers who believe that U.S. citizenship should be the ultimate protection."
Fierce fighting and power blackouts
Fierce fighting continued across Yemen on Sunday amid an electrical blackout in parts of the country and political moves that could further fracture the already divided military.
Intense airstrikes hit Sanaa overnight. Senior security officials in the Yemeni capital said the airstrikes targeted the military intelligence headquarters and the Defense Ministry's central command, military bases and missile depots.
The blasts at the military compounds, which are inside the city, shattered the windows of many homes nearby.
Meanwhile, some 16 million Yemenis living in provinces under control of Houthi rebels, including Sanaa, remained without power after an electrical blackout that began Saturday night.
In the country's south, the Houthis remain in control of Aden's port and other strategic holdings, including the state broadcaster.
The International Committee of the Red Cross said Sunday that Saudi Arabia has signed off on the delivery of medical supplies and personnel to Yemen, where the organization had warned that time was running out to save those wounded in airstrikes and ground fighting ||||| ADEN Southern Yemeni militias backed by warplanes from a Saudi-led coalition attacked Houthi fighters across several provinces in south Yemen on Monday, driving the Shi'ite rebel forces from some of their positions, witnesses and militia sources said.
The southern fighters' gains came on the 12th day of an air campaign by Saudi Arabia and mainly Gulf Arab allies trying to stem advances by the Iran-allied Houthis, who control the capital Sanaa and have advanced on the southern city of Aden.
The fighting has killed hundreds of people, cut off water and electricity supplies and led the United Nations children's agency UNICEF to warn that Yemen is heading towards a humanitarian disaster.
Saudi Arabia, the main Sunni Arab power in the Gulf, launched the air campaign on March 26 to try to contain the Shi'ite Houthis and restore President Abd-Rabbu Mansour Hadi, who has fled Aden for refuge in Riyadh.
The International Committee of the Red Cross and UNICEF plan to fly aid planes into Yemen on Tuesday, but the missions have been delayed as they seek clearance from Arab states waging the air strikes and hunt for planes prepared to fly to Yemen.
In Aden, Houthi forces gathered at the edge of the main port area on Monday but pulled out of two residential quarters on its fringes, residents told Reuters. Around 60 people were killed in heavy fighting in the area on Sunday, they said.
Explosions shook Aden's suburbs as residents reported a foreign warship shelling Houthi positions on the outskirts.
Military momentum is hard to judge in a disjointed conflict playing out across hundreds of miles of mountains, deserts and coastal positions, but in the southern provinces surrounding Aden the Houthis' foes said they had made gains.
Residents in Dhalea, north of Aden, said air strikes hit a local government compound on the northern edge of the town and a military base on its outskirts which were both taken over by Houthis. They said buildings were on fire and reported loud explosions.
Militia fighters said coalition planes also dropped supplies - the first time they had done so outside Aden - including mortars, rocket-propelled grenades, rifles, ammunition, telecommunications equipment and night goggles.
Southern militias reported cutting off two roads in Abyan province, east of Aden, leading to the port city, after clashes with the Houthis.
Residents near al-Anad air base, once home to U.S. military personnel fighting a covert drone war with al Qaeda in Yemen, said dozens of Houthi and allied fighters were withdrawing north after the site was bombed by coalition jets.
HEAVY ADEN FIGHTING
Saudi Arabia has taken the lead in military operations against the Houthis, backed by air forces from its Gulf allies the United Arab Emirates, Bahrain, Kuwait and Qatar. It says it also has support from Jordan, Egypt, Sudan, Morocco and Pakistan.
Pakistan has yet to spell out what support it will provide, and its parliament was meeting on Monday to discuss what the defense minister said was a request from Riyadh for military aircraft, warships and soldiers.
Street fighting and heavy shelling have torn through Aden for several days. The city is the last bastion of support for the Saudi-backed Hadi, though it is unclear whether the southern fighters are battling for him or for local territory.
Food, water and electricity shortages have mounted throughout the country but especially in Aden, where combat has shut ports and cut land routes from the city.
"How are we supposed to live without water and electricity?" pleaded Fatima, a housewife walking through the city streets with her young children.
She clutched a yellow plastic jerry can, like dozens of other residents on the streets and in queues seeking water from public wells or mosque faucets after supplies at home dried up.
The International Committee of the Red Cross, which for days blamed the Saudi-led coalition for delays, told Reuters on Monday that Saudi Arabia had granted permission for an aid shipment late on Saturday but problems in chartering planes would likely delay the aid's arrival until Tuesday.
"We are still working on getting the plane to Sanaa. It's a bit difficult with the logistics because there are not that many companies or cargo planes willing to fly into a conflict zone," said Marie Claire Feghali, a Red Cross spokesperson.
The ICRC is aiming to get 48 tonnes of medical supplies into Yemen by plane. It is also trying to get staff by boat from Djibouti to Aden, but fighting has complicated efforts.
"Today fighting was taking place in Aden port so the security situation isn't getting any better," said another ICRC spokeswoman, Sitar Jabeen.
At least eight people were killed in an air strike before dawn in the suburbs of the northern city of Saadah, home of the Houthi movement which spread from its mountain stronghold to take over the capital Sanaa six months ago.
A Houthi spokesman said the dead included women and children.
Local officials said strikes also hit air defense and coastal military units near the Red Sea port of Hodaida, and targets on the outskirts of Aden. They also hit a bridge on the road south to Aden, apparently trying to block the Houthis from sending reinforcements to their fighters in the city.
The United Nations said on Thursday that more than 500 people had been killed in two weeks of fighting in Yemen, while the Red Cross has appealed for an immediate 24-hour pause in fighting to allow aid into the country.
(Additional reporting by Noah Browning in Dubai, Mohammad Ghobari in Cairo, Katharine Houreld in Islamabad and Stephanie Nebehay in Geneva; Writing by Dominic Evans; Editing by Giles Elgood) | The escalating Yemen conflict is believed to have killed hundreds of people in the port city of Aden over the last 11 days—including an American citizen. Family members say Oakland, Calif., resident Jamal al-Labani was killed last week when he was hit in the back by mortar shrapnel as he walked down a street, CNN reports. Friends and relatives in California say the 45-year-old, who is believed to be the first American killed in the conflict, went to the country in February to bring his pregnant wife and 2-year-old daughter back to the US but found himself trapped as the situation deteriorated, reports NBC Bay Area. A cousin tells the San Francisco Chronicle that al-Labani, who had lived in Oakland for more than a decade and co-owned a gas station there, "was very kind and he was a really hard-working guy." A coalition led by Saudi Arabia bombed Shiite rebels for the 12th day today, and Aden residents reported a foreign warship shelling rebel positions on the outskirts of the city, reports Reuters. The airstrikes have failed to halt the advance of the Houthi rebels, and Pakistan says Saudi Arabia has asked it to contribute troops for a possible ground offensive, the AP reports. (Another American says he's imprisoned at a military base in Yemen and isn't sure he will make it out alive.)
In accordance with scientific custom and/or statutory mandates, several offices within EPA have used peer review for many years to enhance the quality of science within the agency. In May 1991, the EPA Administrator established a panel of outside academicians to, among other things, enhance the stature of science at EPA and determine how the agency can best ensure that sound science is the foundation for the agency's regulatory and decision-making processes. In March 1992, the expert panel recommended that, among other things, EPA establish a uniform peer review process for all scientific and technical products used to support EPA's guidance and regulations. In response, EPA issued a policy statement in January 1993 calling for peer review of the major scientific and technical work products used to support the agency's rulemaking and other decisions. However, the Congress, GAO, and others subsequently raised concerns that the policy was not being consistently implemented throughout EPA. The congressional concern resulted in several proposed pieces of legislation that included prescriptive requirements for peer reviews. Subsequently, in June 1994 the EPA Administrator reaffirmed the central role of peer review in the agency's efforts to ensure that its decisions rest on sound science and credible data by directing that the agency's 1993 peer review policy be revised. The new policy retained the essence of the prior policy and was intended to expand and improve the use of peer review throughout EPA. Although the policy continued to emphasize that major scientific and technical products should normally be peer reviewed, it also recognized that statutory and court-ordered deadlines, resource limitations, and other constraints may limit or preclude the use of peer review. According to the Executive Director of the Science Policy Council, one of the most significant new features of the 1994 action was the Administrator's directive to the agency's Science Policy Council to organize and guide an agencywide program for implementing the policy. The policy and procedures emphasize that peer review is not the same thing as other mechanisms that EPA often uses to obtain the views of interested and affected parties and/or to build consensus among the regulated community. More specifically, EPA's policy and procedures state that peer review is not peer input, which is advice or assistance from experts during the development of a product; stakeholders' involvement, which is comments from those people or organizations (stakeholders) that have significant financial, political, or other interests in the outcome of a rulemaking or other decision by EPA; or public comment, which is comments obtained from the general public on a proposed rulemaking and may or may not include the comments of independent experts. While each of these activities serves a useful purpose, the policy and procedures point out that they are not a substitute for peer review. For example, as noted in EPA's Standard Operating Procedures, public comments on a rulemaking do not necessarily solicit the same unbiased, expert views as are obtained through peer review. In order to accommodate the differences in EPA's program and regional offices, the policy assigned responsibility to each program and regional office to develop standard operating procedures and to ensure their use. 
To help facilitate agencywide implementation, EPA’s Science Policy Council was assigned the responsibility of assisting the offices and regions in developing their procedures and identifying products that should be considered for peer review. The Council was also given the responsibility for overseeing the agencywide implementation of the policy by promoting consistent interpretation, assessing agencywide progress, and developing revisions to the policy, if warranted. However, EPA’s policy specifies that the Assistant and Regional Administrators for each office are ultimately responsible for implementing the policy, including developing operating procedures, identifying work products subject to peer review, determining the type and timing of such reviews, and documenting the process and outcome of each peer review conducted. Our objectives, scope, and methodology are fully described in appendix I. Two years after EPA established its peer review policy, implementation is still uneven. EPA acknowledges this problem and provided us with a number of examples to illustrate the uneven implementation. At our request, the Science Policy Council obtained information from EPA program and regional offices and provided us with examples in which, in their opinion, peer review was properly conducted; cases in which it was conducted but not fully in accordance with the policy; and cases in which peer review was not conducted at all. The following table briefly summarizes the cases they selected; additional information on these nine cases is provided in appendix II. According to the Executive Director of the Science Policy Council, this unevenness can be attributed to several factors. First, some offices within EPA have historically used peer review, while others’ experience is limited to the 2 years since the policy was issued. For example, in accordance with scientific custom, the Office of Research and Development (ORD) has used peer review for obtaining critical evaluations of certain work products for more than 20 years. Additionally, statutes require that certain work products developed by EPA be peer reviewed by legislatively established bodies. For example, criteria documents developed by ORD for the National Ambient Air Quality Standards must receive peer review from EPA’s Science Advisory Board (SAB), and pesticide documents must receive peer review from the Scientific Advisory Panel. In contrast, some EPA regional offices and areas within some EPA program offices have had little prior experience with peer review. In addition to these offices’ varying levels of experience with peer review, the Science Policy Council’s Executive Director and other EPA officials said that statutory and court-ordered deadlines, budget constraints, and difficulties associated with finding and obtaining the services of qualified, independent peer reviewers have also contributed to peer review not being consistently practiced agencywide. A report by the National Academy of Public Administration confirmed that EPA frequently faces court-ordered deadlines. According to the Academy, since 1993 the courts have issued an additional 131 deadlines that EPA must comply with or face judicial sanctions. Also, as explained to us by officials from EPA’s Office of Air and Radiation (OAR), just about everything EPA does in some program areas, such as Clean Air Act implementation, is to address either legislative or court-ordered mandates. 
Others have attributed EPA’s problems with implementing peer review in the decision-making process to other factors. For example, in its March 1995 interim report on EPA’s research and peer review program within the Office of Research and Development, the National Academy of Sciences’ National Research Council noted that, even in EPA’s research community, knowledge about peer review could be improved. The Council’s interim report pointed out that “although peer review is widely used and highly regarded, it is poorly understood by many, and it has come under serious study only in recent years.” Although we agree that the issues EPA and others have raised may warrant further consideration, we believe that EPA’s uneven implementation is primarily due to (1) confusion among agency staff and management about what peer review is, what its significance and benefits are, and when and how it should be conducted and (2) ineffective accountability and oversight mechanisms to ensure that all products are properly peer reviewed by program and regional offices. Although the policy and procedures provide substantial information about what peer review entails, we found that some EPA staff and managers had misperceptions about what peer review is, what its significance and benefits are, and when and how it should be conducted. For example, officials from EPA’s Office of Mobile Sources (OMS) told the House Commerce Committee in August 1995 that they had not had any version of the mobile model peer reviewed. Subsequently, in April 1996, OMS officials told us they recognize that external peer review is needed and that EPA plans to have the next iteration of the model peer reviewed. However, when asked how the peer review would be conducted, OMS officials said they plan to use the public comments on the revised model they receive as the peer review. As EPA’s policy makes clear, public comments are not the same as nor are they a substitute for peer review. We found a similar misunderstanding about what peer review entails in a regional office we visited. The region prepared a product that assesses the impacts of tributyl tin—a compound used since the 1960s in antifouling paints for boats and large ships. Although regional staff told us that this contractor-prepared product had been peer reviewed, we found that the reviews were not in accordance with EPA’s peer review policy. The draft product received some internal review by EPA staff and external review by contributing authors, stakeholders, and the public; however, it was not reviewed by experts previously uninvolved with the product’s development nor by those unaffected by its potential regulatory ramifications. When we pointed out that—according to EPA’s policy and the region’s own peer review procedures—these reviews are not a substitute for peer review, the project director said that she was not aware of these requirements. In two other cases we reviewed, there was misunderstanding about the components of a product that should be peer reviewed. For example, in the Great Waters study—an assessment of the impact of atmospheric pollutants in significant water bodies—the scientific data were subjected to external peer review, but the study’s conclusions that were based on these data were not. Similarly, in the reassessment of dioxin—a reexamination of the health risks posed by dioxin—the final chapter summarizing and characterizing dioxin’s risks was not as thoroughly peer reviewed. 
More than any other, this chapter indicated EPA’s conclusions based on its reassessment of the dioxin issue. In both cases, the project officers did not have these chapters peer reviewed because they believed that the development of conclusions is an inherently governmental function that should be performed exclusively by EPA staff. However, some EPA officials with expertise in conducting peer reviews disagreed, maintaining that it is important to have peer reviewers comment on whether or not EPA has properly interpreted the results of the underlying scientific and technical data. In addition to the uncertainty surrounding the peer review policy, we also noted problems with EPA’s accountability and oversight mechanisms. EPA’s current oversight mechanism primarily consists of a two-part reporting scheme: Each office and region annually lists (1) the candidate products nominated for peer review during the upcoming year and (2) the status of products previously nominated. If a candidate product is no longer scheduled for peer review, the list must note this and explain why peer review is no longer planned. Agency officials said this was the most extensive level of oversight to which all program and regional offices could agree when the peer review procedures were developed. Although this is an adequate oversight mechanism for tracking the status of previously nominated products, it does not provide upper-level managers with sufficient information to ensure that all products warranting peer review have been identified. This, when taken together with the misperceptions about what peer review is and with the deadlines and budget constraints that project officers often operate under, has meant that the peer review program to date has largely been one of self-identification, allowing some important work products to go unlisted. According to the Science Policy Council’s Executive Director, reviewing officials would be much better positioned to determine if the peer review policy and procedures are being properly and consistently implemented if, instead, EPA’s list contained all major products along with what peer review is planned and, if none, the reasons why not. The need for more comprehensive accountability and oversight mechanisms is especially important given the policy’s wide latitude in allowing peer review to be forgone in cases facing time and/or resource constraints. As explained by EPA’s Science Policy Council’s Executive Director, because so much of the work that EPA performs is in response to either statutory or court-ordered mandates and the agency frequently faces budget uncertainties or limitations, an office under pressure might argue for nearly any given product that peer review is a luxury the office cannot afford in the circumstances. However, as the Executive Director of EPA’s Science Advisory Board told us, not conducting peer review can sometimes be more costly to the agency in terms of time and resources. He told us of a recent rulemaking by the Office of Solid Waste concerning a new methodology for delisting hazardous wastes in which the program office’s failure to have the methodology appropriately peer reviewed resulted in important omissions, errors, and flawed approaches in the methodology, which will now take from 1 to 2 years to correct. The SAB also noted that further peer review of individual elements of the proposed methodology is essential before the scientific basis for this rulemaking can be established. 
EPA has recently taken a number of steps to improve the peer review process. Although these steps should prove helpful, they do not fully address the underlying problems discussed above. In June 1996, EPA’s Deputy Administrator directed the Science Policy Council’s Peer Review Advisory Group and ORD’s National Center for Environmental Research and Quality Assurance to develop an annual peer review self-assessment and verification process to be conducted by each office and region. The self-assessment will include information on each peer review completed during the prior year as well as feedback on the effectiveness of the overall process. The verification will consist of the signature of headquarters, laboratory, or regional directors to certify that the peer reviews were conducted in accordance with the agency’s policy and procedures. If the peer review did not fully conform to the policy, the division director or the line manager will explain significant variances and actions needed to limit future significant departures from the policy. The self-assessments and verifications will be submitted and reviewed by the Peer Review Advisory Group to aid in its oversight responsibilities. According to the Deputy Administrator, this expanded assessment and verification process will help build accountability and demonstrate EPA’s commitment to the independent review of the scientific analyses underlying the agency’s decisions to protect public health and the environment. These new accountability and oversight processes should take full effect in October 1996. ORD’s National Center for Environmental Research and Quality Assurance has also agreed to play an expanded assistance and oversight role in the peer review process. Although the details had not been completed, the Center’s Director told us that his staff will be available to assist others in conducting peer reviews and will try to anticipate and flag the problems that they observe. In addition, the Center recently developed an automated Peer Review Panelist Information System—a registry with information on identifying and contacting potential reviewers according to their areas of expertise. Although the system was designed to identify potential reviewers of applications for EPA grants, cooperative agreements, and fellowships, the Center’s Director stated that the registry (or similarly designed ones) could also be used to identify potential peer reviewers for EPA’s technical and scientific work products. Recognizing that confusion remains about what peer review entails, the Office of Water recently drafted additional guidance that further clarifies the need for, use of, and ways to conduct peer review. The Office has also asked the Water Environment Federation to examine its current peer review process and to provide recommendations on how to improve it. The Federation has identified the following areas of concern, among others, where the program should be improved: (1) the types of, levels of, and methodologies for peer review; (2) the sources and selection of reviewers; (3) the funding/resources for peer review; and (4) the follow-up to, and accountability for, peer review. Similarly, OAR’s Office of Mobile Sources proposed a Peer Review/Scientific Presence Team in March 1996 to help OMS personnel better understand the principles and definitions involved in the peer review process. 
In addition to promoting greater understanding, this team would also help identify products and plan for peer review, as well as facilitate and oversee the conduct of peer reviews for OMS’ scientific and technical work products. The Office of Solid Waste and Emergency Response recently formed a team to support the Administrator’s goal of sound science through peer review. The team was charged with strengthening the program office’s implementation of peer review by identifying ways to facilitate good peer review and addressing barriers to its successful use. In May 1996, the team developed an implementation plan with a series of recommendations that fall into the following broad categories: (1) strengthening early peer review planning; (2) improving the ability of the Assistant Administrator to manage peer review activities; (3) providing guidance and examples to support the staff’s implementation of peer review; and (4) developing mechanisms to facilitate the conduct of peer reviews. EPA’s Region 10 formed a Peer Review Group with the responsibility for overseeing the region’s reviews. In March 1996, the group had a meeting with the region’s senior management, where it was decided to later brief mid-level managers on the importance of peer review and their peer review responsibilities. Agreement was also reached to have each of the region’s offices appoint a peer review contact who will receive training from the Peer Review Group and be responsible for managing some peer reviews and for coordinating other major peer review projects. The above agencywide and office-specific efforts should help address the confusion about peer review and the accountability and oversight problems we identified. However, the efforts aimed at better informing staff about the benefits and use of peer review are not being done fully in all offices and would be more effective if done consistently throughout the agency. Similarly, the efforts aimed at improving the accountability and oversight of peer review fall short in that they do not ensure that each office and region has considered all relevant products for peer review and that the reasons are documented when products are not selected. Despite some progress, EPA’s implementation of its peer review policy remains uneven 2 years after it became effective. Confusion remains about what peer review entails and how it differs from the mechanisms that EPA uses to obtain the views of interested and affected parties. Furthermore, the agency’s accountability and oversight mechanism provides too much leeway for managers to opt out of conducting peer reviews without having to justify or document such decisions. The annual listing of only those products that have been selected for peer review has not enabled upper-level managers to see what products have not been nominated for peer review nor the reasons for their exclusion. A more useful tool would be to have the list contain all planned major products with detailed information about the managers’ decisions about peer review. For example, if peer review is planned, the list would contain—as the current procedures already require—information on the type and timing of it. More significantly, if the managers elect to not conduct peer review on individual products, the list would provide an explanation of why the products are not being nominated. This process would provide upper-level managers with the necessary information to determine whether or not all products have been appropriately considered for peer review. 
We acknowledge that there are other difficulties in properly conducting peer reviews. However, we believe that as EPA strengthens the implementation of its peer review policy and gains more widespread experience with the process, the agency will be better positioned to address these other issues. To enhance the quality and credibility of its decision-making through the more widespread and consistent implementation of its peer review policy, we recommend that the Administrator, EPA, do the following: Ensure that staff and managers are educated about the need for and benefits of peer review; the difference between peer review and other forms of comments, such as peer input, stakeholders’ involvement, and public comment; and their specific responsibilities in implementing the policy. Expand the current list of products nominated for peer review to include all major products, along with explanations of why individual products are not nominated for peer review. We provided copies of a draft of this report to the Administrator of EPA for review and comment. In responding to the draft, EPA officials stated that the report was clear, instructive, and fair. The officials also provided us with some technical and presentational comments that we have incorporated as appropriate. We conducted our review from February 1996 through August 1996 in accordance with generally accepted government auditing standards. A detailed discussion of our scope and methodology appears in appendix I. As arranged with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 15 days after the date of this letter. At that time, we will send copies to the Administrator of EPA and other interested parties. We will also make copies available to others upon request. Please call me at (202) 512-6111 if you or your staff have any questions. Major contributors to this report are listed in appendix III. The Chairmen of the Senate Small Business Committee; the Subcommittee on Clean Air, Wetlands, Private Property, and Nuclear Safety, Senate Committee on Environment and Public Works; and the Subcommittee on Energy Production and Regulation, Senate Committee on Energy and Natural Resources, asked us to assess the Environmental Protection Agency’s (EPA) (1) progress in implementing its peer review policy and (2) efforts to improve the peer review process. To assess the status of EPA’s implementation of its peer review policy, we reviewed relevant documents and discussed the agency’s use of peer review with officials from EPA’s Science Policy Council; Office of Air and Radiation (Washington, DC, Durham, NC, and Ann Arbor, MI); Office of Water; Office of Program Planning and Evaluation; Office of Solid Waste and Emergency Response; and Office of Prevention, Pesticides, and Toxic Substances (Washington, DC); Office of Research and Development (Washington, DC and Research Triangle Park, NC); and EPA Region 10 (Seattle, WA). We also interviewed and obtained documents from officials with the National Academy of Sciences; the Water Environment Federation; the National Environmental Policy Institute; and the Natural Resources Defense Council. We reviewed a selection of scientific and technical products to obtain examples of how EPA’s program and regional offices were implementing the peer review policy. 
We asked officials from EPA’s Science Policy Council and Science Advisory Board to identify products that, in their opinion, fell into the following categories: (1) those that fully complied with the policy; (2) those that received some level of peer review but did not fully comply with the policy; and (3) those that should have received but did not receive peer review. We then interviewed the officials responsible for the products to determine how decisions were made about the products’ peer review. To assess EPA’s efforts to improve the peer review process, we reviewed relevant documents and discussed the agency’s recent, ongoing, and planned improvements with officials from EPA’s Science Policy Council; Science Advisory Board; and the program and regional offices identified above. We conducted our review from February through August 1996 in accordance with generally accepted government auditing standards. At our request, the Science Policy Council obtained information from EPA program and regional offices and provided us with examples illustrating the current uneven implementation of EPA’s peer review policy. This list was further augmented by the Executive Director of the Science Advisory Board. Although these products are not necessarily a representative sample, the Executive Director of EPA’s Science Policy Council stated that these cases provide good illustrations of how the level of peer review within EPA remains uneven. We have grouped the cases below according to whether (1) EPA’s peer review policy was followed, (2) the policy was not fully followed, or (3) a peer review was not conducted but should have been. In January 1993, EPA Region 10 received a petition from a local environmental group to designate the Eastern Columbia Plateau Aquifer System as a “Sole-Source Aquifer” under the Safe Drinking Water Act. The technical work product was entitled Support Document for Sole Source Aquifer Designation of the Eastern Columbia Plateau Aquifer System. Under the act, EPA may make this designation if it determines that the aquifer is the principal or sole source for the area’s drinking water. Once so designated, EPA would then review federally assisted projects in the area to determine if these activities could contaminate the aquifer. In August 1994, EPA prepared a draft document that presented the technical basis for the designation. Technical questions raised by commenters prompted EPA to convene a panel of experts to review the document. The panel was given a list of specific technical issues to address, the draft document, and the supporting materials. The peer review panel convened July 26-27, 1995, to discuss their views. The peer reviewers were chosen by asking several “stakeholder” organizations, including local governments, an environmental organization, and the United States Geological Survey, to nominate respected scientists with expertise in areas such as hydrogeology. From more than 15 nominees, a selection committee of EPA staff from outside Region 10 chose 6 peer review panel members. Although one stakeholder group expressed dissatisfaction that their candidate was not chosen for the panel, they eventually agreed that the panel fairly and objectively reviewed the support document. In July 1995, EPA received the peer review panel’s report and is still in the process of responding to the panel’s comments and those received from the public.
Waste Technologies Industries (WTI) began limited operation of a hazardous waste incinerator in East Liverpool, Ohio, in April 1993. Although permitted for operation under the Clean Air Act, the Clean Water Act, and the Resource Conservation and Recovery Act, the facility became the focus of national attention and controversy due to several concerns. For example, it was being built near populated areas and an elementary school, and the public was skeptical about industries’ management of commercial incinerators, the ability of government agencies to regulate them, and whether the existing laws and regulations were sufficient to protect public health and the environment. The WTI site was chosen, in part, because of its proximity to steel mills, chemical plants, and other industries generating hazardous waste suitable for incineration. When fully operational, this site will incinerate over 100,000 tons of hazardous wastes annually. The original permit for WTI had been based solely on the modeled effects of direct inhalation exposures and had not included other exposure scenarios, such as indirect exposure through the food chain. Because of such risk assessment omissions and the controversy associated with the facility, EPA decided to conduct an on-site risk assessment of the cumulative human health and ecological risks associated with the operations of this facility, as well as such risks from accidents at the facility, and to publish its findings prior to the full operation of the WTI site. According to the Senior Science Advisor for the Office of Solid Waste and Emergency Response, peer review was envisioned early in the process and occurred at several stages, including peer review of the agency’s approach to addressing these issues and peer review of the entire report, including the conclusions and recommendations. She also said that about $120,000, or nearly 20 percent of all extramural funds that EPA spent on this over-3-year effort, went to cover peer review costs. EPA began to assess the risks of dioxin in the early 1980s, resulting in a 1985 risk assessment that classified the chemical as a probable human carcinogen, primarily on the basis of animal studies available at that time. The implications of additional advances in the early 1990s were uncertain: some maintained that dioxin’s risks were not as great as earlier believed, while others made the opposite argument. Given the growing controversy, in April 1991 EPA decided to work closely with the broader scientific community to reassess the full range of dioxin risks. The draft product, which was released for public comment in September 1994, contained an exposure document and a health effects document. The last chapter of the health effects document characterized the risks posed by dioxin by integrating the findings of the other chapters. In its review of the draft, the SAB stated: “The importance of this . . . demands that the highest standards of peer review extend to the risk characterization itself. Although it can be argued that this is in fact being carried out by this SAB Committee, submitting the risk characterization chapter for external peer review prior to final review by the SAB would serve to strengthen the document, and assure a greater likelihood of its acceptance by the scientific community-at-large.
It is recommended strongly that: a) the risk characterization chapter undergo major revision; and b) the revised document be peer reviewed by a group of preeminent scientists, including some researchers from outside the dioxin “community” before returning to the SAB.” Members of Congress also criticized EPA’s risk characterization document and its lack of peer review. In the House and Senate reports on the fiscal year 1996 appropriations bill for EPA, concerns were raised that the draft document “does not accurately reflect the science on exposures to dioxins and their potential health effects . . . EPA selected and presented scientific data and interpretations . . . dependent upon assumptions and hypotheses that deserve careful scrutiny . . . and inaccuracies and omissions . . . were the result of the Agency’s failure to consult with and utilize the assistance of the outside scientific community . . .” The committees directed EPA to respond to the SAB’s concerns and consult with scientists in other agencies in rewriting the risk characterization chapter. The House committee also restricted EPA from developing any new rules that raise or lower dioxin limits on the basis of the risk reassessment. As of July 1996, EPA was in the process of responding to the committees’, SAB’s, and the public’s comments. The risk characterization chapter is being subjected to a major revision and will be peer reviewed by external scientific experts prior to referral back to the SAB. The SAB will then be asked to evaluate EPA’s response to their suggestions and the adequacy of the additional peer review conducted on the draft report. Section 112(m) of the Clean Air Act Amendments of 1990 required EPA to determine if atmospheric inputs of pollutants into the Great Waters warrant further reductions of atmospheric releases and to report the agency’s findings to the Congress 3 years after the act’s enactment. The Great Waters program includes the Great Lakes, Lake Champlain, Chesapeake Bay, and the coastal waters. EPA made its first report to the Congress in May 1994. The scientific and technical data in this report, Deposition of Air Pollutants to the Great Waters: First Report to Congress, were peer reviewed by 63 reviewers. The reviewers represented a number of different perspectives, including academia, industry, environmental groups, EPA offices, other federal and state agencies, and Canadian entities. According to the Great Waters Program Coordinator, the reviewers were given copies of all the report chapters, except the conclusions and recommendations chapter, so that they could prepare for a peer review workshop. The reviewers then met to discuss the report and provide EPA with their views. EPA expended a great deal of effort to ensure that the science in the report was peer reviewed; however, the program coordinator said the agency did not have the conclusions and recommendations chapter peer reviewed. The decision not to peer review this chapter was based on the belief by those directing the program that these were the agency’s opinions based on the information presented and thus an inherently governmental function not subject to peer review. However, others within EPA believe that nothing should be withheld from peer review and said that the conclusions should have been peer reviewed to ensure that they were indeed consistent with the scientific content. Residential unit pricing programs involve charging households according to the amount, or number of units, of garbage that they produce.
In accordance with the principle that the polluter pays, unit pricing provides a financial incentive for reducing municipal waste generation and enhancing recycling. EPA’s Office of Policy, Planning and Evaluation (OPPE) used a cooperative agreement to have an assessment prepared of the most significant literature on unit pricing programs to determine the degree to which unit pricing programs meet their stated goals. The paper, which was completed in March 1996, highlights those areas where analysts generally agree on the outcomes associated with unit pricing, as well as those areas where substantial controversy remains. Unit pricing is still voluntary in the United States, according to the project officer; however, he said EPA believes that the more information that municipalities have readily available as they make long-term solid waste landfill decisions, the more likely these local governments are to employ some form of unit pricing as a disincentive to the continued unrestrained filling of landfills. The OPPE project director had the report internally peer reviewed by three EPA staff knowledgeable about unit pricing. The report was not externally peer reviewed, he said, because it is designed to be used only as a reference guide by communities that are considering implementing some type of unit pricing program to reduce waste, and because EPA does not intend to use the report to support any regulatory actions. The Alaska Juneau (AJ) Gold Mine project was a proposal by the Echo Bay, Alaska, company to reopen the former mine near Juneau. The proposal entailed mining approximately 22,500 tons of ore per day and, after crushing and grinding the ore, recovering gold through the froth flotation and carbon-in-leach (also called cyanide leach) processes. After the destruction of residual cyanide, the mine tailings would be discharged in a slurry form to an impoundment that would be created in Sheep Creek Valley, four miles south of downtown Juneau. An environmental impact statement was prepared on the proposal in 1992. Because the project would require permits for fill materials and discharging wastewater into surface waters, EPA’s regional staff developed a model to predict the environmental ramifications of the proposal. According to regional staff, a careful analysis of the proposal was important because the issues in this proposal could potentially set a precedent for similar future proposals. EPA went through three iterations of the model. The first model was presented in a report entitled A Simple Model for Metals in the Proposed AJ Mine Tailings Pond. The report was reviewed by an engineer in EPA’s Environmental Research Laboratory and a firm that worked for the City and Borough of Juneau. The second model was a customized version of one developed by EPA’s Research Laboratory. After receiving comments from the firm representing Echo Bay, ORD laboratories, the Corps of Engineers, and others, EPA decided to also use another model to evaluate the proposal’s potential environmental effects. In 1994, EPA prepared a technical analysis report on the proposal. The report received peer review by several of the same individuals who commented on the models, as well as others. Although the reviewers had expertise in the subject matter, several were not independent of the product’s development or its regulatory and/or financial ramifications. Based partially on the model’s predictions, it became evident that EPA would withhold permit approval for the project. 
Accordingly, Echo Bay developed an alternative design for its project. In May 1995, EPA hired a contractor to prepare a supplemental environmental impact statement that will assess the revised project’s ecological effects. The agency plans to have the impact statement peer reviewed. Under the Resource Conservation and Recovery Act (RCRA), EPA is not only responsible for controlling hazardous wastes but also for establishing procedures for determining when hazardous wastes are no longer a health and/or ecological concern. As such, EPA’s Office of Solid Waste (OSW) developed a new methodology for establishing the conditions under which wastes listed as hazardous may be delisted. This methodology was presented in an OSW report, Development of Human Health Based and Ecologically Based Exit Criteria for the Hazardous Waste Identification Project (March 3, 1995), which was intended to support the Hazardous Waste Identification Rule. The intent of this rule is to establish human health-based and ecologically based waste constituent concentrations—known as exit criteria—below which listed hazardous wastes would be reclassified and delisted. Such wastes could then be handled as a nonhazardous solid waste under other provisions of RCRA. OSW’s support document describes a proposed methodology for calculating the exit concentrations of 192 chemicals for humans and about 50 chemicals of ecological concern for five types of hazardous waste sources; numerous release, transport, and exposure pathways; and for biological effects information. An SAB subcommittee that reviewed the proposed methodology concluded: “The Subcommittee is seriously concerned about the level of scientific input and the degree of professional judgment that, to date, have been incorporated into the methodology development. It was clear to the Subcommittee that there has been inadequate attention given to the state-of-the-science for human and ecological risk assessment that exists within EPA, let alone in the broader scientific community, in the development of the overall methodology, the identification of individual equations and associated parameters, the selection of models and their applicability, and the continual need for sound scientific judgment.” The SAB also noted that further peer review of individual elements of the proposed methodology is essential before the scientific basis can be established. The SAB concluded that the methodology at present lacks the scientific defensibility for its intended regulatory use. According to SAB’s Executive Director, this is a case where the program office’s decision to not conduct a peer review of the key supporting elements of a larger project resulted in extra cost and time to the agency, as well as missed deadlines. He pointed out that the experience on this one effort had, he believed, caused a cultural change in the Office of Solid Waste, to the extent that they now plan to have peer consultation with the SAB on several upcoming lines of effort. Mobile 5A, also known as the mobile source emissions factor model, is a computer program that estimates the emissions of hydrocarbons, carbon monoxide, and nitrogen oxide for eight different types of gasoline-fueled and diesel highway motor vehicles. The first mobile model, made available for use in 1978, provided emissions estimates only for tailpipe exhaust emissions from passenger cars.
Since that time, major updates and improvements to the mobile model have resulted in the addition of emissions estimates for evaporative (nontailpipe exhaust) emissions and for uncorrected in-use deterioration due to tampering or poor maintenance, according to the OMS Emission Inventory Group Manager. Also, other categories of vehicles, such as light-duty trucks and motorcycles, have been added over the years, she said. The development of the next-generation model, Mobile 6, is currently under way. As with other models, the mobile model exists because precise information about the emissions behavior of the approximately 200 million vehicles in use in the United States is not known, according to the Group Manager. The primary use of the mobile model is in calculating the estimated emissions reductions benefits of various actions when applied to the mobile sources in an area. For example, the mobile model can estimate the impact of participating in a reformulated gasoline program, or of using oxygenated fuels in an area, or of requiring periodic inspection and maintenance of selected vehicle categories. In essence, the mobile model is one of the primary tools that EPA, states, and localities use to measure the estimated emissions reduction effectiveness of the pollution control activities called for in State Implementation Plans. None of the previous mobile models has been peer reviewed. However, EPA has obtained external views on the model through stakeholders’ workshops and experts’ meetings; one of the largest of these meetings involved over 200 stakeholders, according to OMS officials. The agency recognizes that these workshops and meetings are not a substitute for peer review and, in a reversal of the agency’s views of 10 months ago, EPA now plans to have Mobile 6 peer reviewed, they said. Several constraints, such as the limited number of unbiased experts available to do peer review in some fields and limited resources for compensating reviewers, still have to be overcome, they added. Tributyl tin (TBT) is a compound used since the 1960s as an antifouling ingredient for marine paints. In the 1970s, antifouling paints were found to adversely affect the environment. Although restrictions were placed on TBT by the United States and a number of other countries in the 1980s, elevated levels of TBT continue to be found in marine ecosystems. In light of the uncertain human health and environmental effects of TBT, an interagency group consisting of EPA Region 10 officials, the Washington State Departments of Ecology and Natural Resources, the National Oceanic and Atmospheric Administration, the U.S. Army Corps of Engineers, and others was formed to derive a marine/estuarine sediment effects-based cleanup level (or screening level) for TBT. In April 1996, a contractor-prepared report was issued with recommended screening levels; EPA regional staff served as the project managers and made significant contributions to the revisions to and final production of the report. Although an EPA project manager maintains that the report was peer reviewed, the reviews did not meet the requirements of either EPA’s peer review policy or the region’s standard operating procedures for conducting peer reviews.
While the report was reviewed by members of the interagency group, other experts who provided input to the report, the affected regulated community, and the general public, there was not an independent review by experts not associated with preparing the report or by those without a stake in its conclusions and recommendations. When we explained to the project manager why EPA’s Science Policy Council characterized the report as not having received peer review, the project manager acknowledged that she was not familiar with either EPA’s peer review policy or the region’s standard operating procedures. EPA is currently in the process of responding to the comments it has received. Major contributors to this report: James R. Beusse, Senior Evaluator; Philip L. Bartholomew, Staff Evaluator. | Pursuant to a congressional request, GAO reviewed the Environmental Protection Agency's (EPA): (1) progress in implementing its peer review policy; and (2) efforts to improve the peer review process.
GAO found that: (1) although EPA has made progress in implementing its peer review policy, after nearly 2 years, implementation remains uneven; (2) while GAO found cases in which the peer review policy was followed, GAO also found cases in which important aspects of the policy were not followed or peer review was not conducted at all; (3) two primary reasons for this unevenness are: (a) confusion among agency staff and management about what peer review is, what its significance and benefits are, and how and when it should be conducted; and (b) inadequate accountability and oversight mechanisms to ensure that all relevant products are properly peer reviewed; (4) EPA officials readily acknowledged this uneven implementation and identified several of the agency's efforts to improve the peer review process; (5) because of concern about the effectiveness of the existing accountability and oversight mechanisms for ensuring proper peer review, EPA's Deputy Administrator recently established procedures to help build accountability and demonstrate EPA's commitment to the independent review of the scientific analyses underlying the agency's decisions; (6) these efforts are steps in the right direction; however, educating all staff about the merits of and procedures for conducting peer review would increase the likelihood that peer review is properly implemented agencywide; and (7) furthermore, by ensuring that all relevant products have been considered for peer review and that the reasons for those not selected have been documented, EPA's upper-level managers will have the necessary information to ensure that the policy is properly implemented. |
The New Markets Tax Credit (NMTC) was enacted by the Community Renewal Tax Relief Act of 2000 (P.L. 106-554) to provide an incentive to stimulate investment in low-income communities (LIC). The original allocation authority for the NMTC program was $15 billion for 2001 through 2007. Congress has subsequently increased the total allocation authority to $61 billion and extended the program through 2019. Qualified investment groups apply to the U.S. Department of the Treasury's Community Development Financial Institutions Fund (CDFI) for an allocation of the New Markets Tax Credit. The investment group, known as a Community Development Entity (CDE), seeks taxpayers to make qualifying equity investments in the CDE. The CDE then makes equity investments in low-income communities and low-income community businesses, all of which must be qualified. After the CDE is awarded a tax credit allocation, the CDE is authorized to offer the tax credits to private equity investors in the CDE. The tax credit value is 39% of the cost of the qualified equity investment and is claimed over a seven-year credit allowance period. In each of the first three years of the investment, the investor receives a credit equal to 5% of the total amount paid for the stock or capital interest at the time of purchase. For the final four years, the value of the credit is 6% annually. Investors must retain their interest in a qualified equity investment throughout the seven-year period. The 114th Congress extended the NMTC program authorization with the Protecting Americans from Tax Hikes (PATH) Act (Division Q of P.L. 114-113), which extended the NMTC authorization through 2019 at $3.5 billion per year. The process by which the NMTC affects eligible low-income communities involves multiple agents and steps. Figure 1 illustrates the key agents in the NMTC process. The multiple steps and agents are designed to ensure that the tax credit achieves its primary goal: encouraging investment in low-income communities. For example, the Treasury Department's CDFI reviews NMTC applications submitted by CDEs, issues tax credit authority to those CDEs deemed most qualified, and plays a significant role in program compliance. A CDE is a domestic corporation or partnership that is an intermediary vehicle for the provision of loans, investment funding, or financial counseling in low-income communities (LICs). To become certified as a CDE, an organization must submit an application to the CDFI that demonstrates that it meets three criteria: (1) it is a domestic corporation or partnership duly organized under the laws of the jurisdiction in which it is incorporated, (2) it has a primary mission of serving low-income communities, and (3) it maintains accountability to residents of these low-income communities. A CDE may demonstrate meeting the third criterion by filling at least 20% of either its advisory or its governing board positions with representatives of low-income communities. Only CDEs may apply for the NMTC. Upon receipt of an NMTC allocation, CDEs attract investors using the credits. While both for-profit and nonprofit CDEs may apply for the NMTC, only for-profit CDEs may pass the NMTC on to investors. To ensure that projects are selected on economic merit, nonprofit CDEs awarded NMTCs must transfer their allocations to for-profit subsidiaries prior to offering NMTCs to investors. As Figure 1 illustrates, CDEs play a critical role in a properly functioning NMTC process.
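As a quick arithmetic check on the credit schedule just described, the following minimal Python sketch works through the seven-year claim; the $1 million investment amount is hypothetical and chosen only to illustrate the rates.

investment = 1_000_000                   # hypothetical qualified equity investment
rates = [0.05] * 3 + [0.06] * 4          # 5% in years 1-3, 6% in years 4-7
for year, rate in enumerate(rates, start=1):
    print(f"year {year}: ${investment * rate:,.0f}")
print(f"total: ${investment * sum(rates):,.0f} ({sum(rates):.0%} of the investment)")

Run on this example, the schedule yields $50,000 in each of the first three years and $60,000 in each of the final four, for a total credit of $390,000, or 39% of the investment.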
CDEs are the intermediaries between the potential low-income community investments and the CDFI during the application process. CDEs also present investors with investment opportunities and provide the CDFI with the majority of its compliance data. Under the tax code's NMTC provisions, only investments in qualifying low-income communities are eligible for the NMTC. Qualifying low-income communities include census tracts that meet at least one of the following criteria: (1) the tract has a poverty rate of at least 20%; (2) if located in a metropolitan area, the tract has a median family income below 80% of the greater of the statewide or metropolitan area median family income; or (3) if located outside a metropolitan area, the tract has a median family income below 80% of the statewide median family income. As defined by the criteria above, about 39% of the nation's census tracts covering nearly 36% of the U.S. population are eligible for the NMTC. Additionally, designated targeted populations may be treated as low-income communities. Further, the definition of a low-income community includes census tracts with low populations and census tracts within high migration rural counties. As a result of the definition of qualified low-income communities, virtually all of the country's census tracts are potentially eligible for the NMTC. All taxable investors are eligible to receive the NMTC. As noted above, investors receiving the credit can claim the NMTC over a seven-year period, starting on the date of the investment and on each anniversary, at a rate of 5% for each of the first three years and a rate of 6% for each of the next four years, for a total of 39%. Once the investor begins claiming the NMTC, the credit can be recaptured if the CDE (1) ceases to be a CDE, (2) fails to use substantially all of the proceeds for eligible purposes, or (3) redeems the investment principal. Almost all qualified equity investments (QEI) in low-income communities or serving low-income populations could be eligible to receive the NMTC. These eligible investments are referred to as qualified low-income community investments (QLICIs). QLICIs are categorized in four ways: (1) loans or investments to qualified active low-income community businesses (QALICB), (2) the provision of financial counseling, (3) loans or investments in other CDEs, and (4) the purchase of loans from other CDEs. All QLICIs, including QALICBs, are explicitly prohibited from investing in residential rental property and certain types of businesses, such as golf courses and casinos. To receive an allocation, a CDE must submit an application to the CDFI, which asks a series of standardized questions about the track record of the CDE, the amount of NMTC allocation authority being requested, and the CDE's plans for any allocation authority granted. The application covers four areas: (1) the CDE's business strategy to invest in low-income communities, (2) capitalization strategy to raise equity from investors, (3) management capacity, and (4) expected impact on jobs and economic growth in low-income communities where investments are to be made. In addition, priority points are available for addressing the statutory priorities of investing in unrelated entities and having demonstrated a track record of serving disadvantaged businesses or communities. The application is reviewed and scored to identify those applicants most likely to have the greatest community development impact and ranked in descending order of aggregate score.
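Looking back at the census-tract criteria quoted earlier in this section, the eligibility test reduces to a simple predicate. A minimal Python sketch follows; the function and parameter names are hypothetical and do not come from any CDFI software.

def tract_qualifies(poverty_rate, tract_mfi, state_mfi, metro_mfi=None):
    """True if a census tract meets at least one low-income criterion."""
    if poverty_rate >= 0.20:                      # criterion (1): poverty rate of at least 20%
        return True
    if metro_mfi is not None:                     # criterion (2): metropolitan tract
        return tract_mfi < 0.80 * max(state_mfi, metro_mfi)
    return tract_mfi < 0.80 * state_mfi          # criterion (3): non-metropolitan tract

Note that this sketch omits the broadening provisions mentioned above (targeted populations, low-population tracts, and high migration rural counties), which is why far more tracts are potentially eligible in practice than the base criteria alone suggest.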
Tax credit allocations are then awarded based upon the aggregate ranking, until all of the allocation authority is exhausted. In each of the completed NMTC rounds, significantly more CDEs applied for allocations than were able to receive allocations. For example, in the most recent completed allocation round (2014), nearly 29% of applicants received allocations. Additionally, allocation authority of $19.9 billion was requested, compared with the $3.5 billion in allocation authority available. Prior to receiving the authority to offer tax credits to investors, every CDE allocatee must sign an allocation agreement. The allocation agreement clarifies the terms and conditions of the allocation authority, such as the total tax credit authority, service areas, authorized uses of the allocation, and CDE reporting requirements. Failing to meet the terms of the allocation agreement subjects the CDE to the potential revocation of allocation authority. Additionally, the Internal Revenue Service (IRS) monitors compliance with the tax consequences of NMTC allocations, focusing on the "substantially all" requirement. As specified in the IRS regulations, CDE allocatees must issue tax credits to investors within five years of signing their allocation agreements and invest the QEIs in QLICIs within 12 months of signing their allocation agreements. If these requirements are not satisfied, the CDE loses the authority to allocate the unused NMTC. In addition, CDEs that receive principal payments from their QLICIs have 12 months to reinvest those funds in QLICIs to avoid recapture. Once an allocatee signs its allocation agreement and receives its NMTC allocation authority, it may begin soliciting capital from investors. Column 2 of Table 1 lists the total allocation awarded to date under the NMTC program, by funding round. Investors receive the right to claim NMTCs on a portion of their investment, by acquiring stock or a capital interest in a CDE with an allocation. The CDE, in turn, must invest the proceeds in qualified low-income community investments. Investors have, to date, invested roughly $42.2 billion in CDEs. Columns 3 and 4 list the remaining available NMTC allocation, in dollars and as a percentage, that has not yet been allocated to an investor, by round. Modifications to the NMTC program have been made in each Congress since the NMTC was created. In the 108th Congress, the American Jobs Creation Act of 2004 (P.L. 108-357, 118 Stat. 1418) included provisions expanding the authority of the Secretary of the Treasury to treat certain other tracts and targeted populations as low-income communities. During the 109th Congress, the Gulf Opportunity Zone Act of 2005 (P.L. 109-135, 119 Stat. 2577) was enacted to provide tax relief to businesses and individuals affected by Hurricanes Katrina, Wilma, and Rita. The bill, which created the Gulf Opportunity Zone (or GO Zone), provided an additional $1 billion in allocation authority to CDEs with a significant mission in the recovery and redevelopment of low-income communities in the Katrina GO Zone. Also during the 109th Congress, the Tax Relief and Health Care Act of 2006 (P.L. 109-432, 120 Stat. 2922) extended the NMTC for one year, through 2008, with an additional allocation of $3.5 billion and directed Treasury to promulgate regulations to ensure that non-metropolitan counties receive a proportional allocation of investments under the NMTC. In the 110th Congress, legislative attention focused primarily on extending the NMTC program authorization.
This attention came to fruition with the enactment of P.L. 110-343, which extended the NMTC program authorization one year, through the end of 2009. This one-year extension was also proposed in H.R. 6049, S. 2886, S. 3098, and H.R. 7060. In addition, H.R. 2075 and S. 1239 proposed to extend the NMTC through 2013 and would have authorized allocations of $3.5 billion, indexed for inflation, for each of those years. Also introduced during the 110th Congress, H.R. 3907 proposed to make the NMTC permanent. In the 111th Congress, the American Recovery and Reinvestment Tax Act of 2009 (P.L. 111-5) increased the NMTC allocation for 2008 and 2009 to $5 billion from $3.5 billion. For the 2008 allocation, the CDFI Fund distributed the additional $1.5 billion in NMTCs to applicants of the original 2008 allocation round. Also in the 111th Congress, the Tax Relief, Unemployment Insurance Reauthorization, and Job Creation Act of 2010 (P.L. 111-312) extended the NMTC authorization through 2011 at $3.5 billion per year. In the 112th Congress, the American Taxpayer Relief Act of 2012 (P.L. 112-240) extended the NMTC authorization for 2012 and 2013 at $3.5 billion per year. In the 113th Congress, the Tax Increase Prevention Act of 2014 (P.L. 113-295) extended the NMTC authorization for 2014 at $3.5 billion. In the 114th Congress, the Protecting Americans from Tax Hikes (PATH) Act (Division Q of P.L. 114-113) extended the NMTC authorization through 2019 at $3.5 billion per year. The New Markets Tax Credit Program is set to expire at the end of 2019. As Congress debates reauthorization of the NMTC, the following policy considerations could be pertinent to that debate. The NMTC is primarily intended to encourage private capital investment in eligible low-income communities. However, the source of the investment funds has implications for the effectiveness of the program in achieving its objective. From an economic perspective, the impact of the NMTC would be greatest in the case where the investment represents new investment in the U.S. economy that would not have occurred in the absence of the program. Conversely, the impact of the NMTC is diminished to the extent the tax credit is applied to investment that would have otherwise occurred or been funded by a shift in investment from more productive alternatives. To date, only one study has empirically assessed the question of whether NMTC investment is funded through shifted investment or whether it represents new investment. The findings of the study suggest that corporate investment represented a shift in investment location. In contrast, the authors concluded that a portion of individual NMTC investment, roughly $641 million, represented new investment. Although important, understanding the source of NMTC investments alone is not sufficient to determine the effectiveness of the NMTC. A comprehensive review of the program would require an accounting of both the social and economic costs and benefits of the NMTC, an undertaking that may pose considerable challenges. For example, this would include examining the efficiency and opportunity costs of the NMTC investments, while a comprehensive accounting of the NMTC benefits would need to identify and value "spillovers" such as its effect on neighboring businesses and communities. The most comprehensive evaluation of the NMTC, to date, was conducted by the Urban Institute under contract from the CDFI Fund.
While the evaluation's final report found project-level activity consistent with the NMTC achieving program goals, it was unable to generalize its findings to the broader universe of NMTC activity or census-tract-level outcomes due to evaluation design limitations. The report noted it was an initial effort toward a more robust research plan that has not yet been implemented. Others, notably GAO, have recommended that the NMTC be simplified by converting it into a grant program. This option may be able to deliver the same level of incentive with lower cost to the government, as investors do not generally "buy" tax credits at face value, allowing a smaller grant to provide a similar level of incentive. Specifically, tax credit markets historically set a price of 70 to 80 cents per dollar of tax credit, with lower valuations in recent years due to tight credit markets and decreased corporate profits. A grant option, however, likely provides a lesser incentive for investors to invest in NMTC projects, as they may not be the beneficiaries of the incentive. If this occurs, improving access to capital in low-income communities (an NMTC program goal) would be more difficult. Further, a grant program may complicate existing mechanisms designed to ensure the NMTC is used for intended purposes: CDEs are likely less capitalized than investors, and the IRS's authority to recapture benefits could be removed (as the NMTC would no longer be a tax program). An additional issue is the geographic distribution of NMTC activity. Initial concerns focused on distinctions between urban and rural NMTC activity and have been addressed through legislation. Further, NMTC activity has occurred in all 50 states, the District of Columbia, and Puerto Rico. However, the distribution of NMTC activity appears concentrated in a few states, with the 10 states with the highest activity accounting for just over 50% of all NMTC activity. In contrast, the 25 states with the least NMTC activity account for less than 14% of all NMTC activity. The current distribution of activity is not likely to reflect the distribution of low-income populations and may raise questions concerning the equity of the NMTC. Finally, the NMTC is one of several programs designed to improve conditions in low-income communities. In a 2004 assessment of the NMTC, the Office of Management and Budget noted that the goal of the NMTC overlaps that of several other tax credits and numerous programs administered by the Departments of Housing and Urban Development and Commerce. Given this overlap and the desire to target federal funds to their most productive uses, it follows that information on the performance of the NMTC relative to other programs with a similar goal would be of use. To date, however, no comparative, empirical study of this nature has been undertaken. | The New Markets Tax Credit (NMTC) is a non-refundable tax credit intended to encourage private capital investment in eligible, impoverished, low-income communities. NMTCs are allocated by the Community Development Financial Institutions Fund (CDFI), a bureau within the United States Department of the Treasury, under a competitive application process. Investors who make qualified equity investments reduce their federal income tax liability by claiming the credit. The NMTC program, enacted in 2000, is currently authorized to allocate $61 billion through the end of 2019. To date, the CDFI has made 912 awards totaling $43.5 billion in NMTC allocation authority.
Demand for NMTC allocations has exceeded total allocations each award round—with the 2014 allocation round awarding $3.5 billion in allocation authority from applications requesting approximately $19.9 billion in NMTCs. The most recent program extension was made in the 114th Congress. The Protecting Americans from Tax Hikes (PATH) Act (Division Q of P.L. 114-113) extended the NMTC authorization through 2019 at $3.5 billion per year. This report will be updated as warranted by legislative changes. |
The Maya society at Ceibal, Guatemala, collapsed twice, each time profoundly transforming the political systems which had been in place before, scientists have said. Using radiocarbon dating, they have come up with one of the most precise chronologies to date of the events that led to the civilisation's demise.
The collapse of the Maya civilisation during the classic era, around the 9th century, has been well studied, in part thanks to rich hieroglyphic records. Cities were abandoned and the sophisticated culture fell into oblivion as a result of increased warfare and gradual political decline.
In contrast, even though there is evidence that another collapse occurred more than six centuries before, very little is known about it.
The study now published in PNAS looks at how both collapses unfolded, dating precisely the events leading up to each, and showing parallels between these two moments of Maya history.
Chronology at Ceibal
Since 2005, a team of researchers has been running a project known as the Ceibal-Petexbatun Archaeological Project at the ancient Maya site of Ceibal. This site has a long and rich history of occupation spanning nearly 2,000 years, from the so-called "Preclassic period" to the "Classic period" (from 1000 BC to AD 950). Since their work there began, archaeologists have determined 154 radiocarbon dates from charcoal samples recovered at the site.
In this research, they used these dates and conducted an in-depth analysis of ceramics discovered at Ceibal to come up with a precise chronology of the site's history. This allowed them to trace the trajectories of the first, Preclassic collapse around AD 150–300 and the second, Classic collapse around AD 800–950.
In the two cases, they found that similar factors and social contexts could be blamed for causing the demise of the Maya society. The researchers established that violent warfare intensified around 75 BC and AD 735 respectively. Bloody conflicts were then followed by social unrest and the political disintegration of multiple centres across the Maya lowlands, around AD 150 and 810.
However, the outcomes of the two collapses were different and resulted in very different reorganisations of the political sphere.
In the wake of the Preclassic collapse, political power was centralised, with the development of dynasties with a divine ruler. By contrast, following the second collapse in the Classic period, this political system based on divine and authoritarian rulers evolved toward a more decentralised organisation and structure of power, with a stronger reliance on seaborne trade. |||||
Significance
Tracing political change through refined chronologies is a critical step for the study of social dynamics. Whereas coarse chronologies can give an impression of gradual change, better temporal control may reveal multiple episodes of rapid disruption comprised within that span. Precise dating through radiocarbon determinations and ceramic studies is particularly important for the study of the Preclassic collapse, which lacks calendrical dates recorded in texts. The high-precision chronology of Ceibal revealed waves of decline over the course of the Preclassic and Classic collapses at a temporal resolution that was not possible before. The emerging understanding of similarities and differences in the two cases of collapse provides an important basis for evaluating the vulnerability and resilience of Maya political systems.
Abstract
The lowland Maya site of Ceibal, Guatemala, had a long history of occupation, spanning from the Middle Preclassic period through the Terminal Classic (1000 BC to AD 950). The Ceibal-Petexbatun Archaeological Project has been conducting archaeological investigations at this site since 2005 and has obtained 154 radiocarbon dates, which represent the largest collection of radiocarbon assays from a single Maya site. The Bayesian analysis of these dates, combined with a detailed study of ceramics, allowed us to develop a high-precision chronology for Ceibal. Through this chronology, we traced the trajectories of the Preclassic collapse around AD 150–300 and the Classic collapse around AD 800–950, revealing similar patterns in the two cases. Social instability started with the intensification of warfare around 75 BC and AD 735, respectively, followed by the fall of multiple centers across the Maya lowlands around AD 150 and 810. The population of Ceibal persisted for some time in both cases, but the center eventually experienced major decline around AD 300 and 900. Despite these similarities in their diachronic trajectories, the outcomes of these collapses were different, with the former associated with the development of dynasties centered on divine rulership and the latter leading to their downfalls. The Ceibal dynasty emerged during the period of low population after the Preclassic collapse, suggesting that this dynasty was established under the influence of, or through the direct intervention of, an external power.
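As background for readers unfamiliar with the method, the single-date calibration step at the core of such a Bayesian analysis is standard and, under a flat prior, can be written as follows (the paper's full model additionally constrains dates by stratigraphic ordering and ceramic phases, which this expression omits):

p(t \mid y) \propto \frac{1}{\sqrt{\sigma_y^2 + \sigma_c(t)^2}} \exp\!\left( -\frac{\bigl(y - \mu(t)\bigr)^2}{2\bigl(\sigma_y^2 + \sigma_c(t)^2\bigr)} \right) p(t)

where y is the measured radiocarbon age with laboratory error \sigma_y, \mu(t) and \sigma_c(t) are the calibration curve's mean and uncertainty at calendar date t, and p(t) is the prior over calendar dates.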
The processes of growth and decline of centralized polities represent a critical question in archaeological research. Particularly important moments of political changes in lowland Maya society include the decline of multiple centers at the end of the Preclassic period (around AD 150–300), the emergence of historically documented dynasties at various centers at the end of the Preclassic period and during the Early Classic period (AD 200–600), and the abandonment of many settlements at the end of the Classic period (around AD 800–950). The Classic collapse has long been an important issue in Maya archaeology (1–6). Scholars have presented various theories of its causes, including internal social problems, warfare, environmental degradation, and foreign invasions, although recent debates have focused on the effects of droughts (7–12). Scholars have more recently begun to address the Preclassic collapse, proposing droughts, the filling of lakes with eroded soils, and the intensification of warfare as its potential causes (13–15). The dynasties of some Classic-period Maya centers appear to have originated in the period slightly before or around the Preclassic collapse, complicating our understanding of social dynamics during this era. The following Early Classic period witnessed the emergence of more dynasties.
To understand how these episodes of political disintegration and centralization took place, we need to trace their processes through a refined chronology. Coarse chronologies tend to make these processes appear gradual by masking short-term changes. A higher-resolution chronology may reveal a sequence of rapid transformations contained within what appears to be a slow, gradual transition. Such a detailed understanding can provide critical insights into the nature of the social changes. Our intensive archaeological investigations at the center of Ceibal, Guatemala, have produced 154 radiocarbon dates, which represent the largest set of radiocarbon assays ever collected at a Maya site. Combined with a detailed ceramic sequence, this dataset presents an unprecedented opportunity to examine these critical periods of social change in the Maya area.
Ceibal
Ceibal (also spelled Seibal) is the largest site located in the Pasión region of the southwestern Maya lowlands (Fig. 1). The site is known for having one of the earliest ceramic complexes in the Maya lowlands, dating to 1000 BC, and for its late florescence amid the Classic collapse. Ceibal was originally investigated from 1964 through 1968 by the landmark expedition of the Harvard Project (HP) (16–18). The ceramic chronology established by Sabloff as part of this project provided a solid basis on which we developed our current study. Located in the Pasión region, Aguateca was studied from 1990 through 2005 by T.I., D.T., and K.A., providing 11 radiocarbon dates. Bachand excavated Punta de Chimino as part of the Aguateca Project and obtained 11 radiocarbon assays (19–21). We began to work at Ceibal as the Ceibal-Petexbatun Archaeological Project (CPAP) in 2005. Whereas our excavations originally focused on its ceremonial core, Group A, to document early buildings, we expanded our scope to examine a later elite complex, Group D, and the peripheral settlement (Fig. 2). The 154 radiocarbon dates obtained by the CPAP include samples from the ceremonial cores and the outlying residential zone, as well as 9 assays from the minor center of Caobal excavated by Munson (22–25).
Fig. 1. Map of the Maya lowlands with a close-up of the Pasión region.
Fig. 2. Map of Ceibal with a close-up of Group A.
At Maya sites with long occupation, such as Ceibal, old deposits were often reused for fills of later constructions, and thus many layers commonly contained old pieces of charcoal. To reduce problems of stratigraphic mixing, we collected carbon samples mainly from primary contexts, such as on-floor burned layers, burials, caches, and shallow middens. When such contexts were not available, we also took samples from construction fills but focused mainly on those containing materials moved from short-period deposits, such as transferred middens and dumps.
Analysis
We developed Bayesian models of our radiocarbon dates by combining information on stratigraphic sequences and ceramic phases. For the Classic period, we also incorporated calendrical dates. In the analysis of the radiocarbon dates, identifying stratigraphically mixed carbon samples and old wood was a critical step (26, 27). The OxCal program version 4.2 facilitated this process through the statistical identification of outliers and visual representations of probability distributions (28–30). The resulting refined calibrated dates helped us improve our ceramic chronology (Figs. 3 and 4, Figs. S1–S3, SI Text, Tables S1 and S2, and Datasets S1 and S2). Our ceramic analysis showed that the original chronology developed by Sabloff was sound. With detailed stratigraphic information from CPAP excavations and radiocarbon dates, we subdivided Sabloff's phases into shorter facets and established two new phases: Xate for the Terminal Preclassic (75 BC to AD 175) and Samat for the Postclassic (AD 1000–1200). The study of the Classic collapse through radiocarbon dating was challenging because calibrated radiocarbon dates from AD 700–950 typically had wide ranges of uncertainty resulting from a flat section and large bends in the calibration curve for this time period. In examining this problematic period, we relied primarily on textual information from inscriptions to identify precise timings of political changes (31, 32). We have reported the results of our chronological study for the Middle Preclassic period (1000–350 BC) in previous publications (23, 24). This article primarily addresses the Late and Terminal Preclassic (350 BC to AD 175) and Classic (AD 175–950) periods.
Fig. 3. Results of Bayesian analysis showing the probability distributions of phase boundaries.
Fig. 4. Chronological chart showing the ceramic phases of Ceibal and other Maya sites.
To examine social trajectories, scholars have commonly estimated population levels with data obtained through survey, surface collection, and test excavations (33). Settlement investigations by Tourtellot during the HP provided important data in this regard (34). Nonetheless, surface collection and small test excavations typically produce a limited quantity of artifacts per tested site and may lack strict control of stratigraphy and contexts. The resulting chronological information tends to be coarse. Our study emphasized deep stratigraphic excavations, in which most lots (units of contextual control) were assigned to specific facets of our high-resolution chronology. The frequencies of lots and ceramics dating to specific temporal spans should approximate the intensity of construction and economic activity during those periods. To examine diachronic trends, we calculated values adjusted for the different lengths of periods, which we called time-weighted lot indices (TWLIs) and time-weighted ceramic indices (TWCIs) (Fig. 5 and SI Text). We should note potential biases in these data. For example, the large values for the Real and Escoba phases resulted partly from our excavation strategies emphasizing early constructions in Group A. Likewise, the TWLIs for the Bayal phase were somewhat inflated because we often subdivided a final occupation layer into more than one lot. Thus, TWLIs and TWCIs do not translate directly into regional population levels, but they reflect general diachronic trends and help us identify moments of marked increase and decline in construction and economic activity.
Combined with Tourtellot's data on regional demographic estimates, our study traced social changes at a temporal resolution that was not possible before.
Fig. 5. TWLIs and TWCIs, approximating the intensities of construction and economic activity through time (SI Text and Tables S3–S4).
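The time-weighted indices described above reduce to a simple normalization: a count of lots or sherds divided by the length in years of the facet that contains them. The sketch below shows that computation under the simplest reading of the text; it is not the paper's actual procedure, which is defined in its SI Text. The counts are invented for illustration, and the Xate 1/Xate 2 boundary (AD 50 here) is an assumption, since only the Xate phase endpoints, the Xate 3 span, and the Junco 1 span are stated in the article.

```python
# Minimal sketch of a time-weighted lot index (TWLI), assuming it is
# simply count / facet duration. Counts are hypothetical, not the
# paper's data; the Xate 1/2 boundary is assumed. Year arithmetic
# ignores the absence of a year 0 in the historical calendar.

facets = {
    # facet: (start_year, end_year); negative values are years BC
    "Xate 1": (-75, 50),    # start per article; end boundary assumed
    "Xate 2": (50, 125),    # start assumed; end per Xate 3 span
    "Xate 3": (125, 175),   # per article
    "Junco 1": (175, 300),  # per article
}

# Hypothetical lot counts per facet (illustrative only)
lot_counts = {"Xate 1": 48, "Xate 2": 41, "Xate 3": 9, "Junco 1": 22}

def time_weighted_index(count: int, span: tuple[int, int]) -> float:
    """Return the count normalized by the facet's duration in years."""
    start, end = span
    return count / (end - start)

for facet, span in facets.items():
    twli = time_weighted_index(lot_counts[facet], span)
    print(f"{facet:8s} {twli:.3f} lots/year")
```

The point of the normalization is visible in the output: a short facet with few lots can still register more activity per year than a much longer facet with more lots, which is exactly the distortion that raw counts would introduce when facet lengths differ.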
Results
The first signs of social problems leading to the Preclassic collapse at Ceibal emerged at the beginning of the Xate phase, around 75 BC. Our study confirmed the observation by the HP researchers that the population of Ceibal declined significantly from the Cantutse phase to the Xate phase (Sabloff originally called them the Early and Late Cantutse phases, respectively) (18, 34). Xate ceramics at Ceibal corresponded to what Brady et al. defined as Protoclassic 1 ceramics for the Maya lowlands in general, which were characterized by pseudo-Usulutan decorations with parallel wavy lines and nubbin, conical, or hemispherical tetrapods (35). Besides these diagnostic traits, many Cantutse ceramic types continued into the following period, making the identification of Xate occupation challenging. Tourtellot's calculation of a 74% population drop may have underestimated Xate occupation, but TWLIs and TWCIs also decreased drastically during the Xate 1 facet, suggesting that the decline in activity was real. An important change during this period was the establishment of Group D on a defensible hill surrounded by steep gullies and an escarpment. Although some residential groups in the periphery of Group D may have started during the Cantutse phase, as indicated by the HP archaeologists, our excavations demonstrated that the initial constructions of the ceremonial core of Group D dated to the Xate 1 facet. It is likely that the decline of Ceibal around 75 BC was related to the intensification of warfare in the region. Activity levels in Group D and the outlying residential zone remained fairly constant from the Xate 1 to Xate 2 facet. The higher TWLI and TWCI of the Xate 2 facet in Group A resulted mostly from a large number of ritual caches deposited there. The TWLI and TWCI declined significantly during the Xate 3 facet (AD 125–175), particularly in Group D and the residential zone. The values of this period for Group A are again inflated by numerous ritual deposits, although a similar decline in construction likely occurred in Group A as well. Group D and the residential zone regained some vigor during the Junco 1 facet (AD 175–300), which corresponded to Brady et al.'s Protoclassic 2 phase (35), characterized by Ixcanrio Polychrome and bulbous mammiform tetrapods. Tourtellot recorded a considerable number of Junco loci and suggested the continuity of Junco occupation from the Preclassic period. We suspect that a substantial portion of the Junco occupation identified by the HP researchers dated more specifically to the Junco 1 facet. All of the studied areas, however, experienced a drastic decline at the end of this period, around AD 300. Some scholars have suggested that the ceramic types of the Preclassic period continued to be produced during the Early Classic period in the southwestern lowlands and that the assignments of these ceramics to the Preclassic resulted in significant underrepresentations of Early Classic populations (36, 37). Our ceramic study, however, showed that these ceramics could be confidently separated, and the Early Classic population decline was real. Many parts of Group D and the residential zone were deserted, and some minor temples in outlying areas were intentionally buried with black soils (34). Only a small population remained at Ceibal during the Junco 2 facet (AD 300–400). The population level of Ceibal remained low throughout the Junco 2, 3, and 4 facets (AD 300–600).
Remarkably, the Ceibal dynasty appears to have been established during this dark age of the center. The Ceibal Hieroglyphic Stairway, dedicated in AD 751, retrospectively mentions an early ruler possessing the Ceibal emblem glyph (dynastic title), who was active in AD 415 (38, 39). The reign of this individual at the beginning of the Junco 3 facet may have represented the origins of the Ceibal dynasty, although the inscription does not specify him as the dynastic founder. Excavations in Platform A-2 and the East Court of Group A, as well as the Karinel Group located near Group A, uncovered Junco 3 ceramics, which closely resemble those from central Petén, including Dos Arroyos Polychrome and Balanza Black vessels with basal flanges, as well as a small number of Teotihuacan-inspired tripod vases. It is probable that the Ceibal dynasty was established under influence from, or through the direct intervention of, central Petén groups, possibly the growing center of Tikal. Notably, the largest concentration of Junco 3 ceramics was found in Platform A-2 located on the southern side of the South Plaza, which was likely a focus of elite activity during this period. This location may have mimicked the position of the Tikal royal palace, the South Acropolis. The HP researchers suggested that Ceibal was virtually abandoned during the sixth century, which made scholars wonder how the line of this early ruler connected to the Late Classic dynasty of Ceibal (40). Our research identified Junco 4 occupation dating to this assumed period of abandonment, which indicates that there was some continuity in the population of Ceibal, albeit diminished, from the Junco phase to the Late Classic Tepejilote phase. Tikal's influence over the Pasión region appears to have ceased after its defeat in AD 562 (41), but the Ceibal dynasty may have persisted. After rapid population growth during the Tepejilote 1 facet (AD 600–700), the Classic-period decline of Ceibal started with its defeat by the Dos Pilas-Aguateca dynasty in AD 735, during the Tepejilote 2 facet (AD 700–750). Construction and economic activity dropped significantly during the following Tepejilote 3 facet (AD 750–810). An illegitimate ruler named Ajaw Bot, who did not use the Ceibal emblem glyph, appears to have placed his palace in the defensible location of Group D, probably as a response to the intensification of warfare during this period (42). The number of bifacial points, possibly used as weapons, also increased significantly (43). The reign of Ajaw Bot ended shortly after the dedication of his last monuments in AD 800, and Ceibal underwent a hiatus in monument erection until AD 849. Excavations at Group D by Bazy demonstrated that many buildings were ritually destroyed, most likely at the end of Ajaw Bot's rule (44). The relatively high TWLI and TWCI of Group D for the Tepejilote 3 facet resulted from Bazy's excavation strategy targeting these termination deposits. We were not able to subdivide the Bayal phase (AD 810–950), and thus the TWLI and TWCI for this period were not refined enough to trace the social trend associated with this political disruption. Nonetheless, investigations by the HP and CPAP indicate that a considerable number of peripheral groups were abandoned or exhibited little activity during the Bayal phase, which possibly reflects the social effects of Ajaw Bot's fall.
The arrival of a new ruler holding the Ceibal emblem glyph in AD 829, whose name may be read as Wat’ul K’atel, heralded a revival of Ceibal during the Bayal phase (45). This political regime, however, collapsed soon after AD 889, the last date recorded on monuments. The royal palace located in the East Court and some temples in Group A were destroyed, and Ceibal was completely abandoned (46).
Discussion
Whereas the Classic collapse has a long history of study, aided by rich hieroglyphic records, the understanding of the Preclassic collapse is more limited. In this regard, our high-resolution chronology provides particularly important information on the latter. The trajectory of the Preclassic collapse at Ceibal exhibits a notable resemblance to that of the Classic collapse, with multiple waves of decline followed by short episodes of limited recovery. In both cases, the first signs of social problems appear to have been related to the intensification of warfare. Probable fortifications dating to the Late or Terminal Preclassic period are found at other Maya sites, including El Mirador, Becan, Edzna, Cerros, Muralla de León, Cival, Chaak Ak'al, and multiple hilltop sites along the Upper Usumacinta River (47–55). Although it is not clear whether other Maya communities experienced decline during their Xate 1-corresponding periods, the construction of Group D in a defensible location at Ceibal was probably part of the growing social instability throughout the Maya lowlands around this time. The process of the Classic collapse in the Pasión region also began with the escalation of violent conflicts. Following the defeat of Ceibal in AD 735, Dos Pilas was also vanquished in AD 761 (56, 57). Then, a series of defensive walls were constructed at Dos Pilas and Aguateca, and Group D again became the center of elite occupation (58, 59). Violent encounters appear to have increased in other parts of the Maya lowlands as well (10). In both the Preclassic and Classic collapses, early signs of decline were followed by a wave of drastic political disintegration throughout the Maya lowlands. In the former period, the major center of El Mirador and other Maya communities declined around AD 150–175. The fall of El Mirador appears to have occurred during, or at the end of, the Xate 3-corresponding period, that is, the Protoclassic 1 phase, because of the presence of ceramics with pseudo-Usulutan decorations and tetrapods and the absence of Ixcanrio Polychrome and bulbous mammiform supports in abandonment layers at El Mirador (60). Similarly, during the Classic collapse a major wave of political disintegration occurred around AD 810, which affected many centers over a wide area. Social impacts were particularly profound in the southwestern lowlands, where Aguateca, Cancuen, Yaxchilan, Piedras Negras, and Palenque declined or were abandoned in a short period (61, 62). The fall of Ajaw Bot at Ceibal was most likely tied to this regional process. Centers in other parts of the southern lowlands, such as Tikal, Calakmul, Tonina, and Copan, experienced somewhat more gradual decline or a hiatus in monument erection, although signs of political problems were not so clear in the northern lowlands (63). These major waves of political disintegration in both the Preclassic and Classic collapses may have corresponded with prolonged droughts, which may have exacerbated deteriorating social conditions (6, 12, 14, 15). Processes following the wave of major collapse during the Preclassic and Classic periods appear to have varied in different regions. At Ceibal, the decline around AD 300 was the most profound in the course of the Preclassic collapse. The Belizean site of Cerros may also have been abandoned around AD 300, following a major decline in population and construction around AD 150–175 (64, 65). It is not clear how widespread the wave of collapse around AD 300 was in other areas.
In the central lowlands, dynastic rule solidified during this period. In the case of the Classic collapse, the ninth century witnessed political recovery at a limited number of southern lowland centers, including Ceibal, Tonina, Tikal, and Calakmul, and the prosperity of northern communities, including Uxmal, other Puuc centers, Chichen Itza, and Ek Balam. Many of these centers declined around AD 900–950, and only Chichen Itza continued as a powerful center for another century or so (66). An intriguing question is the relation between the Preclassic collapse and the origins of Maya dynasties. Although the initial development of rulership can be traced back at least to the Late Preclassic period, as suggested by the San Bartolo murals, it was around the first century AD that historically recorded dynasties and royal tombs emerged at Tikal and possibly at other centers in the central lowlands (63, 67). These early dynasties probably predated and survived the major wave of collapse around AD 150–175. Centers in the peripheral zones of the Maya lowlands likely had some forms of political centralization during the Preclassic period, but the historically known dynasties of these regions, such as Ceibal, Yaxchilan, Piedras Negras, Palenque, and Copan, appear to have originated during the fourth and fifth centuries, in some cases through connections with the developed dynasties of the central lowlands. Tikal, in particular, appears to have spread its political influence to Ceibal and other parts of the Pasión region, whose population levels continued to be low in the wake of the Preclassic collapse (41). The Preclassic and Classic collapses exhibited tantalizing similarities in their diachronic patterns with multiple waves of political disruption, but they differed significantly in terms of the resulting forms of political organization. Whereas the former was tied to the development of dynasties with divine rulership, the latter led to the decline of this political system toward more decentralized organization and a stronger reliance on seaborne trade (5). Further analysis of these processes based on high-resolution chronologies should provide important insights into the vulnerability and resilience of these political systems.
Acknowledgments
We thank two anonymous reviewers for their comments. Investigations at Ceibal were carried out with permits issued by the Instituto de Antropología e Historia de Guatemala and were supported by the Alphawood Foundation; the National Geographic Society; National Science Foundation Grants BCS-0750808 and BCS-1518794; National Endowment for the Humanities Grant RZ-51209-10; the Agnese Nelms Haury Program of the University of Arizona; Ministry of Education, Culture, Sports, Science and Technology of Japan Grants-in-Aid for Scientific Research (KAKENHI) 21101003 and 21101002; and Japan Society for the Promotion of Science KAKENHI 21402008, 26101002, and 26101003.
Footnotes
Author contributions: T.I., D.T., J.M., and M.B. designed research; T.I., D.T., J.M., M.B., K.A., J.M.P., H.Y., F.P., and H.N. performed research; T.I. analyzed data; and T.I. wrote the paper.
The authors declare no conflict of interest.
This article is a PNAS Direct Submission.
This article contains supporting information online at www.pnas.org/lookup/suppl/doi:10.1073/pnas.1618022114/-/DCSupplemental. ||||| Using the largest set of radiocarbon dates ever obtained from a single Maya site, archaeologists have developed a high-precision chronology that sheds new light on patterns leading up to the two major collapses of the ancient civilization.
Archaeologists have long puzzled over what caused what is known as the Classic Maya collapse in the ninth century A.D., when many of the ancient civilization's cities were abandoned. More recent investigations have revealed that the Maya also experienced an earlier collapse in the second century A.D. — now called the Preclassic collapse — that is even more poorly understood.
University of Arizona archaeologist Takeshi Inomata and his colleagues suggest in a new paper, to be published in the Proceedings of the National Academy of Sciences, that both collapses followed similar trajectories, with multiple waves of social instability, warfare and political crises leading to the rapid fall of many city centers.
The findings are based on a highly refined chronology developed by Inomata and his colleagues using an unprecedented 154 radiocarbon dates from the archaeological site of Ceibal in Guatemala, where the team has worked for over a decade.
While more general chronologies might suggest that the Maya collapses occurred gradually, this new, more precise chronology indicates more complex patterns of political crises and recoveries leading up to each collapse.
"What we found out is that those two cases of collapse (Classic and Preclassic) follow similar patterns," said Inomata, the paper's lead author and a professor in the School of Anthropology in the UA College of Social and Behavioral Sciences. "It's not just a simple collapse, but there are waves of collapse. First, there are smaller waves, tied to warfare and some political instability, then comes the major collapse, in which many centers got abandoned. Then there was some recovery in some places, then another collapse."
Using radiocarbon dating and data from ceramics and highly controlled archaeological excavations, the researchers were able to establish the refined chronology of when population sizes and building construction increased and decreased at Ceibal.
While the findings may not solve the mystery of why exactly the Maya collapses occurred, they are an important step toward better understanding how they unfolded.
"It's really, really interesting that these collapses both look very similar, at very different time periods," said Melissa Burham, one of three UA anthropology graduate students who co-authored the paper. "We now have a good understanding of what the process looked like, that potentially can serve as a template for other people to try to see if they have a similar pattern at their (archaeological) sites in the same area."
Inomata and his UA colleagues — anthropology professor Daniela Triadan and students Burham, Jessica MacLellan and Juan Manuel Palomo — worked with collaborators at Ibaraki University, Naruto University of Education and the Graduate University for Advanced Studies in Japan, and with Guatemalan archaeologists and students.
Radiocarbon dating was done at Paleo Laboratory Company in Japan and at the Accelerator Mass Spectrometry Laboratory in the UA Department of Physics.
"Radiocarbon dating has been used for a long time, but now we're getting to an interesting period because it's getting more and more precise," said Inomata, who also is an Agnese Nelms Haury Chair in Environment and Social Justice at the UA. "We're getting to the point where we can get to the interesting social patterns because the chronology is refined enough, and the dating is precise enough."
Inomata's research was funded in part by the National Science Foundation, National Endowment for the Humanities, National Geographic Society, the Alphawood Foundation and the UA's Agnese Nelms Haury Program in Environment and Social Justice. | The Maya civilization suffered "waves" of war and political instability before its collapse in the 2nd century. The civilization later recovered, but history would repeat itself just a few hundred years later, delivering a final blow, researchers explain in a PNAS study offering a clear chronology of the civilization's demise. It's based on 154 radiocarbon dates from charcoal samples along with data from ceramics at the site of Ceibal in Guatemala which show "similar patterns" of warfare around 75 BC and AD 735, reports the International Business Times. Each was followed by a period of political upheaval, the first of which led into the Preclassic collapse of AD 150-300. Ceibal then saw a population decline and many other Maya cities were abandoned. But with "the development of dynasties centered on divine rulership," the civilization soon after rallied as power centralized, say researchers at the University of Arizona. However, warfare returned a few centuries later, resulting in a new period of political instability. This one led into the Classic collapse circa AD 800-950, when Ceibal again experienced a population decline, according to a release. But this time, power splintered and the Maya were unable to recover. Archaeologists now hope to survey other Maya sites for similar patterns of decline that may show why the Classic collapse proved fatal. (This Maya pyramid hides two others.) |
H.R. 6 was introduced by the House Democratic Leadership to revise certain tax and royalty policies for oil and natural gas and use the resulting revenue to support a reserve for energy efficiency and renewable energy. The bill is one of several introduced on behalf of the Democratic Leadership in the House as part of its "100 hours" package of legislative initiatives conducted early in the 110th Congress. Title I proposes to reduce certain oil and natural gas tax subsidies to create a revenue stream to support energy efficiency and renewable energy. Title II would modify certain aspects of royalty relief for offshore oil and natural gas development to create a second stream of revenue to support energy efficiency and renewable energy. Title III of H.R. 6 would create a budget procedure for the creation and use of a Strategic Energy Efficiency and Renewable Energy Reserve, under which additional spending for energy efficiency and renewable energy programs can be accommodated without violating enforcement procedures in the Congressional Budget Act of 1974, as amended. The stated purpose of the bill is to "reduce our nation's dependency on foreign oil" by investing in renewable energy and energy efficiency. Specifically, Section 301(a) of the bill would make the revenue in the Reserve available to "offset the cost of subsequent legislation" that may be introduced "(1) to accelerate the use of domestic renewable energy resources and alternative fuels, (2) to promote the utilization of energy-efficient products and practices and conservation, and (3) to increase research, development, and deployment of clean renewable energy and efficiency technologies." The budget adjustment procedure for use of the Reserve is set out in Section 301(b). The procedure is similar to reserve fund procedures included in annual budget resolutions. It would require the chairman of the House or Senate Budget Committee, as appropriate, to adjust certain spending levels in the budget resolution, and the committee spending allocations made thereunder, to accommodate a spending increase (beyond FY2007 levels) in a reported bill, an amendment thereto, or a conference report thereon that would address the three allowed uses of the Reserve noted above. The adjustments for increased spending for a fiscal year could not exceed the amount of increased receipts for that fiscal year, as estimated by the Congressional Budget Office, attributable to H.R. 6. According to the Congressional Budget Office (CBO), the proposed repeal of selected tax incentives for oil and natural gas would make about $7.7 billion available over 10 years, 2008 through 2017. The proposed changes to the royalty system for oil and natural gas are estimated to generate an additional $6.3 billion. This would yield a combined total of $14 billion for the Reserve over a 10-year period. The CBO estimates show that the annual revenue flow would vary over the 10-year period, ranging from a low of about $900 million to a high of about $1.8 billion per year. H.R. 6 came to the House floor for debate on January 18, 2007. In the floor debate, opponents argued that the reduction in oil and natural gas incentives would dampen production, cause job losses, and lead to higher prices for gasoline and other fuels. Opponents also complained that the proposal for the Reserve does not identify specific policies and programs that would receive funding.
Proponents of the bill countered that record profits show that the oil and natural gas incentives were not needed. They also contended that the language creating the Reserve would allow it to be used to support a variety of R&D, deployment, tax incentive, and other measures for renewables and energy efficiency, and that the specifics would evolve as legislative proposals come forth to draw resources from the Reserve. The bill passed the House on January 18 by a vote of 264-163. In general, the budget resolution would revise the congressional budget for FY2007. It would also establish the budget for FY2008 and set budgetary levels for FY2009 through FY2012. In particular, the House resolution (H.Con.Res. 99) would create a single deficit-neutral reserve fund for energy efficiency and renewable energy that is virtually identical to the reserve described in H.R. 6. In contrast, the Senate resolution (S.Con.Res. 21) would create three reserve funds, which identify more specific efficiency and renewables measures and would allow support for "responsible development" of oil and natural gas. On March 28, the House passed H.Con.Res. 99 by a vote of 216-210. For FY2007, it would allow for additional funding for energy (Function 270) above the President's request that "could be used for research, development, and deployment of renewable and alternative energy." Section 207 would create a deficit-neutral reserve fund that fulfills the purposes of H.R. 6 to "facilitate the development of conservation and energy efficiency technologies, clean domestic renewable energy resources, and alternative fuels that will reduce our reliance on foreign oil." On March 23, the Senate passed S.Con.Res. 21, its version of the budget resolution. In parallel with the House resolution, Section 307 of S.Con.Res. 21 would create a deficit-neutral reserve fund that could be used for renewable energy, energy efficiency, and "responsible development" of oil and natural gas. In addition, Section 332 would create a deficit-neutral reserve fund for extension through 2015 of certain energy tax incentives, including the renewable energy electricity production tax credit (PTC), Clean Renewable Energy Bonds, and provisions for energy efficient buildings, products, and power plants. Further, Section 338 would create a deficit-neutral reserve fund for manufacturing initiatives that could include tax and research and development (R&D) measures that support alternative fuels, automotive and energy technologies, and the infrastructure to support those technologies. | H.R. 6 would use revenue from certain oil and natural gas policy revisions to create an Energy Efficiency and Renewables Reserve. The actual uses of the Reserve would be determined by ensuing legislation. A variety of tax, spending, or regulatory bills could draw funding from the Reserve to support liquid fuels or electricity policies. The House budget resolution (H.Con.Res. 99) would create a deficit-neutral reserve fund nearly identical to that proposed in H.R. 6. The Senate budget resolution (S.Con.Res. 21) would create three reserve funds with purposes related to those in H.R. 6. However, the Senate version has more specifics about efficiency and renewables measures, and it would allow reserve fund use for "responsible development" of oil and natural gas. |
Remember the (fictitious, funny) Onion article "Planned Parenthood Opens $8 Billion Abortionplex"? Now the famed Abortionplex is on Yelp. Free nachos and mojitos after your partial birth abortion, with a Yelp discount code! As noted in a previous Boing Boing post, many people believe the Abortionplex (and other Onion coverage) is real. I can't wait for the credulous Fox News coverage to kick in.
Here's one recent review:
Tell your doc you want The Flying Dutchman if you want to squeeze your abortion appointment in between two pieces of meat, if you know what I mean, and let's face it, you always know what I mean.
But real pros know that nothing satisfies your hunger for an empty uterus quite as well as Animal Style. In this iteration of the classic abortion, after the doctor perfectly vacuums the contents of your uterus, she then fills it with a secret sauce filled with tiny unicorns which will trot around poking holes in your uterine lining and preventing zygotes from taking hold for at least 6 months. But let's face it, even if you're already filled to the brim with tiny unicorns and think you won't be abortion-hungry again for a while, you know you'll be poking around Abortionplex tomorrow on your lunch break. It's too good to stay away! ||||| The "Abortionplex" — that flight of Onion fancy that some think is real — has a Yelp page that's racked up a slew of satisfied user reviews. Next up: the CGI animated family feature. Get to work, Hollywood! [BoingBoing] ||||| 181
Topeka, KS 66611
As soon as my girlfriend and I got our welfare check cashed, we headed right over to Abortionplex for the second time this week and were initially seated in the smoking lounge where we only had to wait a few minutes before being seen by someone who said they were going to sell our baby parts for profit and cut us in on the deal! It is best to go during the middle of the day in the week when all those other suckers are out working hard while we sleep til noon, get up and smoke dope and get pregnant again! We get pregnant quite a lot since the Republicans shut down our local planned parenthood and we no longer have access to birth control but hell, with all the money we are making on baby parts, who cares! Now we can afford to buy a new gun at the new gun shop that replaced the abortion clinic! Now we can have all the irresponsible sex we want! The last time we had a foursome which included a lot of unprotected gay sex! Looks like we will be taking a trip to Abortionplex soon! My girlfriends parents would probably freak out if they saw her with a black baby so once again, Thanks Abortionplex!
They only did a half job! I had twins and they forgot one! Bah! Now I have to come all the way back here. Not happy Abortionplex! Will be writing a letter! 3 stars for half a job.
Holy crap! "Working Women's Wednesdays"???!? AND today IS Wednesday???!? Hay hay hayyyyyyy! ::: (\_(\ *: (=' :') :* *.. (,(")(")°..*`»
Ask and you shall receive. Spring break is done, season is over, Easter came and went and money should be back from tax refunds. What to do? What to do, but find an something fun to do like the rides at the abortionplex. People are friendly and service here is great. Don't forget your frequent customer loyalty card where your 5th one is on the house. BTW FYI here are some of the Yelp cliche. This is the best ( fill in the blank ) or abortions in town... This is a hole in the wall. Unless there is a real hole in the wall. Hehe. I'm surprised I didn't write a review on abortionplex I'm excited to try abortionplex out This is a hidden gem Nothing to write home about. One word review One sentence review I'm from _____ and I know how good ______tastes. I am _____. So I know how should taste. I am from Topeka and I know good abortions. The food is yummy I have one review no friends no avatar but listen to my review. The buffet is amazing or the perverted cousin amazeballs. If I could give negative stars I would. This place has the best _____ Ever / Ev ah! Meh. The good...the bad....the ugly.. Lol ROTF LMAO ETC ETC. bla Bla bla. Woot?! I wanted to like this place but... This is a diamond in the rough To die for. Really. (though shall not kill) Hands down the best/worst ______. PS I hear they plan to have BOGO on Memorial day. Just check in using FourSquare. "I'm going to be back soon." One more cliche sorry.
I find all this stuff simply outrageous! To all of the people who gave this place warm reviews, you are just plain evil! My wife and I have been trying to have a child for a quite some time now, but ever since one of her girlfriends took her to this Abortionplex place for a bachelorette party, it's like she's become some sort of abortion addict! I really want kids but I swear that four or five times in a row now, whenever that little pee stick turns pink, she just goes gallivanting off with her friends for another minor surgery! She says it's just "a girl thing" and that I wouldn't understand. Well I'm sick of it!! The only upside to this whole painful ordeal is that thus far, all of our destroyed fetuses have just been little girls and not the robust young sons that I want to raise. But even so, what if one HAD been a little boy?! This place and all of its supporters makes me want to puke.
Hadn't been awhile. Just went back for a trip down old memory lane. Doesn't get much better than this. Now I can go on that vacation to Europe I always wanted.
Tired of waiting in the long lines? Frustrated with not able to eat your own baby? Fret no longer, Abortionplex has heard Yelpers complaints and introduced the new Brilliant Affordable Baby Yanking (B.A.B.Y.) vending machines! With thousands of B.A.B.Y.s located throughout the Abortionplex, anyone who seeks to have an abortion simply swipes their credit card, walk into the machine, and 5 minutes later, voila, out comes the baby! Want to take the baby home and share with family and friends? No problem! Just select the 'CAKE' option before or after the procedure, and B.A.B.Y. will turn the discarded fetus into a sweet, soft, moist, colorful and delicious cake! http://www.yelp.com/biz_photos/1lzQnPn2tYvdP6oVbQ0BEw?select=pvw1BtSxKmsmrhpmqaKfYw Yum! It's really that easy! (This review is not sponsored by, endorsed by, or affiliated with Abortionplex in any way. Anticipated side effects following an abortion includes Abdominal pain and cramping, Nausea, Vomiting, Diarrhea, blah and blah. Potential more serious complications includes blah or blah blah, blah blah or sepsis, blah blah blah, blah blah, blah blah blah blah, and Death. Call your doctor and blah blah blah blah blah blah blah blah and blah blah.)
Sux, And the place looks like a hellhole. Makes me sick. Personally opposed to abortion in general, Despite my center left leanings. But this place looks sucky.
When I had to plan my BFF's bachelorette party, I emailed Abortionplex to see if they had any special bride-to-be packages. I'm happy to report that they do! So if you're like me and are totally bored with parties that begin with a pole dancing class and end with a half-day visit to a spa, I think you'll find this place to be a fun alternative. (I can't wait until my other friends plan their weddings, so I'll have an excuse to go back!) Just make sure to send all your invitees "Save the Dates" far enough in advance so that those who are lesbian, abstinent, or reproductively-challenged can make appropriate arrangements with a turkey baster, sperm bank, IVF, etc.--and so everyone else will know when to schedule a little hanky panky. (Oh! My BBF's grandma didn't want to miss out on the fun, so she hired a surrogate.) I chose the "Here Goes the Pregnant Bride" weekend package, which gave us the opportunity to spend the Saturday night playing Baby Pong at Abortionplex's super-hip "Got Fetal Alcohol Syndrome?" bar. I was particularly impressed by the giant flat-screened TVs showing around-the-clock live footage of other people's abortions (for some even larger larger-than-life cinematic entertainment, check out one of the many movie theatres)! And instead of cocktail stirrers, each drink came with a uterine curette. . . . It was such a cute touch and by the end of the evening, I'd amassed a whole collection of them to take home with me. Stop by the giftshop for some RU486 valuepacks and maybe a decorative fetus jar (preserved in formalin--heirloom quality), and you'll be all set with great souvenirs. After recovering from our hangovers the next morning, we pre-gamed at the "Say Goodbye to Eating for Two" brunch before hopping on the monorail over to one of the group abortion complexes. The best part of the whole weekend was how we all had our abortions done at the same time, on surgical tables right next to each other. (I can't wait to watch the DVD they gave each of us!) We all opted for vacuum aspiration with local anesthesia. In all my other abortions, I've opted for twilight anesthesia (love benzos and narcs!), but this time around we wanted to be able to chat about our last-minute plans for a "No Baby Shower" we had scheduled for that evening. And I'm so happy I was awake for the whole abortion; it was kinda like going to get manicures with all my girlfriends, but with stirrups! Also be sure to reserve the "Coat Hanger Suite" for the bride-to-be. When my BFF's grandma saw it, she got all nostalgic, it was pretty touching. Please also note: if you book your weekend far enough in advance, you might even be able to reserve the "Back Alley Penthouse." I asked for it, but it's booked straight through 2014. . . .
I've seen a fair share of abortionplexes in my day, and this one is only so-so. As with any establishment I review on Yelp, though, I'm willing to give it the benefit of the doubt. After all, it just opened. There are going to be kinks that need to be worked out, that sort of thing. For now, it earns three stars. One highlight that truly stood out to me was the "prep room." Surprised that no one else had mentioned it in their reviews, I took a photo of some nice ladies there who had shown up for their "big day" and were readying themselves with beer bongs. What a great attitude! I'd be lying if I said I didn't have a couple myself while waiting for the missus to finish up her business. Oh, and one other thing: the "Hall of Fame" fetus photo wall was a bit much. Call me a puritanical Southerner, but that's just my two cents. Nevertheless, I will return. Look forward to a follow-up!
My girlfriend and I had bought a Groupon here a few weeks ago (2 Abortions for the price of 1 [or twins in 1 visit]) which is perfect timing cause our Dyson vacuum is all clogged up at the moment. The unborn baby was starting to show, making her look a little fat so I couldn't wait to get here cause I'd heard such rave reviews. When you arrive they have you take a pregnancy test, one of those sticks you pee on, but the ones they have are edible! They're just like those white sticks you use to scoop out Fun Dip with! The waiting area for the guys is incredible! All the Capri Sun you could want and special edition Pringles cans that are wide enough to actually fit your whole fist in! Not to mention Funnel cake and face-painting (I had them make me look like Spiderman). My girlfriend opted for the deluxe packed which included a post-abortion uterus massage + anal bleaching. So I went to check out their walk-through exhibit called "The Joys of Aborting Motherhood." They put a fake stomach on you that has rumble sensors so you can feel almost what its like to experience the joys of having a baby sucked out. We were pretty hungry afterwards so we hit up their seafood restaurant, "Claws & Clams" I know what you're thinking, "Seafood in Kansas?!" But trust me,you HAVE to try the Placenta Cakes! They're just like crab cakes but so much more amazing, with a zesty-tangy kick! The indoor fireworks at the end of the night was just a perfect way to end a perfect day.
Though the ingredients aren't all locally sourced, the dishes are organic and very fresh. I highly recommend the tasting menu, as you really should experience the full range of flavors this amazing establishment has to offer. My favorite course was the fricassee with baby arugula in an amniotic reduction. It was a bit on the salty side, but tremendously flavorful with a terrific mouth-feel. Also, definitely don't leave without trying the dessert. I had the vanilla bean gelato with candied umbilical crisps. It was to die for.
As a gay guy, I have no reason to come here unless one of my friends get pregnant -- fortunately, God gave me really slutty friends and the sense to poke holes in their condoms, cause I can't get enough of this place! Every time I come here, I'm treated like a big shot, even though it's really my friend who's the star of the show. I think the receptionists keep some of your personal details on file, because they remember what I do and my favorite music well enough to ask me if I've been to any concerts lately. It's those little details that keep me coming back. Well, that and getting to witness the miracle of abortion.
I really, really, wanted to like this place. The convenience of frequent cards, on-site shopping, dining, and a movie theater! But alas, this place is the prime example of corporate American Greed. Yes, the on-site bar is a great way to help your lady take the leap, but the prices are outrageous! I mean $15 for a f*cking margarita?!?! I can easily get my girl liquored up in the car on the way for much less. Don't even get me started on the cost of "services rendered". I don't know how it is where you live, but I know plenty of reputable doctors (with degrees from Upstairs Hollywood Medical College) that can perform these services for less than half of what Abortionplex is charging. Hell, most of the time they'll even come to you and perform the procedure in your bathtub, and leave you some internet knock-off pericocet or oxy. Talk about convenience! Lastly, if you go to Abortionplex you'll miss out of the love from the Haters. There was no one here throwing fruit/veggies at us and reminding us how much God hates us. I mean, that's all part of the experience right? Sadly, we probably won't be returning here, but if you're a woman working the oldest profession in the world, or just some bar-slut who bangs anyone who buys you 8 or 9 bottles of the High Life, I can see how this place may be good fit for you.
I also got the groupon for the Abortionplex and overall my experience was fine. The clinic was fast and service was friendly and efficient but they seemed a little ruder towards me (I think) because I had a groupon. If you don't want people using the groupons, don't offer! I deserve those 4 abortions! However, the clinic wasn't so bad compared to the buffet. I'm not sure who the chef was, but I could tell that some of those fetuses at the make-your-own omelette bar were a little old. Come on, the clinic is right there! You'd think that they'd at least be able to guarantee freshness in their ingredients. Overall this place is totally worth it with the groupon, but I'm not sure if full price I would go back again. Skip the lazy river.
Was wary of paying those big city prices, but I had a Groupon, so I took the girls down to give this place a try. Well, I am impressed! The free buffet was delish, and the many entertainment options will really bring out the kid in you. There's an ACORN office on the premises, so my girls (who are all illegal immigrants) were able to register to vote. They even got a complimentary Obamaphone just for signing up! Loved the mechanical games too - "Whack a Hole", "Ride the Mechanical Papal Bull", "SuperPAC Man". Unfortunately the "Ms. SuperPAC Man" was not available, the sign said it was undergoing a transvaginal ultrasound. Overall, I give it five stars, would abort again. Spend a little extra on the manicure and Thai foot massage package, it just makes the DnC time fly by. And you single guys, keep your eyes out for the many single slutty hotties that are there. With this religious freedom = no birth control coverage thing, your opportunities will, unlike the women there, be multiplying!
Hey fellas! The Abortionplex ain't just for the girls. While you're waiting check out the awesome game room with XBox and Playstation. The sports bar has about 30 beers on tap and man the bar maids are hot. Tip extra and maybe you'll get a phone number or two. And don't forget to ask for your Frequent Fornicator punch card. Get six abortions and the 7th is free!
They tried to hard sell me their services, I paid, I was worked over, and then I go see my general practitioner and find out all I'd just paid to end the life of was my kidney stone. Future customers: Men don't need the services here, don't let them sell it to you.
I stopped by here to check it out because I got a gift card from my friend who told me that The Abortionplex has the best mimosas and performs the best abortions--and let me tell you, I am a sucker for both! I've been places that have had just so-so mimosas but good abortions, and vice versa. Pros: 1. The Buffet: Like other Yelpers, I would agree to skip the fish. But the brunch? Ah-MAZE-ing! I had the Eggs Benedict with the fresh fruit compote. And it was on top of biscuits, not English muffins. I loved it, but if you are looking for a more traditional Eggs Benedict, they have that, too. My companion said that the meatloaf was a little salty, so if you're watching your sodium intake, you may want to skip it. And the mimosas? Holy delicious! I think I teared up a little--they were *that* tasty. 2. The Rock Climbing Wall: Yes, this is the largest rock climbing wall that I have ever seen. I opted to skip it, but my companion climbed it and thought it was awesome. 3. Abortions: I got a tour of the rooms....clean, private, comfortable. Everything a girl could ask for. 4. The bar: My companion sat at the bar while I went for a tour. The whiskey sours are fantastic (like one Yelper wrote), and they know how to pour a Guinness, which is a bonus. Cons: Two things, which is why I gave it only 4 stars: First, anyone else irritated that the Orange Julius was being renovated? I'm hoping that it will be open by the time I return in about 8 weeks. Second, the bathrooms by the rock climbing wall were out of paper towels. And when I told someone about it, they were a bit rude. Super annoying! I heard that they are opening a Hot Topic by the food court to attract the younger crowd, and I just hope that it doesn't change the vibe of the place. Overall, The Abortionplex rocks. It would make a fun romantic getaway, a place to have your bachelorette party, or even a place to take your mom for brunch and a movie while you're seeing the doctor. Can't wait to get pregnant! ||||| TOPEKA, KS—Planned Parenthood announced Tuesday the grand opening of its long-planned $8 billion Abortionplex, a sprawling abortion facility that will allow the organization to terminate unborn lives with an efficiency never before thought possible.
During a press conference, Planned Parenthood president Cecile Richards told reporters that the new state-of-the-art fetus-killing facility located in the nation's heartland offers quick, easy, in-and-out abortions to all women, and represents a bold reinvention of the group's long-standing mission and values.
"Although we've traditionally dedicated 97 percent of our resources to other important services such as contraception distribution, cancer screening, and STD testing, this new complex allows us to devote our full attention to what has always been our true passion: abortion," said Richards, standing under a banner emblazoned with Planned Parenthood's new slogan, "No Life Is Sacred." "And since Congress voted to retain our federal funding, it's going to be that much easier for us to maximize the number of tiny, beating hearts we stop every day."
"The Abortionplex's high-tech machinery is capable of terminating one pregnancy every three seconds," Richards added. "That's almost a million abortions every month. We're so thrilled!"
The 900,000-square-foot facility has more than 2,000 rooms dedicated to the abortion procedure. The abundance of surgical space, Richards said, will ensure that women visiting the facility can be quickly fitted into stirrups without pausing to second-guess their decision or consider alternatives such as adoption. Hundreds of on-site counselors are also available to meet with clients free of charge and go over the many ways that carrying a child to term will burden them and very likely ruin their lives.
The remaining space is dedicated to amenities such as coffee shops, bars, dozens of restaurants and retail outlets, a three-story nightclub, and a 10-screen multiplex theater—features intended not only to help clients relax, but to foster a sense of community and make abortion more of a social event.
"We really want abortion to become a regular part of women's lives, especially younger women who have enough fertile years ahead of them to potentially have dozens of abortions," said Richards, adding that the Abortionplex would provide shuttle service to and from most residences, schools, and shopping malls in the region. "Our hope is for this facility to become a regular destination where a woman in her second trimester can whoop it up at karaoke and then kick back while we vacuum out the contents of her uterus."
"All women should feel like they have a home at the Abortionplex," Richards continued. "Whether she's a high school junior who doesn't want to go to prom pregnant, a go-getter professional who can't be bothered with the time commitment of raising a child, or a prostitute who knows getting an abortion is the easiest form of birth control—all are welcome."
Nineteen-year-old Marcy Kolrath, one of the Abortionplex's first clients, told reporters that despite her initial hesitancy, she was quickly put at ease by staff members who reassured her that she could have abortions over and over for the next decade before finally committing to motherhood. Kolrath also said she was "wowed" by the facility's many attractions.
"I was kind of on the fence in the beginning," she said. "But after a couple of margaritas and a ride down the lazy river they've got circling the place, I got caught up in the vibe. By the time it was over, I almost wished I could've aborted twins and gotten to stay a little longer."
"I told my boyfriend we had to have sex again that very night," Kolrath added. "I really want to come back over Labor Day." | The Abortionplex started out as a whopper of an Onion tale, but took on a life of its own when a few Facebook users, believing it was true, got hilariously angry about it. Now it has spawned a Yelp page, complete with a 3.5-star rating based on more than 100 "reviews." Sample lines: “I recommend getting a season pass like I did! I got the Gold Package so I can bring 2 girlfriends with me and their abortions are covered. Such a good deal and a bonding moment.” “The food court was as everyone has said. amazing? I spent the time gathering free samples (Chick Fil-A FTW) and got to slowly enjoy an Orange Julius as my niece had her lawfully obligatory sonogram.” “The only way it could be better would be if they offered drive-thru service, but I guess you just can't rush some things.” “I can't support the Abortionplex anymore. I used to bring girls there every few months, back when it was called ‘Abortion Place.’ Now I go, and the lines are way to long, and it's all commercialized.” “Originally came to the 'Plex on a whim with a LivingSocial Instant Deal... what a gem! For $11 I was treated to the their ‘lunch’ combo of a basic abortion AND an order of General Tso's chicken.” Check out all the reviews on Yelp. Hat tips to Gawker and Boing Boing for the find. |
2.40am: Good morning. It's now more than three days since the earthquake and tsunami hit Japan, but there is little sign of relief for survivors.
• There has been a second blast at the Fukushima No 1 nuclear plant. Television footage suggests the outer building of the third reactor has been blown away – as in the first blast – but officials believe the reactor container remains intact and say there is little prospect of a significant release of radioactive material. Officials warned yesterday that there might be another hydrogen explosion.
• The Daily Yomiuri newspaper says police are reporting that about 1,000 bodies have been found in Minamisanriku, Miyagi, and another 1,000 on the Ojika Peninsula coast in the same prefecture.
• A very alarming tsunami warning this morning now appears to have been a false alarm. Sirens along the coast sounded, and television and radio alerts said officials had warned of a wave up to three metres high. The Japanese Meteorological Agency now says it believes the alert was a false alarm and that there was no sign of a quake large enough to trigger a tsunami, although broadcaster NHK is apparently reporting that a helicopter pilot observed a large incoming wave.
The Guardian's Dan Chung and Jonathan Watts are reporting from the disaster zone. Their video on the aftermath near Sendai in Miyagi prefecture is here.
2.50am: Japanese television earlier reported that the sea level had dropped five metres off the coast of Fukushima. The mood in Japan is already tense following an earlier aftershock this morning, with fear spreading across the country after unconfirmed reports of another tsunami. The Japanese Meteorological Agency says that no tsunami is expected.
3.00am: Japan's chief cabinet secretary, Yukio Edano, says a hydrogen explosion has occurred at Unit 3 of Japan's stricken Fukushima Daiichi nuclear plant. The blast was similar to an earlier one at a different unit of the facility.
People within a 12-mile radius have been ordered inside. People in the area reported feeling the explosion 30 miles away, according to AP.
The Unit 3 reactor had been under emergency watch for a possible explosion as pressure built up there following a hydrogen blast Saturday in the facility's Unit 1.
More than 180,000 people have evacuated the area.
3.12am: It has now been confirmed that the reactor's inner containment vessel holding nuclear rods is intact.
TOKYO (AP) Tokyo Electric Power Co. says 3 injured, 7 missing after explosion at Japan nuclear plant.
3.45am: Chief cabinet secretary Yukio Edano has told reporters that the reactor unit appears intact and that the pressure inside the reactor has stabilised.
"I believe people who have seen the images are anxious about the situation, [but] according to the data we have been able to obtain, the containment vessel is not damaged," he added.
3.54am: More from Edano: plans to impose rolling black-outs in parts of Tokyo and the surrounding area are on hold for now. Tokyo Electric Power is asking everyone to try to limit electricity use, but will go ahead with the power cuts if that does not prove sufficient. The closure of nuclear plants has led to a drastic shortfall in supply.
But Japan's Jiji news agency says neighbouring Tohoku Electric Power is now considering black-outs.
AP reports that rail operators have suspended or reduced many regional services to try to reduce the demand for power.
3.59am: Justin McCurry, the Guardian's correspondent in Japan, has sent this message about the mood there this morning, following the tsunami warning and second explosion at the Fukushima nuclear plant.
"The general feeling is that the danger from quakes, tsunami and radiation leaks is far from over. News of the blast came as the Nikkei stock average lost more than 4.5% in morning trading and the Bank of Japan pumped 15 trillion yen of liquidity into the financial system. There are reports of panic buying in Tokyo of items such as batteries, instant noodles and water. There is no visible exodus, but plenty of anecdotal evidence of people, particularly expatriates, leaving the Tokyo area and heading to western Japan or overseas. The combination of a large aftershock this morning, rolling power cuts, the nuclear crisis in Fukushima and another tsunami alert - which proved to be a false alarm - has set nerves jangling across the region."
Justin has written a fuller piece on the mood in Japan, which you can read here.
4.07am: Six workers were injured in the explosion at the Fukushima power plant, but all are conscious, NHK reports.
4.26am (1.26pm JST): Kyodo is reporting that many shops and manufacturers have closed in Tokyo and the surrounding area due to the power rationing. Sony has closed a facility producing industrial adhesive tape, while Toshiba has suspended operations at a factory producing goods such as flat-screen TVs. Other companies – such as Toyota – had already suspended operations because their production is in the north east.
Isetan Mitsukoshi has closed all seven of its department stores in the region around the capital, and Odakyu Department Store Company and Sogo & Seibu each closed three branches.
An NHK twitter feed (@nhk_asianvoices) says the government has text-messaged citizens urging a nationwide effort to save energy.
4.45am (1.45pm JST): IEEE Spectrum has an interesting piece here on the role that rescue robots will play in operations. Unsurprisingly, Japan has a lot of expertise of its own, but as its teams are already deployed, the piece cites Dr Robin Murphy, director of the Center for Robot-Assisted Search and Rescue at Texas A&M University.
Dr. Murphy, an IEEE Fellow whose team has taken robots to disaster sites like the World Trade Center after the September 11, 2001 attacks and New Orleans after hurricane Katrina, tells me that robots have been used in at least one previous earthquake, the 2010 Haiti disaster. The U.S. Army Corps of Engineers, she says, used a Seabotix underwater remotely operated vehicle, or ROV, to investigate bridge and seawall damage as part of the U.S. assistance to the Haitian government. For a disaster like the Japan quake, she says several types of robots could prove useful, including:
• small unmanned aerial vehicles like robotic helicopters and quadrotors for inspection of upper levels of buildings and lower altitude checks;
• snake robots capable of entering collapsed buildings and slithering through rubble;
• small underwater ROVs for bridge inspection and underwater recovery;
• tether-based unmanned ground vehicles like sensor-packed wheeled robots that operators can drive remotely to search for survivors.
Incidentally, if you're wondering why the time stamp has got longer, reader gaikokujin used the comments section to request we add Japanese times - we will do so from now on; thanks for the suggestion.
4.52am (1.52pm JST): According to this piece in the New York Times, the US aircraft carrier Ronald Reagan, which is sailing in the Pacific, passed through a radioactive cloud from the Japanese nuclear reactors. It is reported that the crew on deck received a month's worth of radiation in an hour.
There is no indication any of the personnel have experienced ill effects from the exposure, officials said.
5.15am (2.15pm JST): Tokyo Electric Power Company, which runs the Fukushima Daiichi plant, says radiation levels at the unit are well below the legal limits following this morning's hydrogen explosion. Radiation at Unit 3 measured 10.65 microsieverts; operators must inform the government if a level of 500 microsieverts is reached.
Health experts have stressed that the risk from radiation appears low. Reuters has spoken to Gregory Hartl of the World Health Organisation, who told the agency:
"At this moment it appears to be the case that the public health risk is probably quite low. We understand radiation that has escaped from the plant is very small in amount."
That has not stopped people from worrying. Singaporean authorities have announced they will test imported Japanese produce for potential radiation "as a precautionary measure".
5.38am (2.38pm JST): This screen grab, taken from NHK, shows the Fukushima No 1 nuclear plant before and after this morning's blast.
Officials say it was similar to the earlier hydrogen explosion. The Guardian's Ian Sample has put together this Q&A to explain how that happened.
5.53am (2.53pm JST): Some links that may be useful if you or someone you know is near the disaster zone: @dailyyomiuri has tweeted links to information on English-speaking doctors and hospitals in Iwate (http://tinyurl.com/IwateHospitals) and Fukushima – though clearly some of these services are likely to have been affected themselves.
This Google map (Japanese only) shows meal and water supplies, evacuation spots and places to charge cellphone batteries.
If you're in Tokyo and trying to cut your electricity consumption, this is a guide to how much power appliances use.
6.02am (3pm JST): An update on the situation at Fukushima's second nuclear plant (ie not the one that experienced the blast this morning) from the UN nuclear watchdog, released at 4.15am GMT (1.15pm JST).
The International Atomic Energy Agency reported:
Based on information provided by Japanese authorities, the IAEA can confirm the following information about the status of Units 1, 2, 3 and 4 at Fukushima Daini nuclear power plant. All four units automatically shut down on March 11. All units have off-site power and water levels in all units are stable. Though preparations have been made to do so, there has been no venting to control pressure at any of the plant's units. At unit 1, plant operators were able to restore a residual heat remover system, which is now being used to cool the reactor. Work is in progress to achieve a cold shutdown of the reactor. Workers at units 2 and 4 are working to restore residual heat removal systems. Unit 3 is in a safe, cold shutdown. Radiation dose rate measurements observed at four locations around the plant's perimeter over a 16-hour period on 13 March were all normal.
6.09am (3.09pm JST): The Guardian's Japan correspondent Justin McCurry has more information on the financial and economic impact of the disaster:
Japanese shares were heading for huge losses on Monday after the fallout from last week's deadly tsunami sent the Nikkei stock average down by more than 6% in Tokyo. Monday was the first full day of trading since the earthquake and tsunami brought devastation to vast parts of Japan's northeast coast. The scale of the damage is expected to exact a heavy economic toll and force the government to borrow heavily to fund the rebuilding effort. Concern is also mounting about the disaster's impact on energy supplies in the wake of serious problems with reactors at two quake-damaged atomic power plants. The yen slid against the dollar after the Bank of Japan said it would pump a total of 15 trillion yen into the financial system to ensure liquidity for private lenders affected by the quake. The bank will inject a further 3 trillion yen on Wednesday. Japan's automakers, electronics firms and oil refiners were among the hardest hit in Monday's trading, and saw their share prices drop by double-digit percentages. Toyota, the world's biggest carmaker, said it would suspend all production in Japan until at least Wednesday. Nissan and Honda have announced similar measures. The broader TOPIX index fell 7.6% and was on course for its biggest single-day loss since October 2008, when stocks nosedived after the Lehman shock. Japan's central bank, which is holding a policy meeting on Monday, said it would closely monitor currency markets. Some analysts expected the bank to announce further emergency measures later in the day.
6.19am (3.19pm JST): China is willing to offer more help to Japan, Premier Wen Jiabao said today in his annual press conference. A rescue team arrived yesterday and Beijing has also sent relief supplies. Reuters reports:
China has set aside acrimony over territorial disputes and wartime memories to extend the hand of friendship to Japan, sending a team of rescuers to help search for survivors from the disaster, which likely killed more than 10,000 people. "I want to use today's opportunity to extend our deep condolences for the loss of lives in this disaster and to express our sincere sympathy to the Japanese people," Wen said. "China is also a country that is prone to earthquake disasters and we fully empathise with how the Japanese people feel now," he added. "When the massive Wenchuan earthquake hit, the Japanese government sent a rescue team to China and also offered supplies," said Wen, referring to the 2008 Sichuan earthquake that killed more than 80,000 people. "We will continue to provide further necessary aid to Japan in accordance with their needs."
7.25am (4.25pm JST): If you've just joined us, a quick summary of the situation this morning:
• The nuclear crisis at Fukushima No 1 nuclear plant continues, with a hydrogen explosion blowing off a reactor building and injuring 11 people.
The blast at the number 3 reactor had been anticipated and was similar to the explosion seen previously at the number 1 reactor. It has not, apparently, damaged the reactor itself or the containment vessel and authorities said radiation levels were normal around it.
However, Japanese media are reporting that cooling has now failed at a third reactor at the plant (confusingly, number 2 reactor).
• Police are reporting that about 1,000 bodies have been found in Minamisanriku and another 1,000 on the Ojika Peninsula coast in Miyagi, the worst hit prefecture, according to Japanese media.
• Plans for rolling blackouts in Tokyo and the surrounding area are currently suspended. Many private firms have voluntarily halted business or taken other measures to help reduce demand; although supply has been hit badly by the nuclear plant closures, it is still keeping up with consumption at present.
• An early morning tsunami alert, with sirens and broadcast warnings, turned out to be a false alarm. But authorities have warned people to remain careful as aftershocks of up to 6.2 magnitude continue to rock the north east.
• Japan has said it may deploy reservists to supplement the 100,000 troops working on rescue and relief operations, NHK reports. If so, it would be the first time they had ever been required to cope with a natural disaster.
• Shares fell sharply and the yen slid against the dollar as financial markets reopened this morning.
More powerful footage is emerging of the disaster. This six-minute video of the tsunami striking a town is thought to have been shot in Kesennuma, Miyagi, which we know from earlier reports was devastated by the waves.
7.40am (4.40pm JST): A UK International Search and Rescue (Isar) team is due to join an international hunt for survivors in the city of Ofunato, about 100 miles north of Sendai on the east coast.
The group, organised by the Department for International Development, is made up of 63 UK fire service search and rescue specialists, two rescue dogs and a medical support team.
The experts arrived in Japan yesterday on board a private charter plane carrying 11 tonnes of specialist rescue equipment, including heavy lifting and cutting gear.
Roy Wilshire, the team's leader, said it had met members of the US military and was travelling in convoy to Ofunato, which has a population of around 42,000.
"There are apparently hundreds of people missing there and we are in a convoy of several hundred rescuers,'' he told the BBC.
"When we arrive we will set up our base before taking instruction from members of the Japanese fire and rescue service."
7.58am (4.58pm JST): Associated Press has a disturbing report on the conditions for survivors despite a massive relief operation.
"People are surviving on little food and water. Things are simply not coming," said Hajime Sato, a government official in Iwate prefecture, one of the three hardest hit. "We have repeatedly asked the government to help us, but the government is overwhelmed by the scale of damage and enormous demand for food and water," he told The Associated Press. "We are only getting around just 10 percent of what we have requested. But we are patient because everyone in the quake-hit areas is suffering." He said local authorities were also running out of body bags and coffins. "We have requested funeral homes across the nation to send us many body bags and coffins. But we simply don't have enough. We just did not expect such a thing to happen. It's just overwhelming."
A second account focuses on the desperate situation at a hospital in Takajo, Miyagi:
The nurses have been cutting open soiled intravenous packs and scrubbing down muddy packs of pills with alcohol to cleanse them... "I'm sorry, we have no medicine," the staff repeatedly told a constant flow of people from the town, many of them elderly.
8.15am (5.15pm JST): A quick clarification: mjhollamby has pointed out (in the comments) that the UN's nuclear watchdog said six workers were injured in this morning's blast. However, the government revised the numbers upwards after the IAEA produced their statement.
According to Reuters, Yukio Edano - the chief government spokesman - said four personnel from the Self Defence Force and seven power plant workers were injured. One of the workers was seriously injured but remains conscious; the troops were only slightly injured and have already returned to work.
8.36am (5.36pm JST): This map created by Chris A is being shared a lot on Twitter. It purports to show the locations of Japan's nuclear power facilities (the country's 55 nuclear reactors are spread over 17 sites, according to the website of the Federation of Electric Power Companies of Japan).
[Embedded Google map: Nuclear Power facilities of Japan]
The exclamation mark (which becomes two marks if you zoom in close enough) shows the sites of the two meltdowns. Thanks to @garethoconnor on Twitter for the link.
This is Adam Gabbatt taking over from Tania and Lee.
8.48am (5.48pm JST): My colleague Peter Walker writes that Japan's Research Laboratory for Nuclear Reactors is still insisting that there is no cause to fear a major nuclear accident.
An expert from the body told NHK TV's English-language service that while it was possible there would be similar explosions in the No 2 and No 4 reactor buildings at the Fukushima plant, all important safety features remained intact and plans to cool the reactors with seawater should be effective.
9.15am (6.15pm JST): Confusing reports regarding the Fukushima Daiichi plant. According to some sources, Tokyo Electric Power Company (Tepco), which runs the plant, has declared that the No 1 and 2 reactors are no longer in a state of emergency.
Other reports this morning suggested that the No 2 reactor had actually suffered a loss of its cooling function, with water levels at the reactor dropping. Government officials said efforts were under way at the No 2 reactor to prevent what would be the plant's third explosion since Saturday.
9.36am (6.36pm JST): A quick note from my colleague James Randerson on the "Richter Scale" – the logarithmic magnitude scale that was defined in 1935 to measure earthquakes in California. It was developed by Charles Richter (who also happened to be a nudist) and Beno Gutenberg of the California Institute of Technology (CIT) and was originally referred to as "Local Magnitude" or ML. James writes:
Even though it was superseded in 1979 by the more uniformly applicable moment magnitude (Mw) scale, the Richter scale has amazing staying power in the public – and, it has to be said, journalistic – mind. In the barrage of information about the Japan earthquake, numerous articles have referred to the old scale incorrectly (including, on occasion, ours).
Scientists no longer use Richter's original methodology as it does not work for large quakes or ones where the epicentre is greater than 600km away. Science writer Ted Nield explains in this amusing piece from 2007 on the scale's staying power:
New magnitude scales that extended Richter and Gutenberg's original idea were developed as the number of recording stations worldwide increased. These include body-wave magnitude (Mb) and surface wave magnitude (Ms). Each is valid over a particular range of frequency and type of signal, and within its own parameters is equivalent to "Richter" magnitude. But because of the limitations of all three (especially the tendency to become saturated at high magnitudes, so that very large events cannot be easily distinguished) a more uniformly applicable magnitude scale, known as moment magnitude (Mw), was developed in 1979 by two other CIT scientists, Tom Hanks and Hiroo Kanamori. For very large earthquakes, Mw gives the most reliable estimate of earthquake size, and this is the measure that is always misreported as "the Richter Scale".
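A quick gloss from us (not part of Nield's piece, but standard seismology): the Hanks-Kanamori moment magnitude is calculated from the seismic moment M0 – a physical measure of the earthquake's size, expressed in dyne-centimetres – as
Mw = (2/3) log10(M0) - 10.7
Because the scale is logarithmic, each whole-number step in magnitude corresponds to roughly 32 times (about 10^1.5) more energy released – which is why Friday's magnitude 8.9 quake is in a different class altogether from the magnitude 6 aftershocks that have followed it.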
10.02am (7.02pm JST): The No 2 reactor at the Fukushima Dai-Ichi plant has lost all its cooling capacity, according to Japan's Nuclear and Industrial Safety Agency.
NHK World is reporting that Tokyo Electric Power Company, which owns the plant, has notified the agency of an emergency at the No 2 reactor.
This is the second emergency notice for the reactor. The utility firm told the agency shortly after the quake on Friday that the reactor's emergency cooling power system had failed. Since then, the company has tried to cool the reactor by circulating water by steam power, instead of electricity. But an attempt to lower the temperature inside the vessel that houses the reactor did not work well. Fears of a hydrogen explosion at the building housing the vessel are growing as the water level of the reactor falls. A reaction between the steam and exposed fuel rods generates a large amount of hydrogen.
10.18am (7.18pm JST): The Australian broadcaster ABC has posted a series of before and after satellite images from around north east Japan which give a clear sense of how much devastation the tsunami caused. The viewer can slide back and forth between the before and after shots, seeing how entire towns have been swept away. See the gallery here.
10.27am (7.27pm JST): Water levels have fallen far enough to partly expose fuel rods at the No 2 reactor at Fukushima Daiichi, according to the Jiji news agency.
10.37am (7.37pm JST): Below the line Hoxtoner, who is in Japan, writes:
Just went to the local supermarket here in Sendai. NOTHING !! You have to see it to believe it. The only things I saw that were in bulk was alcohol, fags, coffee and tea and there wasn't much of that. Other nationalities have been instructed to leave the Tohoku area. I have checked the British FCO and I don't see any such statement. The Electric came on here in the early hours on Sat. We have water,but no gas. It's also a bad hayfever day to add on to all the tragedies and anxieties. I don't usually drink during the day yet I've decided to pour myself out a beer. Slight aftershocks and helicopters flying around as I type.
10.50am (7.50pm JST): The Guardian's science correspondent, Ian Sample, writes that the first report from Japan's Nuclear and Industrial Safety Agency this morning described a rise in radioactivity around the Fukushima nuclear power station when compared to Sunday's levels.
The highest level, at 680 microsieverts per hour, was measured earlier on Monday in a direction north-northwest of the plant, the wind direction at the time. To put that level in perspective, the typical dose we receive from background radiation is 2,000 microsieverts a year. So spending an hour at the monitoring post leads to an exposure equivalent to around four months of background radiation. The Nuclear and Industrial Safety Agency reports that at 1.10am local time on Monday, engineers had to stop pumping water into reactors one and three at Fukushima because seawater pits ran dry. Injection of water in reactor 3 – the one that uses a plutonium mix fuel – restarted at 3.20am. They give no further details of the situation at reactor 1.
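The "four months" figure is easy to check (our own back-of-the-envelope arithmetic, not the agency's): background exposure of 2,000 microsieverts a year works out at roughly 167 microsieverts a month, so one hour at 680 microsieverts per hour delivers 680 / 167 ≈ 4.1 months' worth of background radiation.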
10.56am (7.56pm JST): The Guardian has posted this gallery of images from Japan.
11.03am (8.03pm JST): Kyodo News has reported in the last five minutes that the fuel rods at the No 2 reactor at Fukushima Daiichi are now fully exposed.
11.09am (8.09pm JST): Ian Sample has more on the exposure of those fuel rods at reactor 2:
The rods are usually submerged in several metres of water to stop them overheating. Water level gauges at reactors 1 and 3 also suggest that fuel rods are partially exposed, but engineers said pressure readings from the reactors conflict with this and that the water gauges may be faulty. Workers are trying to circulate seawater around all three reactors to keep them cool. Serious melting of fuel rods inside any of the reactors could block the circulation of water.
11.20am (8.20pm JST): This dramatic amateur footage, shot in various locations around north-east Japan, shows raging tsunami waves rushing over embankments and flowing into cities and towns carrying vehicles, ships and houses inland.
11.26am (8.26pm JST): The International Atomic Energy Agency's website appears to have suffered amid a surge in traffic, Ian Sample has just told me. Ian says their back-up system is working fine though: you can find all their announcements on the Japanese nuclear crisis on their Facebook page.
11.38am: The Guardian will be staging a live Q&A with nuclear experts from 1pm, inviting readers to post their questions about the events at Fukushima Daiichi to a panel of scientists.
You can post your questions in advance here, and I'll link again to the Q&A when it starts.
11.48am: The Swiss government has suspended plans to replace and build new nuclear plants, pending a review prompted by the two hydrogen explosions at Japan's Fukushima Daiichi plant.
AP reported that the suspension affects three requests for "blanket authorization for nuclear replacement until safety standards have been carefully reviewed and if necessary adapted."
The government is considering applications for a new plant in Solothurn and replacement plants in Aargau and Bern.
The head of the Swiss federal energy department, Doris Leuthard, said she had decided to suspend the plans because "safety has absolute priority". She said she had instructed the government to study what caused the Japanese explosions in the wake of Friday's massive earthquake and tsunami.
11.57am: A press briefing in the UK, aimed at promoting the benefits of nuclear power and due to take place tomorrow, has been postponed "because it would be inappropriate given the events in Japan".
Sir David King, Director of the Smith School of Enterprise and the Environment at Oxford University, had been due to present a report entitled 'A low carbon nuclear future: Economic assessment of nuclear materials and spent nuclear fuel management in the UK', and argue the need to develop a new long term strategic nuclear plan which encompasses new build and legacy issues.
12.17pm: Radioactive steam could continue to be released from reactors at Fukushima Daiichi for "weeks or even months", the New York Times is reporting.
The newspaper says reactor operators "now have little choice but to periodically release radioactive steam as part of an emergency cooling process for the fuel of the stricken reactors that may continue for a year or more even after fission has stopped".
The plant's operator must constantly try to flood the reactors with seawater, then release the resulting radioactive steam into the atmosphere, several experts familiar with the design of the Daiichi facility said. That suggests that the tens of thousands of people who have been evacuated may not be able to return to their homes for a considerable period, and that shifts in the wind could blow radioactive materials toward Japanese cities rather than out to sea. Re-establishing normal cooling of the reactors would require restoring electric power — which was cut in the earthquake and tsunami — and now may require plant technicians working in areas that have become highly contaminated with radioactivity.
The newspaper quoted a senior nuclear expert in Japan as saying "under the best scenarios, this isn't going to end anytime soon".
The New York Times also has this excellent interactive explaining how a reactor shuts down.
12.28pm: NHK World reports that "a core meltdown might have occurred" at reactor 2, Fukushima Daiichi.
The Nuclear and Industrial Safety Agency has tried to circulate the coolant by steam instead of electricity, but NHK reported that attempts to lower the temperature inside the reactor chamber have not worked well.
NISA is reportedly also considering opening a hole in the reactor housing building to release hydrogen generated by the exposed fuel rods.
12.36pm: My colleague Luke Harding is live blogging from the Hamburg offices of Der Spiegel as part of the Guardian's New Europe season, and reports that the German news magazine has decided to pull its reporter out of Tokyo because of the risk of a Chernobyl-style radiation cloud reaching the Japanese capital.
Thilo Thielke, Spiegel's veteran war correspondent, is leaving Japan today. Spiegel is now covering the story from Bangkok and the south of Japan. Mathias Müller von Blumencron says the latest information is ominous: the wind is blowing to the south – in the direction of Tokyo. "Perhaps this is a piece of German angst. But no country is more against nuclear power than Germany," he says. He adds: "The wind is shifting from the north and could blow a cloud south directly to Tokyo. This is really horrible. I think this is a big, big crisis and a wake-up call for nuclear energy." The German embassy is also making preparations to evacuate some staff, apparently.
12.57pm: Hello, this is Haroon Siddique, taking over for a while to give Adam a break. Our Berlin correspondent Helen Pidd has sent an interesting update relating to steps being taken in Germany concerning the country's nuclear programme in response to events in Japan:
Germany's chancellor Angela Merkel is expected to announce the suspension of plans to extend the life of its nuclear power stations later today. In the light of the Japanese disaster, Merkel has decided to re-examine her highly controversial decision last year to renew Germany's nuclear plants, the German media is reporting. Government sources have told FOCUS online that Merkel took the decision on Sunday night after a crisis meeting at the Berlin headquarters of her Christian Democratic Union (CDU) party. She is expected to confirm the suspension today at a press conference at 4pm local time (3pm GMT) with her foreign minister, Guido Westerwelle.
1pm: There have been 38 earthquakes in or around Japan today alone, according to the US geological survey. The most powerful recorded today so far was 6.1 in magnitude.
1.35pm: Here's a summary of events so far today:
• A "core meltdown" might have occurred at reactor 2 Fukushima Daiichi. NHK World reports, as fears grow over the safety of the nuclear plant continues. Fuel rods are reportedly fully exposed. The nuclear and industrial safety agency (NISA) has tried to circulate the coolant by steam instead of electricity, but NHK reported that attempts to lower the temperature inside the reactor chamber have not worked well. NISA is reportedly also considering opening a hole in the reactor housing building to release hydrogen generated by the exposed fuel rods.
• A hydrogen explosion at the number 3 reactor at the Fukushima No 1 nuclear plant injured 11 people. The blast had been anticipated and was similar to the explosion seen previously at the number 1 reactor. It has not, apparently, damaged the reactor itself or the containment vessel, and authorities said radiation levels were normal around it.
• Police are reporting that about 1,000 bodies have been found in Minamisanriku and another 1,000 on the Ojika Peninsula coast in Miyagi. Miyagi has been the worst hit prefecture. Video footage is continuing to emerge revealing the force of the tsunami that swept into north east Japan.
• Plans for rolling blackouts in Tokyo and the surrounding area are currently suspended. Many private firms have voluntarily halted business or taken other measures to help reduce demand; although supply has been hit badly by the nuclear plant closures, it is still keeping up with consumption at present.
Other countries have been reviewing their nuclear programmes in the light of events in Japan. Switzerland has suspended plans to replace and build new nuclear plants, and Germany is expected to announce the suspension of plans to extend the life of its nuclear power stations later today. ||||| A hydrogen explosion reportedly ripped through another reactor at the Japanese nuclear plant where a reactor exploded Saturday, deepening a crisis government officials are calling the worst the nation has faced since World War II.
TV Asahi reported the explosion at Unit 3 of the Fukushima Daiichi plant, which officials had warned could happen after Unit 1 exploded on Saturday.
Officials from Japan's Nuclear and Industrial Safety Agency (NISA) said the reactor's containment was not damaged and although radiation was leaked, levels were low.
NISA officials also report that reactor No. 2 at the Daiichi plant has lost its cooling ability and that pressure is rising.
The news came as Japanese officials issued and then quickly canceled a tsunami warning, following aftershocks along the already earthquake-ravaged eastern coast of the nation.
Japanese authorities have been working frantically to prevent a meltdown at a series of nuclear reactors in Fukushima. The U.S. Nuclear Regulatory Commission has sent two of its officials with expertise in boiling water nuclear reactors.
"Disaster in the Pacific": Watch "Good Morning America" and "World News" for special coverage of the Pacific earthquake and tsunami. CLICK HERE for more.
Unit 3 at the Fukushima Daiichi plant had been one focus of concern, and Chief Cabinet Secretary Yukio Edano had said the government knew an explosion there was possible.
Workers had tried releasing radioactive air and injecting sea water to reduce pressure and cool the reactor down, hoping to avoid an explosion like Saturday's blast at Unit 1, which injured four workers.
Already, at least 170,000 people have been evacuated from the 12-mile radius around the Fukushima plants. It is unclear if people are being asked to evacuate around the Miyagi power plant. Dr. Michio Kaku, a physicist, said that Japan should consider extending the evacuation orders.
"Winds don't stop blowing at 12 miles...computer models show that the radiation doesn't disperse in a sphere or a circle. It disperses in a plume, a pencil-like plume that then waves with the wind like a lighthouse," Kaku said.
Japanese authorities have declared a state of emergency at another nuclear power plant following Friday's massive earthquake, which has left the country facing its worst crisis since World War II.
A state of emergency was declared at the Onagawa nuclear power plant, located in the hard-hit Miyagi prefecture, the International Atomic Energy Agency reported.
Following Friday's 8.9 magnitude earthquake, a fire broke out at the Onagawa plant but was later contained, the Associated Press reported. Officials from the Tohoku Electric Power Company said that higher than normal radioactivity readings prompted the heightened alert Sunday. The emergency is at level one, the lowest state of emergency.
Officials in Miyagi are still digging through smoldering debris and collapsed homes and buildings left in the aftermath of Friday's 8.9 magnitude earthquake and tsunami. One official estimates that up to 10,000 people could be dead in Miyagi alone.
At least 1,596 people have been killed, according to NHK News.
Officials anticipate another earthquake of 7.0 magnitude or greater in the coming days, possibly further damaging the already fragile nuclear reactors.
"If there's a secondary earthquake, that could tip the whole thing over. Pipes could break, leaks could take place and even as you put sea water in, the water could bleed out, creating a full scale meltdown. That's the nightmare scenario," Kaku said.
Another nuclear complex, the Tokai Dai-Ni plant, experienced a failure after Friday's quake, the Associated Press reported. It's not clear why the incident wasn't reported by the Japan Atomic Power Co. until Sunday.
Rolling Blackouts and Food Shortages Paralyze Japan
Japanese Prime Minister Naoto Kan said in a television address Sunday that the country is facing the most difficult crisis since World War II, but he is confident the nation can overcome this disaster.
"We Japanese people have overcome all kinds of hardships and were able to create a prosperous society. In the face of the earthquake and tsunami we should be able to overcome these hardships. We believe we can overcome this," Kan said.
Kan said 100,000 troops -- plus 2,500 police, 1,100 emergency service teams, and more than 200 medical teams -- have been deployed for recovery efforts.
Millions of the country's residents are grappling with food shortages, power outages and the collapse of basic services.
Ichiro Fujisaki, the nation's U.S. ambassador, said about 2.5 million households -- just over 4 percent of all households in Japan -- were without electricity Sunday, and 500,000 homes were without water.
The government is going to begin further rationing electricity by implementing rolling blackouts.
At least 49 countries along with numerous aid organizations have mobilized relief efforts.
The nuclear-powered aircraft carrier USS Ronald Reagan arrived off the coast of Honshu Saturday, one of a number of U.S. vessels diverted to offer disaster aid to the shattered country.
At least four more Navy ships will be arriving in the days to come to assist with relief efforts.
Tsunami survivors were plucked from rooftops by helicopters, but hundreds more along the 1,300-mile stretch of coastline are waiting to be rescued.
A Navy P-3 maritime surveillance plane did a survey mission earlier Sunday and discovered a huge debris field 8 miles east of the Japanese coastline. Houses, barges, oil slicks, capsized boats, and cars fill the mile-wide debris field.
The U.S. Embassy said that 100,000 Americans are known to be in Japan, and 1,300 of them live in the areas most affected by the earthquake and tsunami. There are still no known American casualties.
ABC News' Michael James, Dan Childs and Dean Schabner and the Associated Press contributed to this report. | A second hydrogen explosion rocked Japan's Fukushima Dai-ichi nuclear plant today as workers scrambled to avert a meltdown disaster. The explosion occurred in a separate unit of the reactor from the one hit by an earlier explosion Saturday. Nuclear agency officials said radiation was leaked, but that levels were low and that the reactor's containment was apparently not damaged. Eleven workers were injured, reports the Guardian. Nearby residents have been ordered to stay indoors, reports ABC News. “I have received reports that the containment vessel is sound,” said a government official. “I understand that there is little possibility that radioactive materials are being released in large amounts.” |
March 28, 2016: Emergency personnel transport a man who was shot by police at the Capitol. He allegedly tried to bring a gun into the visitor’s center. (EDITOR’S NOTE: The man’s face has been slightly obscured digitally at the request of the D.C. Fire and EMS Department.) Ricky Carioti/The Washington Post
A man with a gun was shot by police Monday afternoon, and the Capitol complex was put on lockdown.
A man who authorities said took out a gun and pointed it at officers as he tried to enter the U.S. Capitol Visitor Center on Monday was shot by police, prompting a scramble by law enforcement amid heightened security after terrorist attacks in Brussels and Paris.
Authorities identified the wounded suspect as Larry Russell Dawson, a minister from Tennessee. The 66-year-old Dawson previously was arrested in October in the District after he allegedly disrupted Congress by shouting that he was a “prophet of God.”
Police said Dawson walked into the visitor center about 2:40 p.m. Monday and was going through security screening when at least one officer opened fire. In the chaotic moments that followed, loudspeaker alerts warned tourists in the center of an “active shooter,” and officers yelled at people to get down.
“My husband said he heard a shot followed by a full clip,” said Diane Bilo of Ohio, who was in the cafeteria as her husband and two of their children were in the visitor center.
“Police started running this way, and then some people started running in the opposite way,” said Robiann Gilbert, a high school principal at Northside Methodist Academy in Dothan, Ala., whose group of two dozen parents and students had just wrapped up a tour of the Capitol. “And then chaos started.”
Larry Russell Dawson, 66, who authorities say was shot by police Monday at the U.S. Capitol Visitor Center, also caused a disturbance in the House last October. (C-SPAN)
Police swarmed the Capitol grounds, raised barricades and put the Capitol building and, briefly, the White House under lockdown, upending an otherwise tranquil day when Congress was in recess and tourists were flocking to the cherry blossoms and the White House Easter Egg Roll. Officers with long rifles stood guard at District intersections.
Two hours later, U.S. Capitol Police Chief Matthew R. Verderosa calmed nerves by saying that investigators “believe this is an act of a single person who has frequented the Capitol grounds before. There is no reason to believe this is anything more than a criminal act.”
Monday night, police said Dawson had been charged with assault with a deadly weapon and assault on a police officer while armed. They said he was in stable but critical condition and would appear in D.C. Superior Court after his release from the hospital. Two officials familiar with the case said Dawson was shot in the chest and thigh.
Verderosa said the security screening worked as intended. No officers were injured, but a female bystander between 35 and 45 years old suffered what appeared to be a minor injury and was taken to a hospital, the chief said, without elaborating on how she had been hurt. Two police officials said she had a wound on her face that they believe was caused by a bullet fragment.
The chief said that police recovered a weapon on the scene and that the suspect’s car was found on the Capitol grounds.
‘I am not under the Law!’
It was not clear why Dawson was at the visitor center Monday, but Verderosa said the suspect was known to authorities in the District. On Oct. 22, police said in an arrest affidavit, Dawson stood in the House chamber gallery, “where he began shouting Bible verses which disrupted the normal flow of Congress.”
A police officer tried to grab his arm to escort him out, according to the affidavit, but he “refused to comply” and pulled away. Two other officers grabbed him and pushed him outside the gallery. Police said he broke free again and ran toward an exit, where he was caught by officers and handcuffed.
Dawson was charged with assaulting and resisting police and disorderly conduct in the October incident. A judge freed him pending his next court hearing and ordered him to stay away from the Capitol and surrounding streets.
After failing to show up for a hearing in November, he wrote the court in January, saying: “I have been called chosen and sent unto You this day. I am not under the Law! . . . Therefore, I will not comply with the court order, nor will I surrender myself unto your office.”
The letter adds: “For sin shall not dominion over you. For you are Not under the law, but under Grace!!!” It concludes, “No longer will I let myself be governed by flesh and blood, but only by the Divine Love of God!!!!”
Kristie Holliman, who said she is Dawson’s daughter, said the family had not been contacted by authorities about the shooting as of late Monday afternoon. She declined to offer any information about her father or talk about why he might be in Washington.
“I’m just trying to figure out what happened,” Holliman said during a brief phone interview.
Dawson’s attorney in the October disorderly-conduct case did not return calls seeking comment Monday.
There have been several previous incidents of gunfire on the Capitol grounds. In April, a man fatally shot himself on the building’s west front in an apparent suicide.
In October 2013, a Connecticut woman was shot and killed by law enforcement officers after she tried to drive through a White House security checkpoint, then raced down Pennsylvania Avenue and drove her car into a security barricade on the northeast side of the Capitol grounds.
In 1998, a mentally ill gunman opened fire at an entrance to the Capitol building, killing two Capitol Police officers. Those officers, Jacob Chestnut and John Gibson, are the only officers from that force to have died in the line of duty.
‘I was scared’
Moments after the gunfire Monday, visitors reported a burst of chaos and then tension. Amy and Kai Gudmestad of Minneapolis were at the visitor center with their two children for a tour when officers suddenly started yelling at them to get down.
“There was just yelling, and they told us to get down, and then the officers got us into the theater,” Amy Gudmestad said. The family said they did not hear gunshots.
“It happened very fast,” said Eli, her 10-year-old son, who described the experience as “more confusing” than the fire and tornado drills he has been through at school.
“He was calm. I was scared,” added his 8-year-old sister, Lucy.
The Gudmestads said that large groups of teenage tourists were in the visitor center at the time of the incident and that officers and staffers worked to move everyone to safe locations.
Eventually, officials told them there had been a shooting — though Kai Gudmestad said he had already figured that much out from Twitter.
Trevor Kussman, a textile executive visiting with his wife and children from Chicago, said his family was inside the visitor center watching an educational movie when an announcement was made about “shots being fired.” The movie continued to play, but some people got up to leave.
Gilbert, the school principal, said her group was “quickly exited out” through tunnels and locked down in a cafeteria for 45 minutes. During the wait, Gilbert said, they got periodic updates from police.
One parent, Tony Ward, said that it’s “a shame that this is a part of reality today.”
Mike DeBonis, T. Rees Shapiro, Spencer S. Hsu, Karoun Demirjian, Justin Jouvenal, Victoria St. Martin, Jennifer Jenkins and Lyndsey Layton contributed to this report. ||||| Suspect In Custody After Drawing A Weapon At U.S. Capitol
U.S. Capitol Police shot a man at the Capitol Visitor Center on Monday afternoon after he drew a weapon during a routine security screening, Chief of the U.S. Capitol Police Matthew Verderosa said at a news conference.
Verderosa said the man pulled "what appeared to be a weapon" and an officer shot him. A weapon was recovered at the scene.
Capitol Police later identified the suspect as Larry R. Dawson, 66, of Tennessee and said he has been charged with assault with a deadly weapon and assault on a police officer while armed.
The statement adds, "The defendant is currently in stable but critical condition and will be presented to the District of Columbia Superior Court upon his release from the hospital. The defendant's vehicle has been secured and will be searched upon the granting of a search warrant."
Verderosa said the suspect is known to the Capitol Police from prior contact and there is "no reason to believe this is anything more than a criminal act."
A female bystander also suffered injuries that appeared to be minor, Verderosa said.
A lockdown at the Capitol Complex was lifted shortly after the suspect was apprehended.
Early news reports said at least one officer had been wounded; these reports were incorrect.
Staffers, reporters and others had been told to "shelter in place" and were not allowed to exit or enter any buildings. Congress is currently in recess. The Senate Sergeant-At-Arms Office said that the Capitol is open for official business.
The White House was briefly put on lockdown, according to news reports.
Washington's Metropolitan Police Department tweeted that there "is no active threat to the public."
The Associated Press reported that "visitors were being turned away from the Capitol as emergency vehicles flooded the street and the plaza on the building's eastern side. Police, some carrying long guns, cordoned off the streets immediately around the building, which were thick with tourists visiting for spring holidays and the Cherry Blossom Festival."
The Visitor Center is part of the U.S. Capitol Complex, a group of about a dozen buildings in Washington, according to its website. The center opened in 2008 and serves as an underground screening point for visitors to the U.S. Capitol. After years of discussion about a facility for visitors, construction on the center began after the 1998 killing of two U.S. Capitol Police officers at the ground floor entrance.
This is a developing story. Some things that get reported by the media will later turn out to be wrong. We will focus on reports from police officials and other authorities, credible news outlets and reporters who are at the scene. We will update as the situation develops. ||||| News4's Darcy Spencer has new details on a suspect who was shot when he brought a pellet gun into the Capitol Visitor Center (Published Monday, March 28, 2016)
A Capitol police officer shot and injured a man who brought a weapon into the U.S. Capitol Visitor Center Monday afternoon, the chief of Capitol Police said.
An officer fired after the man pointed what appeared to be a weapon at him, U.S. Capitol Police Chief Matthew Verderosa said. The man was wounded and was in surgery Monday afternoon at Washington Hospital Center, where he is listed in critical condition, according to hospital officials.
The U.S. Capitol Police Department said Larry R. Dawson, 66, of Antioch, Tennessee, has been charged with assault with a deadly weapon and assault on a police officer while armed. They said Dawson's vehicle was located near the Capitol and was secured at a separate location.
A female bystander received minor injuries and also was taken to a hospital.
Police and EMS personnel transport the person believed to be the gunman away from the shooting scene at the U.S. Capitol Visitor Center on March 28, 2016 in Washington, D.C. EDITOR'S NOTE: The suspect's face was slightly obscured digitally at the request of the DC Fire and EMS Department.
Photo credit: Ricky Carioti / The Washington Post via Getty Images
"We believe that this is the act of a single person who has frequented the Capitol grounds before, and there is no reason to believe this is anything more than a criminal act," Verderosa said at a news conference.
The man went through a metal detector at the visitor center, an alarm went off and he pulled out the gun, two sources told News4's Shomari Stone.
"It appears the screening process worked as intended," Verderosa said, noting the suspect has not yet been charged. A weapon was recovered at the scene.
NBC News’ Pete Williams reported early Monday evening that the weapon was a pellet gun. Dawson was known to U.S. Capitol Police and was a frequent visitor, Williams reported.
Dawson is facing charges for allegedly standing up and shouting Bible verses in October 2015 in the House Chamber Gallery. According to court documents from the Superior Court for the District of Columbia, Dawson identified himself as a “Prophet of God” to the people in the gallery.
According to the documents, Dawson was removed from the gallery and, while being taken out of the building, pushed a police officer and began to run. He was caught and charged with assault on a police officer.
A Stay Away Order was issued to Dawson, including a map of the area he was to avoid: the U.S. Capitol building and grounds, as well as all congressional buildings.
The U.S. Capitol complex was locked down about 3 p.m. The shelter-in-place order was lifted at 3:45 p.m., but the Capitol was open only for official business. The visitor center remained closed.
At first, anyone outside was advised to seek cover immediately, U.S. Capitol police said. The D.C. Police Department later said in an update that there had been an isolated incident and there was no threat to the public.
Visitors were turned away from the Capitol as emergency vehicles flooded the street and the plaza on the building's eastern side. Police, some carrying long guns, cordoned off the streets immediately around the building, which were thick with tourists visiting for spring holidays and the Cherry Blossom Festival.
Initial reports by The Associated Press said a police officer sustained minor injuries. Sources told Williams and Stone that was not the case. Verderosa said no officers were injured.
The visitor center will be open as usual on Tuesday, Verderosa said.
Jill Epstein, executive director of the California Association of Marriage and Family Therapists, told NBC News she was at the visitor center on a lobbying trip to meet a senator when an active shooter was reported.
"I was with a group of my colleagues walking into the visitor center and as we were literally going through the metal detectors, people started screaming, 'Get out! Get out!' We didn't know which way to run. We ran out and and they told us to get against the wall so we were crouching against the wall outside the visitor center,” she said.
"Police appeared out of everywhere and they were screaming, 'Run for it! Run for it! Run up that ramp!' And we ran like you see in videos. It was surreal. It was so beautiful out and the cherry blossoms are in bloom and people are running for their lives. It was unsettling and scary," Epstein said.
The witness said one of her colleagues bolted for the door without his watch, wallet or phone, which were still on the conveyor belt of the metal detector.
It's the second time in less than a year that the U.S. Capitol was locked down due to a gun incident. Last April, a 22-year-old from Lincolnwood, Illinois, fatally shot himself on the building’s west front, triggering an hourslong lockdown.
A dental hygienist from Connecticut, Miriam Carey, 34, was shot and killed outside the Capitol Oct. 3, 2013, after police said she tried to ram a temporary security barrier outside the White House with her car and then struck a Secret Service uniformed division officer. She then fled the scene, leading police on a chase.
According to a subsequent investigation by the News4 I-Team, U.S. Capitol Police stopped at least 13 people from carrying guns on or near Capitol grounds from 2012 to 2015.
On July 24, 1998, two U.S. Capitol Police officers were killed after a gunman stormed past a Capitol security checkpoint and opened fire. Officer Jacob J. Chestnut Jr. was fatally shot at the checkpoint, and a tourist was injured in the initial crossfire between the gunman and police. Detective John M. Gibson then told congressional aides to seek cover before exchanging gunfire with the shooter. Gibson was fatally wounded, but police say his actions allowed other officers to subdue the gunman. | Officials confirm the man accused of pulling a weapon only to be shot by US Capitol police at the Capitol Visitor Center on Monday is Larry Russell Dawson, a minister from Antioch, Tenn., who is charged with assault with a deadly weapon and assault on a police officer while armed, reports NBC Washington. At least one officer fired at Dawson, 66, after a metal detector sounded and he revealed "what appeared to be a weapon" around 2:40pm, US Capitol Police Chief Matthew Verderosa says, per NPR. Officials tell the Washington Post that Dawson was shot in the chest and thigh; he is listed in stable but critical condition. A female bystander reportedly suffered a minor facial injury from a bullet fragment; her condition is unknown. "It appears the screening process worked as intended," says Verderosa, adding that a weapon was recovered from the scene. An official tells the Wall Street Journal that it may have been a pellet gun. Dawson's truck was also found on the premises and "will be searched upon the granting of a search warrant," police say. Dawson had been ordered to keep away from the Capitol after an incident last October when "he began shouting Bible verses" from the chamber gallery of the House of Representatives, according to an affidavit. He was then charged with assaulting and resisting police and disorderly conduct. |
The National Performance Review (NPR) was begun by the President in March 1993 and is a major management reform initiative of the administration under the direction of the Vice President. In September 1993, the Vice President published 384 NPR recommendations designed to make the government work better and cost less. We have commented on these recommendations and discussed their implementation in two previous reports.

In an April 1993 letter, the Vice President asked the heads of federal agencies “to pick a few places where we can immediately unshackle our workers so they can re-engineer their work processes to fully accomplish their missions—places where we can fully delegate authority and responsibility, replace regulations with incentives, and measure our success by customer satisfaction.” In response to the Vice President’s request, dozens of federal agencies have established reinvention labs throughout the government.

Although similar in some respects to pilot projects that federal agencies have used on numerous occasions to test new procedures, the reinvention lab concept originated at the Department of Defense (DOD) during the mid-1980s. DOD’s model installation program was initiated by the then Deputy Assistant Secretary of Defense for Installations (DAS/DI). The program focused on reducing the amount of regulation governing administrative functions at certain military installations. Through this program, DOD identified hundreds of pages of regulations governing military installations that it believed did not make sense or wasted time and money. The DAS/DI waived as many DOD regulations as possible and allowed the base commanders to operate the installations in their own way. According to an NPR official, the program was enthusiastically supported by the installations, which began to improve not only administrative operations but also mission-related functions. The model installation program became so successful that DOD opened it to all military installations in March 1986.

In early 1993, the DAS/DI was appointed the Director of the overall NPR effort. According to an NPR official, the Director suggested to the Vice President that “reinvention labs” similar to the model installations be established within all federal agencies as part of the administration’s governmentwide effort to improve government operations and save money.

The NPR effort is headed by the Vice President, but the day-to-day operation of the effort is the responsibility of an NPR task force that comprises staff from various federal departments and agencies. The staff are assigned to the task force for a temporary period, usually 3 to 6 months. The total number of staff assigned to the task force has varied over time but has usually been between 40 and 60. About 10 of these staff have worked on the NPR task force since it was established in 1993, but even they technically remain employees of their home agencies.

The NPR task force has attempted to advertise and promote the reinvention lab effort in a variety of ways. For example, the task force has sponsored or cosponsored several reinvention lab conferences (with another scheduled for March 25-27, 1996) and has periodically published information about the labs. It has also developed a lab database, using information voluntarily submitted by the labs, that identifies each lab’s agency, location, contact person, and other general information about its reinvention efforts.
However, consistent with its overall philosophy, the NPR task force has avoided control mechanisms and has consciously taken a “hands-off” approach to the development and oversight of the labs. NPR officials said it is up to each agency to decide whether it will have any labs and, if so, how they should be structured and operated. The NPR task force has not required agencies to notify it when labs are created or to report to NPR on their progress. In fact, the task force recommended that labs not be required to file progress reports with their agencies’ management. Overall, agencies have been allowed to operate reinvention labs as they believe appropriate, without top-down control or interference from the task force. The task force views its role as encouraging federal agencies to establish reinvention labs and highlighting those labs that are “success stories” and that focus on customer service.

The Office of Management and Budget (OMB) has played less of a role in the reinvention lab effort than the NPR task force. OMB has not been involved in the labs’ designation or their oversight and does not collect or disseminate information about the labs. However, OMB officials said that OMB program examiners are generally aware of the existence of labs in the agencies for which the examiners have responsibility. OMB is responsible for providing management leadership across the executive branch and therefore can be important to the implementation of NPR management improvement ideas. In fact, OMB has already begun to play that role in some areas. For example, during the fiscal year 1996 budget cycle, OMB stressed agency downsizing plans and the use of performance information—key elements of the overall NPR effort—during its reviews of agencies’ budget submissions. OMB itself was “reinvented” as part of the NPR effort when its budget analysis, management review, and policy development roles were integrated into a new structure designed to improve the decisionmaking process and the oversight of executive branch operations.

After the Vice President’s April 1993 letter, each federal agency was made responsible for designating organizational units, programs, or new or ongoing initiatives as reinvention labs. Although their comments in the intervening period provide some indication of what kinds of reinvention projects they envisioned, neither the Vice President nor the NPR task force has established specific criteria defining a lab. The NPR’s stated aspirations for the effort, however, were broad: “We hope this process will involve not only the thousands of federal employees now at work on Reinvention Teams and in Reinvention Labs, but millions more who are not yet engaged. We hope it will transform the habits, culture, and performance of all federal organizations.”

In October 1993, representatives from reinvention labs at a number of agencies attended a conference in Hunt Valley, MD, at which they discussed their ideas and experiences.
One of the key topics of discussion at the conference was, “What is a reinvention lab?” The conference proceedings stated that a lab “is a place that cuts through ‘red tape,’ exceeds customer expectations, and unleashes innovations for improvement from its employees.” The proceedings listed five areas of consensus about the characteristics of a reinvention lab: (1) vision (continually improving value to customers); (2) leadership (unleashing the creativity and wisdom in everyone); (3) empowerment (providing employee teams with resources, mission, and accountability); (4) incentives (offering timely “carrots” for innovation and risk-taking); and (5) accountability (ensuring the customer is always right). The Vice President said that reinvention labs were doing the same things as the rest of the agencies, “only they’re doing them faster.”

Several of the Vice President’s and NPR officials’ comments about the reinvention labs centered on the labs’ ability to avoid complying with regulations that could encumber their efforts. As noted previously, the Vice President told agencies in his April 1993 letter that regulations should be replaced with “incentives” in the labs. NPR officials also told the reinvention labs that they should be provided freedom from regulations. A number of the comments at the Hunt Valley conference focused on eliminating red tape and unnecessary regulations.

Another recurring theme in the Vice President’s comments and NPR publications has been the need to communicate about lab results. At the Hunt Valley conference, the Vice President said that reinvention labs “will need to share what they learn and forge alliances for change.” A 1993 NPR report also voiced support for spreading reinvention ideas.

Reinvention labs are but one of a number of efforts initiated in recent years by the administration or Congress to reform the operation of the federal government. Because these other reform efforts were being implemented at the same time that the reinvention labs were being initiated, they may have affected the labs’ development. For example, the Government Performance and Results Act (GPRA), enacted in August 1993, was designed to improve the effectiveness and efficiency of federal programs by establishing a system to set goals for program performance and to measure results. GPRA requires federal agencies to (1) establish 5-year strategic plans by September 30, 1997; (2) prepare annual plans setting performance goals beginning with fiscal year 1999; and (3) report annually on actual performance toward achieving those goals, beginning in March 2000. As a result of GPRA’s requirements, greater emphasis is to be placed on the results or outcomes of federal programs. OMB is responsible for leading the GPRA implementation effort and has designated more than 70 programs and agencies as pilots.

As noted previously, the reinvention lab effort was initiated in 1993 at about the same time that the original NPR recommendations were being developed. As part of that effort, the 1993 NPR report said that the civilian, nonpostal workforce could be reduced by 252,000 positions during a 5-year period. The report said these cuts would be made possible by changes in agencies’ work processes and would bring the federal workforce to its lowest level since the mid-1960s. In 1994, Congress enacted the Federal Workforce Restructuring Act, which mandated an even greater 5-year workforce reduction of 272,900 positions.
The September 1995 NPR status report estimated that more than 160,000 jobs had already been eliminated from the federal government. In December 1994, the administration launched a second phase of the NPR effort, referred to as NPR II. One aspect of NPR II was an agency-restructuring initiative in which the Vice President asked the heads of each agency to reexamine all of their agencies’ functions and determine what functions could be eliminated, privatized, devolved to state or local governments, or implemented in a different way. The agencies developed a total of 186 agency-restructuring recommendations, which were aggregated and published in the September 1995 NPR status report. For example, the Department of Housing and Urban Development (HUD) proposed consolidating 60 grant programs into 3, giving greater flexibility to governors and mayors.

There have also been several recent congressional proposals to reform the federal government. For example, in May 1995, the Senate Committee on Governmental Affairs held hearings on proposals for the elimination of the Departments of Commerce, Housing and Urban Development, Energy, and Education. In February 1995, the House Committee on Economic and Educational Opportunities proposed merging the Departments of Education and Labor and the Equal Employment Opportunity Commission into a single department. There has also been a proposal to combine elements of the Departments of Commerce and Energy with the Environmental Protection Agency and other independent agencies to create a Department of Science.

Although reinventing government and the NPR effort have been frequently discussed in the professional literature, relatively little has been written about reinvention labs. In the Brookings Institution’s Inside the Reinvention Machine: Appraising Governmental Reform, one author briefly mentioned several agencies’ labs and said they were but one component in the agencies’ reinvention efforts. She also said the labs frequently were “bottom-up” reform processes, sending a message to the staff that “we’re all in this together.” Another author in this volume said that the labs “represent exciting innovations in the federal government” and that they were generating “an impressive amount of fresh ideas and information about how government workers can do their jobs better.” However, he also noted that there had been no systematic survey of what the labs had accomplished. An article devoted exclusively to reinvention labs described the lab effort as a struggle between advocates for change and those holding power within the agencies. The author describes labs at several agencies (e.g., the Departments of Agriculture and Education and the General Services Administration), noting that in some cases entire agencies have become labs (e.g., the Agency for International Development and the Federal Emergency Management Agency). Other articles have briefly discussed the activities of a few reinvention labs, but no research efforts have systematically collected information about all of the labs.

We initiated this review of the reinvention labs as part of our ongoing body of work examining NPR issues. The objectives of this review were to determine (1) the focus and developmental status of the labs, (2) the factors that hindered or assisted the development of the labs, (3) whether the labs were collecting performance data, and (4) whether the labs had achieved any results.
We addressed all of these objectives by conducting a telephone and fax survey of all of the reinvention labs. However, to design and conduct the survey, we had to obtain preliminary information from the NPR task force, agencies, and some of the labs themselves. We obtained information from the NPR task force’s database about the labs’ locations, their developmental status, the subject areas covered, and a contact person at each of the lab sites. As of February 1995, NPR’s database indicated that there were 172 labs. However, NPR’s database did not include some labs and double-counted others. After contacting officials responsible for the labs in each of the agencies that the task force reported had ongoing efforts, we later concluded there were 185 labs active as of early 1995.

The NPR task force told us that the regional labs were further along in the implementation process than the labs in the Washington, D.C., area. Therefore, we conducted a structured interview of the regional labs by telephone in the summer of 1994 to obtain information on their status, the type of procedure or process being reinvented, and any results the labs had produced. Using the information obtained from these contacts, we selected 12 labs to visit on the basis of two criteria: (1) labs that represented a variety of procedures or processes being reinvented (e.g., procurement, personnel, financial management, or general operations); and (2) labs that had generally progressed to at least the planning stage. We visited each of these 12 labs and obtained detailed information concerning each of our objectives. We developed case studies on each of the 12 labs and subsequently sent them to both the lab officials from whom we gathered the data and the agencies’ headquarters for their review and comment. Their comments were incorporated into the final version of the case studies. (For a list of these labs, see app. I. See apps. II through XIII for the full case studies.)

We then conducted two surveys of all 185 of the labs—first a telephone survey and then a fax survey—and received responses from 181 of the labs (98 percent). The telephone survey was primarily designed to obtain a general description and overview of the labs’ operations. We sent the second survey to the respondents by fax after the completion of the telephone survey. If a lab focused on more than one area for reinvention (i.e., the lab was engaged in multiple lines of effort), we asked the respondent to focus his or her answers to the fax survey on the lab’s primary line of effort. (See app. I for a list of the labs by agency and subject category.)

The fax survey consisted primarily of structured multiple-choice items that focused on each of our objectives. (See app. XIV for copies of the telephone and fax surveys.) Questions focused on such issues as the lab’s developmental status and the nature and extent of performance data being collected. We also asked questions about a number of factors that could affect the labs’ development—e.g., waivers from certain regulations, communication with other labs and the NPR task force, and agency management support. On the basis of comments made by lab officials during our site visits, we selected these factors for specific follow-up in the survey phase of our work. They may not cover all possible factors affecting lab development. We did not independently verify the information we received from any of the information sources—the NPR task force, the site visits, the telephone survey, or the fax survey.
For example, if a survey respondent said that his or her lab had collected performance data or had communicated with other labs, we did not assess those data or check with the other labs. However, we did collect some relevant documents and data regarding these issues during our site visits to the 12 labs.

We conducted our work between June 1994 and August 1995 in accordance with generally accepted government auditing standards. The telephone and fax surveys were administered between April and July 1995, so the survey data are as of those dates. Although we attempted to survey all of the reinvention labs in the federal government, we cannot be sure that the 185 labs we contacted included all agencies’ labs. Others may have been active at the time of our survey without our being aware of them, either because there was no specific definition of a reinvention lab, because the NPR task force did not keep an accurate record of the number of operating labs, or because we were denied access to agency officials. In one instance, we were unable to verify the existence of a lab appearing on NPR’s list as being at the Central Intelligence Agency (CIA) because a CIA official said that it was the agency’s standard policy to deny GAO access to CIA reinvention activities. Also, other labs may have been developed since the survey was conducted.

We submitted a draft of each case study to the relevant lab and agency headquarters officials for their review and have incorporated their comments into the final version of each appendix. On December 27, 1995, we submitted a draft of this report to the Vice President (as head of the NPR effort) and to the Director of OMB for their review and comment. Their comments are described at the end of chapter 5.

In the reinvention labs, agencies were supposed to experiment with new ways of doing business, and the NPR task force purposely gave agencies wide latitude in how the labs could be structured and what topics they could address. Agencies were also free to build on existing management reform efforts or to start their reinvention labs from scratch. Aside from the general parameters of customer service and employee empowerment, few restrictions were placed on the labs’ initiation or development.

Federal agencies responded to the Vice President’s call for the creation of reinvention labs in earnest. Labs were designated in dozens of agencies and in virtually every region of the country. Our survey indicated that the labs varied widely in terms of their origin, their stage of development at the time of the survey, the number of reinvention efforts addressed by each lab, and the subject areas covered by the labs. Also, although many of the labs shared a common customer service focus, they differed in whom they defined as their customers. Finally, the survey indicated that a number of the labs’ efforts actually began before the NPR effort was initiated.

As table 2.1 shows, the 185 reinvention labs that had been designated at the time of our survey were spread across 26 federal departments, agencies, and other federal entities. DOD had the most labs (54), followed by the Department of the Interior (DOI) (28). The number of labs in each agency was not always related to its size. Some large agencies had relatively few labs (e.g., the Department of Veterans Affairs), while some comparatively small agencies, such as the General Services Administration (GSA), had initiated a number of labs.
Some agencies that serve the public directly and that had been the subject of both the 1993 and 1995 NPR recommendations had not started any labs at the time of the survey (e.g., the Small Business Administration).

Figure 2.1 and table 2.2 show the number of reinvention labs at the time of our survey within each standard federal region. As the figure illustrates, labs had been established in virtually every federal region, but the mid-Atlantic region (region 3) had over two-thirds of the labs. Most of these labs were located in the Washington, D.C., area, but some affected operations in other areas. Relatively few labs were located in the northeast (regions 1 and 2) or the northwest (region 10). Some of the labs were operated in multiple locations within a single region. For example, one HUD lab effort had several sites that included HUD’s offices in Chicago, Milwaukee, and Cleveland. (See app. VIII for a discussion of this lab.) Other labs had multiple sites located in different standard federal regions. For example, GSA’s Federal Supply Service lab was headquartered in New York City (region 2), but some aspects of the lab were being implemented in Boston (region 1). (See app. VI for a discussion of this lab.)

We asked the survey respondents why their labs were initiated, allowing them to designate more than one closed-ended response category and/or add additional reasons. They indicated that the reinvention efforts were generally focused and uncoerced. As shown in figure 2.2, nearly two-thirds of the respondents said that they were trying to address a specific problem, and over half indicated that they volunteered to become a lab. Only 13 percent of the respondents reported that they were told to pursue their labs by agency officials. Forty percent said their labs were an outgrowth of quality improvement efforts in their agencies.

We also asked the respondents when their labs’ efforts actually began, regardless of when the labs were officially designated as labs. The lab start dates varied widely, ranging from as early as 1984 to as recently as March 1995—1 month before the start of our survey. About one-third of the respondents indicated that their labs’ efforts began before the announcement of the NPR effort in March 1993. The early beginning of so many lab efforts is not surprising given that 40 percent of the respondents said that their labs originated in their agencies’ quality improvement efforts—efforts that started in some federal agencies in the early 1990s. For example, lab officials at the sites we visited told us the following:
• GSA’s reinvention labs in two regional offices originated with the offices’ quality assurance programs that began in 1988 and 1989. (See app. VI and app. VII.)
• The Internal Revenue Service’s (IRS) reinvention lab in Helena, MT, began as a joint quality improvement process launched in 1988 by IRS and the National Treasury Employees Union. (See app. XI.)
• The United States Department of Agriculture’s (USDA) lab on baggage inspection operations in Miami started in 1989 as an effort to improve productivity as staff resources declined and the workload increased. (See app. II.)
• DOI’s efforts to improve information dissemination at the U.S. Geological Survey began in 1986 when it attempted to establish a more efficient and responsive order entry, inventory control, and distribution system. (See app. X.)
Officials from 14 of the labs we surveyed said that they sought lab designations for existing management improvement efforts because the officials thought such designations would give them more latitude to make changes and provide greater visibility for their efforts. For example, one of the survey respondents said that reinvention lab designation provided the lab team with the momentum needed to overcome common barriers to change. During one of the site visits, an official from HUD’s lab on reinventing the field operations of the Office of Public and Indian Housing said that before its lab designation “we could not get in the door at headquarters.” However, he said that after the lab’s designation “the waters parted” and that headquarters officials became interested in the new oversight approach. (See app. VIII for a discussion of this lab.) Other respondents said that being designated as a reinvention lab provided the mechanism by which they could seek waivers from cumbersome rules and regulations that had been an impediment to previous management reform efforts.

The 1993 NPR report called for a new customer service contract with the American people—a new guarantee of effective, efficient, and responsive government. The report also stated that federal agencies were to provide customer service equal to the best in business. In his April 1993 letter calling for the creation of reinvention labs, the Vice President said the labs were to measure their success by customer satisfaction. Consistent with this goal, 99 percent of our survey respondents said that customer service improvement was a primary goal of their labs to at least “some extent”; 93 percent of the respondents said this was true to a “great” or “very great” extent. (See ch. 4 for information on the labs’ collection of performance data.)

The survey respondents frequently indicated that the changes that were occurring in their reinvention labs represented a substantially different mode of operation, not simply a minor change in procedures. Over 65 percent of the respondents said that their reinvention labs involved changing the way staff in their agencies did their work to a “great” or “very great” extent. Over 20 percent said that changes in work processes occurred to a “moderate” or “some” extent. Lab officials reported the following examples:
• The Defense Logistics Agency’s (DLA) lab on inventory management made significant changes in its work processes and staff roles. DLA officials said they shifted from acting as a wholesaler who buys, stores, and sells inventory to acting as a broker who obtains the most efficient and effective military support for its customers through any appropriate mechanism—including the use of private-sector vendors to store and distribute inventories. (See app. IV.)
• The U.S. Geological Survey’s information dissemination lab improved internal communications and job processes by combining the organizational unit that took map purchasing orders with the unit that filled the orders and by cross-training staff. (See app. X.)
• GSA’s mid-Atlantic regionwide lab improved customer service in the region’s Public Buildings Service office by shifting staff from working as teams of specialists responsible for moving projects through their segments of a work process to working as multidisciplinary teams made up of specialists responsible for processing one project. (See app. VII.)
About two-thirds of the respondents who said that their labs were involved in changing the way staff did their work indicated that the changes improved customer service to a “great” or “very great” extent. However, only 20 percent of the respondents indicated that these changes required substantial alterations in their agencies’ personnel systems.

The labs’ definitions of their customers varied from lab to lab. Given the opportunity to choose more than one response category, the respondents described their labs’ customers as the general public; their agencies’ constituencies; another government organization (e.g., federal, state, or local); and/or other offices within their own agencies. Almost two-thirds of the respondents said their labs’ customers were both internal and external to the government. For example, officials in HUD’s lab on reinventing the field operations of the Office of Public and Indian Housing said that their lab’s customers included the residents of the public housing units and the local governments’ public housing authorities who operated the housing units. (See app. VIII.) Overall, the two most frequently selected response categories for customers were “another government organization” and “other offices within the lab’s agency”; 18 percent of the respondents said that these were their labs’ only customers. For example, the Department of Commerce’s reinvention lab in Boulder, CO, defined its customers as the scientists and engineers working within the department’s scientific laboratories. (See app. III.)

We asked the survey respondents to characterize their labs’ stage of development in one of six categories: (1) planning stage (no implementation begun), (2) implementation begun but not completed at the lab site, (3) implemented at the lab site only, (4) implemented at the lab site and planning or implementation begun at other sites, (5) implemented at the lab site and at other sites, or (6) other. As figure 2.3 shows, the respondents were about equally divided (49 percent in each group) between those who said that their labs had been at least implemented at the lab site (responses 3 through 5) and those that had not reached that stage of development (responses 1 and 2). The most common single response (35 percent) was “implementation begun but not completed.”

We also asked the respondents whether their labs were focused on a single effort or multiple lines of effort. Nearly two-thirds (63 percent) of the respondents said that their reinvention labs had only one line of effort. As figure 2.4 shows, DOD labs were much more likely to report multiple lines of effort (58 percent) than were civilian labs (29 percent). A line of effort is not the same as a subject category. For example, a lab with only one line of effort can address a variety of subjects, including personnel management, procurement, information technology, and financial management.

Nearly three-fourths of the survey respondents indicated that their labs were focused on more than one subject area. The most commonly cited subject area was operations (72 percent), followed by information technology (60 percent), personnel (45 percent), procurement (45 percent), and financial management (39 percent).
Examples of these subject areas include the following:
• In an operations lab, USDA officials examined ways to improve the operation of their airport baggage inspection program by permitting more self-direction by employees and allowing them to identify ways to improve procedures. (See app. II.)
• An information technology lab explored the use of electronic media, such as the Internet, E-mail servers, fax on demand, and the World Wide Web, to disseminate information on the latest medical research from sources around the world.
• A procurement lab established teams of customers, contractors, and contract administration officials to identify areas for process improvements. The lab was also trying to develop a “risk management” approach to contract administration in which the lab’s level of contractor oversight would be linked to an assessment of the contractor’s performance.

In addition to the traditional subject area categories previously mentioned, analysis of the survey respondents’ comments in the survey and during our site visits indicated three crosscutting areas of interest: (1) marketing services and expertise; (2) using electronic commerce (EC) and electronic data interchange (EDI) to improve operations, such as procurement and benefit transfers; and (3) developing partnerships with other levels of government, the private sector, and customers. (See app. I for a complete list of these reinvention labs.)

The 1993 NPR report advocated creating competition between in-house agency support services and what it termed “support service enterprises”—federal agencies that offer their expertise to other agencies for a fee. Officials from 20 reinvention labs said that their labs were planning or implementing these kinds of reforms, using marketing techniques to expand their customer base. Examples of marketing services include the following:
• Two of the labs were department training centers that were attempting to become self-sufficient by charging fees for their services. In addition to marketing their training courses, officials from both centers said they were contracting with other agencies to provide consulting services.
• One respondent said that his lab was experimenting with franchising its contracting services to civilian agencies. Lab officials developed a standard rate to be charged for their services and had signed agreements with other agencies to provide those services.
• One respondent said that his lab had successfully marketed its organic waste disposal services to other federal, state, and local agencies. He also said that the lab generated additional income by recycling these wastes for resale as compost.

One DOD official said that existing statutes had prevented his lab from marketing its duplicating services to non-DOD agencies. He said Congress requires federal agencies to contract printing and duplicating to the private sector via the Government Printing Office (GPO), which applies a surcharge. However, he said that one of our recent reports noted that some of the agency’s in-house duplicating services were about 57 percent cheaper than GPO’s prices.

The 1993 NPR report recommended that federal agencies adopt EC and EDI techniques that the private sector had been using for some time because, NPR said, they can save money. Respondents for 38 labs said that their labs were in the process of implementing EC and EDI systems to enable them to easily transfer information on financial and procurement transactions and on client services and benefits.
For example, DLA officials said the agency was using EC and EDI to develop a paperless, automated system for critical documents in the contracting process, including delivery orders, requests for quotations, bid responses, and awards. They said that this system would ultimately provide a standard link among DLA, its customers, and suppliers in the private sector. (See app. IV.)

At the time of our survey, 54 labs reported attempting to develop partnerships with other levels of government, labor organizations, contractors, and/or their customers. Several of these partnership efforts focused solely on intra- or intergovernmental relations. For example, one official said his lab was working with other federal agencies and state and local government agencies to design an ecosystem management strategy. Another lab was focused on developing an automated prisoner processing system for use by five federal law enforcement entities.

Officials for 16 other labs said that their labs were developing partnerships with contractors, academia, or the private sector. For example, at the Department of Energy’s (DOE) Hanford reinvention lab, the department entered into an agreement allowing a private company to disassemble and use excess equipment, saving the government $2.6 million in disposal costs. In another lab, agency officials and contractors formed teams to rework contracting processes and shift oversight from an adversarial position to a team approach so that both the agency and its contractors could lower oversight costs. Nine respondents said that their labs were establishing partnerships with employee unions. For example, officials at the Commerce Department’s Boulder reinvention lab said that their efforts had built a strong union-management relationship by changing the rigid work environment so that skilled workers would be able to work together as teams and supervisors could perform more as coaches than managers.

Reinvention labs were intended to be agents of change in the federal government. As such, they have faced many of the same challenges as other change agents—eliminating rules that stand in the way of progress, ensuring top management support, communicating with others attempting similar changes, and coping with cultural resistance. However, some of the challenges the reinvention labs faced were particularly difficult, such as attempting to initiate new ideas or new work processes while their organizations were shrinking and while other management reform efforts were being implemented.

We asked the survey respondents to provide information on a variety of factors that could have hindered or helped the development of the labs, and some of the results were contrary to our initial expectations. For example, many of the lab officials said they had not sought waivers from regulations, even in labs that were fully implemented at the lab site. Few reported substantial communication with other labs or with the NPR task force. However, over 80 percent enjoyed top management support. Analysis of the survey responses also indicated other factors that the respondents said affected the development of their labs.

One of the NPR effort’s recurring themes is that regulations and red tape stifle the creativity and ability of federal workers to solve problems and improve service to the public.
At the Hunt Valley reinvention lab conference in October 1993, NPR officials encouraged the labs to request waivers from requirements imposed on them “which are barriers to reinvention.” The Vice President said that he was looking to the reinvention labs to identify “barriers that stand in the way of getting the job done in the right way” and to “drive out rules and regulations that just don’t make sense anymore.” A September 1993 NPR report noted that carefully crafted waiver requests and prompt review of these requests can be “experiments for government’s reinvention.”

Regulations can come from a variety of sources. Some regulations are promulgated by central management agencies—e.g., OMB, GSA, or the Office of Personnel Management (OPM)—and apply to all or virtually all federal agencies. Other regulations are issued by line agencies and apply only to the issuing agency. In the reinvention lab effort, the entity that establishes a regulation is to receive and rule on any waiver requests.

Although they were encouraged to seek regulatory waivers, 60 percent of the survey respondents who answered the question said that their labs had not sought such waivers. Of these respondents, about half said that they considered seeking a waiver but did not do so; half said they had not even considered seeking a waiver. When asked why their labs did not seek waivers, the respondents most commonly indicated that waivers were not needed to accomplish their labs’ goals (54 percent) or that it was too early in the reinvention process to seek waivers (30 percent). (Respondents were allowed to select more than one response category for this question.)

The relationship between the labs’ stage of development and their propensity to seek waivers was supported by other data in the survey. As figure 3.1 shows, labs that were at least fully implemented at the lab site were almost twice as likely to have requested a waiver as labs that had not reached that stage of development. However, nearly half of the fully implemented labs had not sought any regulatory waivers at the time of the survey. Over two-thirds of the respondents for the fully implemented labs that had not sought a waiver said that a specific waiver was not needed to accomplish their labs’ goals, and they cited a variety of reasons. For example:
• In some labs, the agencies reported that constraints on pre-lab operations were nonregulatory and that removal of the constraints did not require a waiver. For example, officials from one reinvention lab planned to request a general waiver from using GSA’s supply schedule to enable the site’s supply room to seek the best value for each product it provides. According to an official, this request was dropped because lab officials discovered that procurement rules allowed agencies to ignore the supply schedule if a local source could provide the product at a lower price.
• In other labs, a blanket waiver of internal regulations, or a delegation of authority, provided by agency headquarters eliminated the need for individual waiver requests. In blanket waivers, agency headquarters typically granted labs the authority to make their own decisions on which agency-specific rules to eliminate without asking for prior permission. For example, GSA gave the Mid-Atlantic Regional Administrator a blanket waiver from nonstatutory internal rules and regulations that might hinder the development of the region’s lab. (See app. VII.)
• In another lab, officials told us that passage of the Federal Acquisition Streamlining Act removed the legislative barriers to the lab’s reform efforts. Therefore, lab officials said they did not need to go forward with their proposals to waive contracting rules and regulations.

The survey respondents indicated that their labs had requested nearly 1,000 waivers from regulatory requirements. Some respondents said their labs had requested only one waiver, but other labs reported requesting dozens of waivers. The respondents also indicated that their labs’ waiver requests involved regulations in a range of subject areas. One-third of all the waivers requested involved agency work process rules or regulations, with the remaining two-thirds about equally divided among personnel rules, procurement rules, and other rules. Examples of agency work process regulations include the following:
• Officials from GSA’s office products lab requested a waiver from an agency work process regulation requiring the use of a certain quality assurance technique so that they could replace it with another, reportedly better, technique. (See app. VI.)
• The reinvention teams at the U.S. Bureau of Mines’ reinvention lab proposed 21 changes to departmental procedures, such as altering the review process for computer equipment acquisition, removing restrictions on the use of local attorneys to process patent paperwork, and eliminating one level of supervision within the lab’s research center. (See app. IX.)
• Contracting officials from the Department of Veterans Affairs’ (VA) reinvention lab in Milwaukee requested nine waivers from both departmental regulations and the governmentwide Federal Acquisition Regulation (FAR). Eight of these waivers were pending at the time of our review, including an authorization to remove annual contracts from the current fiscal year cycle and to permit the lab to participate with private-sector purchasing groups in best value purchasing. (See app. XII.)

As shown in figure 3.2, over half of the waivers the labs sought were reported to be from agency-specific rules issued by the respondent’s own agency, and nearly one-third of the requested waivers were from governmentwide rules issued by central management agencies. The respondents said the remaining 16 percent of the waiver requests focused on rules from other sources (e.g., an executive memorandum), or the respondents were unsure of the source of the regulation from which the waiver was requested.

The survey respondents frequently said that it was difficult to obtain waivers from both governmentwide and agency-specific regulations, but they indicated that waivers of governmentwide rules issued by central management agencies, such as GSA, OMB, or OPM, were the most difficult to obtain. More than three-fourths of the respondents who offered an opinion said it was difficult to obtain a waiver from governmentwide rules, with nearly twice as many choosing the “very difficult” response category as the “somewhat difficult” category. Only 7 percent of the respondents said it was “easy” to obtain waivers from governmentwide rules. In contrast, 50 percent of the respondents who sought a waiver from rules issued by their own agencies said such waivers were “difficult” to obtain.
Of these respondents, most said obtaining agency-specific waivers was only “somewhat difficult,” and 31 percent said it was “easy.”

The difficulty survey respondents reported in obtaining waivers from governmentwide regulations was also reflected in waiver approval rates. As shown in figure 3.3, lab officials said that over 60 percent of their labs’ requests for waivers from agency-specific rules had been approved at the time of our survey, compared with only about 30 percent of the requests for waivers from governmentwide regulations.

Lab officials also reported other types of problems when they requested regulatory waivers. For example, officials from the Pittsburgh Research Center lab in the U.S. Bureau of Mines said the lab team spent a substantial amount of time concentrating on waiver requests that were beyond the scope anticipated by NPR officials. The lab team said they were not clearly warned by DOI management that “overturning statutes was off-limits” when requesting waivers. (See app. IX.)

Officials from three different reinvention labs said that they found it difficult to use the delegations of authority to waive regulations that their agencies’ headquarters had given them. For example, officials from these labs said that they had to obtain approval from legal counsels to use that authority and that getting this approval proved to be just as time-consuming as it would have been to get a specific waiver from headquarters. Officials from the Commerce Department’s Boulder reinvention lab said that they tried to use their waiver authority to develop alternative procedures to abolish three staff positions. In keeping with one of the lab’s areas of emphasis, building management and labor partnerships, field managers worked with the local union president to develop an alternative procedure that was less disruptive than the traditional one. However, one lab official said that even though the lab had been given authority to deviate from procedures, headquarters officials required extensive documentation and heavily reviewed the proposal. The lab official said as many as 19 headquarters officials were involved in reviewing and approving every aspect of these procedural changes. (See app. III.)

Top management support is crucial to the successful management of changes within organizations, particularly changes of the magnitude envisioned by the Vice President. Top management can provide needed resources and remove barriers that may stand in the way of organizational changes. On the other hand, managers can also negatively affect changes by withholding needed resources and erecting barriers that effectively prevent changes from occurring.

Eighty-three percent of the survey respondents who expressed an opinion said top management in their agencies (i.e., the Office of the Secretary/Agency Head) was supportive of their reinvention labs, and 77 percent said that upper level career managers were also supportive. In some cases, lab officials said that top management was the leading force behind the reinvention labs. For example, staff developing DOI’s U.S. Geological Survey lab said their lab proposal was approved by headquarters because of the active support of the department’s leadership. (See app. X.) DLA officials said that their top management pushed for a total overhaul of the agency before the start of the NPR effort and that the reinvention labs provided a vehicle for enhancing the visibility of these reforms. (See app. IV.)
An official from IRS’ reinvention lab said that IRS management expressed its support for that lab by approving a memorandum of understanding between the lab and its regional office. Included in the memorandum was a commitment from the regional commissioner to provide oversight and program support to the lab, to reduce the reporting requirements on front-line managers, and to offer assistance in implementing the reinvention ideas. (See app. XI.)

However, in a few cases labs reported that they were adversely affected by a lack of top management support or attention. For example, one lab official said his lab initially had a high-level supporter in headquarters who could get waivers and delegations of decisionmaking authority approved. However, he said that when the lab lost this supporter, other headquarters officials began to actively resist the lab’s efforts, and some even engaged in what he termed “pay-back.” Another survey respondent said managers in his agency were inattentive to the agency’s lab. The respondent also reported that management was unconcerned about the lab’s progress; did not provide needed resources (e.g., relieving the reinvention team of their usual duties); and did not direct field offices to participate in the lab.

Survey respondents also related examples of resistance to their reinvention efforts from nonmanagerial staff in headquarters. One respondent said that the lab was set up in such a manner that staff members at headquarters, who he said were threatened by the lab’s goals, could obstruct its progress. Another respondent said that staff at her facility had been “frustrated with the NPR experience” and questioned the point of the labs. She said that the lab staff had submitted a proposal to their headquarters that would have allowed them to buy fuel oil from a local supplier at a cheaper price than from their in-house supplier. The headquarters staff sought feedback on the idea from their in-house supplier, who naturally objected to the proposal. On the basis of this response, the headquarters staff denied the request.

The need to communicate lab results has been a recurring theme in NPR publications: “We will transform the federal government only if our actions—and the Reinvention Teams and Labs now in place in every department—succeed in planting a seed. That seed will sprout only if we create a process of ongoing change that branches outward from the work we have already done.”

If the reinvention labs are to “plant seeds” for organizational change, communication of information about what they have tried and how it has worked is essential. Therefore, we asked lab officials about communication with other reinvention labs and with the NPR task force. The respondents who offered an opinion indicated that substantial communication among labs or between the labs and the NPR task force was relatively rare. Only 11 percent of the respondents said that their labs had communicated with other labs to a “great” or “very great” extent, and only 18 percent reported that level of communication between their labs and the NPR task force. Twenty-three percent of the respondents said they had communicated to a “moderate” extent with other labs and with the NPR task force; the stage of lab development had little effect on their responses. Officials in fully implemented labs were no more likely to have communicated with their colleagues in other labs or with NPR staff than officials in labs that had not gotten to that stage of development.
Nevertheless, over 70 percent of the respondents who said they had at least some communication with other labs said it was helpful to the development of their labs. About 68 percent of the respondents reporting that level of communication with NPR staff said it was helpful. For example, one respondent said that DOD held a reinvention lab conference in March 1995 that allowed the agency’s labs to share experiences and exchange ideas. According to lab officials from DOE’s Hanford site reinvention lab, NPR staff assisted them in seeking a waiver enabling DOE to privatize some laboratory services. (See app. V.)

There were clear differences in the responses in this area between DOD lab officials and respondents for the other labs. Whereas over two-thirds of the DOD respondents said that they had at least some communication with other labs, only half of the non-DOD labs indicated this level of lab-to-lab communication. Similarly, DOD lab officials were much more likely to report that this communication had aided in the development of their labs (83 percent) than were respondents from other agencies (59 percent). Interestingly, DOD and non-DOD labs did not differ in the degree to which they communicated with the NPR task force (62 percent for both groups) or in the extent to which they believed that the communication had assisted in their labs’ development (62 percent for DOD labs versus 60 percent for non-DOD labs).

As noted in chapter 1, many of the reinvention labs were initiated or were being implemented at a time when federal agencies were being reduced in size. The September 1995 NPR report estimated that at least 160,000 positions had been eliminated from the federal workforce since early 1993. Because the labs were operating in this environment, we asked the survey respondents whether agency downsizing had a positive, negative, or other effect on their reinvention labs. (The respondents were allowed to check multiple categories.) About 44 percent of the respondents reported that downsizing had a positive effect on their labs, but about 53 percent reported that downsizing had a negative effect.

The respondents mentioned such negative effects of downsizing as slower implementation of lab efforts; loss of corporate memory; and morale problems (e.g., fear, stress, and uncertainty) that resulted in less interest in and support of management reforms and less risk-taking. In addition, some respondents said that downsizing had jeopardized their labs’ ability to achieve desired outcomes and raised concerns that decreasing manpower, coupled with the same or increasing work requirements, would reduce the amount of time respondents had available to focus on lab activities.

The respondents who said downsizing had a positive effect on their labs commonly indicated that it was a catalyst for real change in their agencies. Several of the respondents noted that downsizing forced management and staff to rethink agency operations, support reforms, adopt NPR efforts and labs, and work more collaboratively. A few of these respondents also noted that downsizing led to greater innovation and creativity. Five other respondents said that their labs benefited from the downsizing of other agencies. For example, one lab reported that reductions in other agencies’ contract administration staff increased interest in the contract administration services that the lab was marketing. Thirty-three percent of the respondents reported both positive and negative effects from agency downsizing.
For example, one respondent said that although downsizing had forced staff to consider radical changes that would have otherwise been rejected, it had also reduced the amount of staff, time, and resources available for concentrating on making these improvements.

We also asked the survey respondents what effect, if any, the implementation of GPRA and the agency-restructuring initiative in the second phase of the NPR effort (NPR II) had on their reinvention labs. Compared with their views on downsizing, the respondents were less clear about the effects of GPRA implementation and NPR II’s restructuring on their labs. They were more likely to say that they did not know the effects of GPRA or NPR II on their labs, perhaps because these reforms had not been fully implemented at the time of our survey.

However, the survey respondents were much more likely to indicate that GPRA had a positive effect on the development of their labs (33 percent) than a negative effect (6 percent). For example, they said that GPRA
• complemented and reinforced their labs’ ongoing reinvention efforts;
• promoted the development of performance measures and results-based management systems that were a part of their labs’ goals;
• forced their organizations to focus on performance and to redefine missions, corporate goals, and objectives;
• compelled management to think about how to integrate various management reform legislation, such as the Federal Managers’ Financial Integrity Act of 1982 and the Chief Financial Officers Act of 1990, with the reinvention labs; and
• provided a driving force for interest in, and design of, a new operations evaluation process for the lab.

At least one of the labs was also participating in a GPRA pilot program. As a pilot site, VA’s New York Regional Office’s claims processing lab developed a new system of measures, including one that VA officials said enabled teams to determine how productive they were by comparing the dollar value of the claims they processed to the relative salary of the team. (See app. XIII; a hypothetical illustration of this measure appears below.) Officials from six labs said that developing performance measures and complying with GPRA requirements were integral parts of their reinvention efforts. Labs’ performance-based reform initiatives included (1) developing GPRA performance measures and defining a matrix program of performance-based management techniques, (2) building GPRA requirements into the lab’s strategic planning effort, and (3) integrating planning and performance measurement requirements into a standard agencywide system. However, two survey respondents said that the implementation of GPRA had little effect on their labs because they were already developing and using performance measures.

Less than 6 percent of the respondents said that GPRA had a negative effect on their reinvention labs. These respondents typically said that GPRA was perceived as “busy work” or as having increased the staff’s workload.

In contrast to the respondents’ comments on GPRA, the proportions of positive and negative responses about NPR II restructuring were relatively close—31 and 24 percent, respectively. One respondent said that agency restructuring had resulted in greater cooperation between his lab and OPM on personnel issues. Another respondent said that restructuring provided the framework to take the lab initiative to the next level of improvement. Yet another respondent said that officials at his lab viewed NPR II restructuring as basically a budget exercise.
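To make the VA claims-processing productivity measure mentioned above concrete, consider a worked illustration using purely hypothetical figures (the report does not give actual values): a team that processed claims with a total dollar value of $600,000 during a period in which its members’ salaries totaled $150,000 would score 600,000 / 150,000 = 4.0, while a team that processed $450,000 in claims on a $90,000 payroll would score 450,000 / 90,000 = 5.0 and would therefore rank as the more productive team despite handling a smaller dollar volume of claims.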
In their comments, the survey respondents also mentioned three other barriers to the development of their reinvention labs—lack of interagency coordination, existing legislation, and organizational culture. Several respondents provided examples of the difficulties they experienced in undertaking management reforms that crossed agency boundaries, even when those agencies were within the same department. Other respondents said that existing statutory requirements, which would require an act of Congress to change, had hindered their labs’ performance. Still other survey respondents said that implementation of the reforms in the lab required changing the organizational culture within their agencies—that is, the underlying assumptions, beliefs, values, practices, and expectations of employees and managers.

Many governmental functions are performed by more than one agency or level of government. In some cases, the federal government is addressing very broad issues, such as environmental degradation or the need for job training, that fall within the missions of several agencies. Therefore, similar programs have been established in different federal agencies. Other federal programs require the cooperation of state and local governments. Federal agencies also have similar administrative responsibilities (e.g., personnel, procurement, and contracting) that require the provision of resources in each agency to fulfill those functions. In all of these areas, opportunities exist for greater cooperation and sharing of resources. As noted in chapter 2, at the time of our survey, 54 labs were attempting to develop partnerships with other levels of government, labor organizations, contractors, and/or customers. Other labs were attempting to consolidate activities among different federal organizations.

The survey respondents provided several examples of the difficulties involved in enacting management reforms across agency boundaries. For example, one respondent said that statutes requiring the use of different contracting procedures in different agencies were a significant barrier to his lab’s goal of consolidating multiagency programs. The respondent said that one agency had to use competition when awarding contracts, while other agencies were required to set aside a percentage of contract awards for minority contractors. Officials at the Commerce Department’s Boulder reinvention lab said that they established a multiagency team to address the issue of funding for administrative services. However, they said the team was ultimately disbanded because it could not reach consensus on proposed funding alternatives. According to one lab official, the team lacked the authority needed to push a proposal forward. (See app. III.) Other difficulties that the lab officials described in such multiagency efforts included (1) nonparticipation in or withdrawal from the lab by some relevant agencies, (2) resistance from top management at one or more of the agencies, and (3) failure by some agencies to send staff to NPR-related training courses.

Some of the survey respondents said certain statutory requirements had a negative effect on their labs. For example, some respondents mentioned federal contracting laws as a constraint on reinvention labs. In one case, a lab official said it was difficult to determine the extent of the lab’s authority to reform contracting procedures because of the myriad of contracting statutes.
Another respondent noted that the FAR was designed to prevent close relationships from developing between federal contracting units and contractors. The respondent said this FAR-required “arms length” relationship prevented sharing costs and resources with contractors and was not conducive to cost savings and cycle time reductions. Lab officials at VA’s Clement J. Zablocki Medical Center in Milwaukee provided an interesting example of how such constraints affected the lab’s performance. The officials said VA classifies eyeglasses as a prosthetic device, and statutorily based regulations state that prosthetics can be provided only to veterans with nonservice-related medical conditions who have been admitted to the hospital. Therefore, patients having outpatient cataract surgery must be admitted to the hospital for a 2-day stay in order to receive corrective eyeglasses. Medical center officials said this is an unnecessary and costly requirement, and they have sought a waiver from the regulation.

According to the President, one of the goals of the reinvention effort is changing the culture of the national bureaucracy “away from complacency and entitlement toward initiative and empowerment.” A 1993 NPR report stated that traditional cultural values in the federal government resist change, preserve mistrust and control, and operate within a rigid and hierarchical structure. The report also said that this segmented system creates artificial organizational boundaries that separate staff within and among agencies that work on related problems. Several lab officials indicated that this traditional culture had hindered the process of change in their organizations.

In an attempt to change their units’ culture, several organizations combined organizational restructuring with changes in individual performance measurement systems as a way to reinforce new employee behaviors. This type of organizational restructuring typically involved moving from hierarchical, specialized departments that were responsible for the performance of a single component of a work process (commonly known as stovepipes) to multidisciplinary work teams responsible for the performance of an entire process. To ensure that incentive systems were aligned with restructured operations, labs were evaluating the use of self-directed work teams by

• creating business contracts with built-in product delivery and customer satisfaction targets, with both the customer and the team evaluating the team’s overall performance and each member’s contribution;
• having the team leader conduct evaluations rather than the management of the functional units; and
• creating an award system that ties group awards to the team’s contribution to the achievement of the agency’s goals.

By creating work teams within their organizations, these labs have tried to address the Vice President’s goal to change the culture of the federal government.

The collection and analysis of performance data are key elements in changing the way the federal government operates, particularly when those changes are initiated as pilot projects. At the most basic level, performance data are needed to determine whether the changes being implemented are producing the expected results. If the data indicate that the changes are successful and merit wider implementation, performance data can be used to make a compelling argument for changing what may be long-standing policies and practices.
Because reinvention labs are intended to explore new ways of accomplishing agencies’ existing missions, often on a small scale before broader implementation begins, data about the labs’ performance can be crucial to the labs’ long-range success. Without such data, decisionmakers will not know whether the changes are an improvement over existing practices. Also, without performance data, lab officials will find it difficult to obtain support for full-scale implementation within their agency or for diffusion beyond their agency to other federal entities.

The survey respondents frequently said their labs were collecting various types of performance data. Those labs not collecting data were commonly described as not being sufficiently developed to do so. Where data were collected, the respondents indicated that the data showed the labs were improving productivity and customer service. However, the respondents also frequently said that their labs did not have pre-lab data against which post-lab data could be compared. Some respondents also indicated other problems with their labs’ data collection efforts.

As figure 4.1 shows, over two-thirds of the respondents said that their labs had collected or were collecting some type of performance data. Even those respondents who said data were not being collected generally recognized the importance of such data; over 80 percent said their labs planned to gather the data in the future.

We asked the survey respondents who said their labs were collecting performance data to identify the kinds of data being collected from the following categories: (1) informal, ad hoc comments from staff or customers; (2) customer opinion survey data; (3) staff opinion survey data; (4) output data reflecting the unit’s level of activity or effort (e.g., the number of claims processed); (5) outcome data indicating the unit’s results, effects, or program impacts (e.g., changes in infant mortality rates); and/or (6) some other kind of data. (Survey respondents were allowed to identify more than one type of data for their labs.) The respondents most commonly said their labs were collecting data on the units’ outputs (77 percent) and/or were collecting informal comments from staff or customers (69 percent). Other frequent responses were customer opinion survey data (57 percent), outcome data (52 percent), and staff opinion survey data (40 percent). Many of the labs (88 percent) reported collecting more than one type of data. Of those respondents who said their labs were not collecting performance data, over three-fourths said that it was too early in the reinvention process to do so.

Analysis of the labs’ stage of development and whether or not they collected data supports the lab officials’ opinion that it was too early in the reinvention process to be collecting performance data. As shown in figure 4.2, nearly 90 percent of the labs that were at least fully implemented at the lab site said they had collected or were collecting performance data. In contrast, only about half of the labs in the planning or beginning implementation stages of development had collected or were collecting such data. A more detailed breakdown of the responses from fully implemented labs further demonstrates this relationship between stage of development and data collection. As figure 4.3 shows, although more than three-fourths of the labs implemented at only the lab site were collecting performance data, over 90 percent of the labs implemented at the lab site and beyond were collecting such data.
Therefore, the more developed the lab, the more likely it was to have collected performance data.

Although most of the survey respondents indicated their labs were collecting performance data, 14 percent of the respondents who said their labs were not collecting such data said they did not do so because gathering performance data was not seen as essential to their labs’ efforts. For example, lab officials from GSA’s Mid-Atlantic Regional Office and the Commerce Department’s Boulder reinvention lab said that efforts to measure “obvious improvements” were unnecessary. One official from the Boulder lab said that data collection efforts should be concentrated on those changes in which outcomes are more dubious. Other officials from this lab said that they had planned to use the agency’s Inspector General to monitor the lab’s progress, but the Inspector General told them that many of the lab’s changes were based on common sense and, therefore, did not require measurement to prove their worthiness. (See app. III.) Another 12 percent of the respondents said that they had not collected performance data because they had experienced difficulty in identifying and/or developing appropriate performance measures.

To be valuable, performance data must not only be collected but also be used by decisionmakers to assess the changes being made in agencies’ operations. However, not all of the data the labs collected appear to have been used. For example, officials from USDA’s lab reinventing the baggage inspection operations in Miami said that they had collected data that could have been used to judge the lab’s performance, but the data were never used by anyone in the agency or the lab for that purpose. (See app. II.)

Eighty-two percent of the respondents who said their labs had collected or were collecting performance data said that the data had allowed them to reach conclusions regarding the performance of their labs. Of these respondents who offered an opinion, 98 percent reported improved customer service, nearly 92 percent noted improved productivity in their units, and 84 percent said their labs had improved staff morale. Examples of these reported improvements follow:

• VA’s New York Regional Office claims processing lab said that the average amount of time veterans had to wait before being seen for an interview had been reduced from about 20 minutes before the lab to less than 3 minutes after the lab was established. Lab officials also said that VA employees had greater control and more authority and found their jobs much more satisfying. (See app. XIII.)
• VA’s reinvention lab at the Zablocki Medical Center in Milwaukee said two surveys—one of physicians and the other of patients and their family members—indicated that customer satisfaction had improved as a result of the lab’s effort to coordinate veterans’ outpatient and inpatient care by teaming social workers with primary care physicians. (See app. XII.)
• DOE’s reinvention lab at the Hanford site in Washington State said that the lab had reduced the safeguard and security budget by $29 million over a 4-year period by changing the installation’s security operations from a large paramilitary organization that supported a national defense mission to an industrial-style organization that supports environmental cleanup. (See app. V.)
• HUD’s reinvention lab in Chicago, Milwaukee, and Cleveland said that by developing partnerships with public housing authorities the lab had improved the satisfaction of the public housing residents. Lab officials also said that an overall measure of the public housing authorities’ management performance in such areas as rent collected, condition of the housing units, and operating reserve had improved since the lab was initiated. (See app. VIII.)
• DLA’s lab said it had reduced the agency’s overall pharmaceutical inventories by $48.6 million and achieved similar inventory reductions and cost savings at DOD medical facilities. (See app. IV.)

Respondents frequently said that performance data allowed them to conclude that their labs had improved units’ productivity, customer satisfaction, and staff morale. However, conclusively documenting these improvements may be very difficult. As figure 4.4 indicates, many of the respondents who said their labs were collecting performance data did not collect similar types of data before the start of the lab to serve as a baseline for documenting the labs’ effects. The most common forms of pre-lab performance data (baseline data) that respondents indicated existed concerned informal comments (57 percent of the respondents) and a unit’s outputs (53 percent). Labs reported that they were least likely to have such data on customer (24 percent) and staff (17 percent) opinions.

At the time of our survey, 26 agencies and other federal entities had designated a total of 185 reinvention labs in various parts of the country. The survey respondents indicated that the labs generally were established to do what the Vice President suggested in his April 1993 letter to federal departments and agencies—improve customer service; address specific problems; and, ultimately, improve the operation of federal agencies. Because many of the labs had not been implemented at the time of our review, it is too early to tell whether they will accomplish these goals. Even for the labs that the respondents said had been fully implemented, it may take years before it can be determined whether the changes will have a long-lasting effect on federal agencies beyond the lab site. Also, because there is no specific definition of a reinvention lab or guidance from either the NPR task force or OMB as to how labs should operate, few clear criteria exist against which to judge the labs’ performance.

Nevertheless, some preliminary observations about the labs are possible based on comments the Vice President and others have made about the labs and the information developed during this review. For example, the Vice President said that the labs should ideally be initiated where the government serves the public in a highly visible way. Although virtually all of the survey respondents indicated that improving customer service was a primary goal of their labs, they did not always define their labs’ customers as the public. In fact, lab officials most commonly viewed their labs’ customers as other governmental organizations, and, for some of the labs, a government organization was their only customer. Although the linkage of these labs to the public may not have been as direct as the Vice President envisioned, the public or the agency’s constituency appeared to be at least indirectly served in virtually all of the labs.
Although the survey respondents indicated that the labs’ changes represented a substantially different mode of operation, the scope of the reforms being developed in the labs was relatively narrow compared to the sweeping changes contemplated by GPRA, the NPR II agency-restructuring recommendations, and the congressional proposals to consolidate agencies’ functions or eliminate agencies entirely. However, the labs’ comparatively narrow scope is a natural consequence of the Vice President’s charge that they “reengineer work processes.” Agencies and employees were not asked to suggest macro-level changes, such as whether entire agencies or programs should be abolished or whether multiple agencies should be merged into a single structure. Ultimately, though, the diffusion and widespread adoption of the labs’ reengineering proposals could lead to the “fundamental culture change” that the Vice President envisioned in 1993.

At the beginning of the lab effort, a number of observers indicated that a key factor in the success of the effort would be the labs’ ability to obtain waivers from federal regulations. Although the respondents said many labs sought and received regulatory waivers, a large number of the efforts were implemented without such waivers. Some lab officials said they believed waivers would be needed, but they later discovered that they already had the authority needed to change their work processes. Although some impediments to the labs were clearly real, the experiences of those officials suggest that at least some barriers to organizational change may be more a function of perception than reality.

Most of the survey respondents said they were collecting performance data to measure the effect of their labs’ reinvented work processes. However, some of the respondents’ comments raised questions about their commitment to measuring performance or the quality of the data being collected. Some lab officials said that either they or other agency officials did not believe that the collection of performance data was necessary or worthwhile. Other lab officials said that they had difficulty developing measures of performance or that data had been collected but had not been used by decisionmakers. One of the most common types of data reportedly being collected by the labs was informal comments from customers or staff—anecdotal data that are not measurable and, therefore, may not be convincing to skeptics of the reinvention process.

Of particular concern to us are the labs that were reportedly collecting data about their reinvention efforts but had not collected similar types of data before the start of their labs. Without such pre-lab data, lab officials have no baseline for documenting a lab’s effects and therefore will find it difficult, if not impossible, to reach persuasive conclusions about the lab’s effects. The absence of both pre- and post-lab data will also make it difficult to support expanding a lab’s changes to the rest of its agency or to other organizations. Development of pre-lab performance measures is particularly important for the substantial number of labs reportedly still in the planning stage.

Nevertheless, the reinvention lab effort has produced hundreds of ideas to reengineer work processes and improve agencies’ performance—ideas drawn from employees with hands-on experience in operating government programs.
Many of the labs are addressing issues that are at the cutting edge of government management, such as how agencies can use technology to improve their operations; how they can be more self-sufficient in an era of tight budgetary resources; and how agencies can work in partnership with other agencies, other levels of government, or the private sector to solve problems. This progress notwithstanding, even more innovations are possible in these and other areas as agencies review and rethink their existing work processes. The labs we surveyed were at varying stages of development. About half had not been fully implemented at the lab sites and were still in the planning or developmental stages. The rest of the labs had been fully implemented at the lab sites, and some had proven that the innovations being tested can save money, improve service, and/or increase organizational productivity. However, relatively few of the labs’ proposals had been implemented beyond the original lab site. The types of assistance the labs need depend on their stage of development. Labs that are in the planning or developmental stages need the support, encouragement, and, at times, the protection that a “change agent” in a position of influence can provide. Governmentwide, the Vice President and the NPR task force have attempted to perform that role. There have also been change agents within particular agencies that have encouraged and supported the labs’ development. Labs that have been fully implemented, particularly those that have demonstrated ways to save money and/or improve federal operations, need a different type of assistance if the ideas they represent are to spread beyond the lab sites. Nonlab organizations both within the labs’ agencies and in other agencies need to become aware of the labs, recognize the applicability and value of the ideas the labs represent to their own organizations, and learn from the labs’ experiences. As the Vice President said, for the labs to achieve their full potential they “will need to share what they learn and forge alliances for change.” The real value of the labs will be realized only when the operational improvements they initiated, tested, and validated achieve wider adoption. Also, by learning from the labs’ experiences, other organizations can avoid the pitfalls that some of the labs experienced. Sharing this information will keep other organizations from having to “reinvent the wheel” as they reinvent their work processes. If the changes the labs represent end at the lab sites, a valuable resource will have been wasted. Therefore, communication about the labs is crucial to the long-term success of this part of the overall reinvention effort. However, the survey respondents indicated that relatively few labs have had substantial communication either with other labs or with the NPR task force. Also, although it has encouraged the labs’ development and made certain information available about them, the NPR task force has not actively solicited information from the labs, has encouraged agencies to focus on reinventing rather than reporting, and has not systematically contacted the labs to provide them with information or direction. As a result, the NPR task force was not able to provide us with an accurate listing of all of the labs. The task force’s “hands-off” approach to the reinvention lab effort was a conscious decision by NPR officials not to micromanage the labs and impose a top-down “command and control” structure. 
This approach, while well suited to encouraging and empowering employees and agencies to find the solutions they believe most appropriate for reengineering their work processes, may not be the best strategy for moving the labs’ results beyond their experimental environments. Furthermore, there is no certainty that the NPR task force will still be in existence when some of the labs reach maturity. Therefore, we believe that some type of information “clearinghouse,” placed in a relatively stable environment, is needed to allow other organizations to become aware of the labs and to learn about the labs’ experiences. The clearinghouse could, among other things, provide information and guidance to labs on the development of appropriate performance measures, including baseline data against which the labs’ performance could be judged. A number of federal organizations could conceivably perform this clearinghouse role. For example, OMB’s responsibility for providing management leadership across the executive branch makes it a candidate to serve as the clearinghouse. Other possible candidates include OPM, GSA, the President’s Management Council, or an executive agency interested in tracking innovations.

We recommend that the Director of OMB ensure that a clearinghouse of information about the labs be established. Working with the NPR task force, the Director should identify which agency or other federal entity can effectively serve as that clearinghouse. The clearinghouse should contain information that identifies the location of each lab, the issues being addressed, points of contact for further information about the lab, and any performance information demonstrating the lab’s results.

We provided a draft of this report to the Vice President and the OMB Director for their review and comment. On January 17, 1996, we met with the Senior Policy Advisor to the Vice President for NPR issues and the Deputy Director of the NPR task force. On January 22, 1996, we met with OMB’s Deputy Director for Management. All of the officials indicated that the report was generally accurate, interesting, and helpful. The OMB and NPR Deputy Directors said the report was the most comprehensive analysis of the reinvention labs to date. Certain technical changes the officials suggested were incorporated into the report as appropriate.

In the draft, we recommended that OMB serve as the clearinghouse for information about the labs. All of the officials expressed concerns about this recommendation. The Senior Policy Advisor and the NPR Deputy Director were somewhat concerned that the recommendation might be read as implying that OMB, rather than NPR, should have had responsibility for initiating and promoting reinvention labs. They pointed out that OMB’s historical role, its budget responsibilities, and its statutory management responsibilities compete with its role as a “change agent” fostering innovation. We explained that our recommendation was intended to emphasize OMB’s responsibility to facilitate the dissemination of work process innovations beyond the lab sites, not to make OMB a change agent responsible for initiating the labs. The Senior Policy Advisor and the Deputy Director agreed that this innovation dissemination function is important and agreed that OMB was one place where this responsibility could be placed. The OMB Deputy Director for Management suggested that the recommendation be changed to allow for options other than OMB itself as the clearinghouse.
He said that although OMB has a leadership role to play in this regard, OMB may not be the best candidate to collect and provide information about the labs. Other possible candidates, he said, include the President’s Management Council, other central management agencies, and the Chief Financial Officers Council. We agreed to change the recommendation to state that the OMB Director should ensure that a clearinghouse is established and, working with the NPR task force, should identify the appropriate site for the clearinghouse.

GAO reviewed the National Performance Review’s (NPR) initiative to establish reinvention labs in federal departments and agencies, focusing on: (1) the labs’ developmental status; (2) factors that hindered or assisted their development; (3) whether the labs were collecting performance data; and (4) whether the labs have achieved any results. GAO found that: (1) more than 2 dozen federal agencies and other entities have developed a total of 185 reinvention labs; (2) the labs deal with a variety of issues, from personnel management to improving operations using technology; (3) almost all of the labs consider customer service as their primary goal, and consider other government organizations to be customers; (4) while labs considered management support to be important to lab development, the use of regulatory waivers and communication about the labs’ progress were rarely needed or used; (5) other federal reform efforts, such as downsizing and the implementation of the Government Performance and Results Act, had both positive and negative effects on the labs’ development; (6) labs experienced difficulties in sustaining efforts that crossed agency boundaries or challenged agencies’ existing cultures; (7) over two-thirds of the labs had collected some type of performance data, ranging from information on unit outputs to informal comments from staff and customers, but some lab administrators refused to collect performance data because they believed it was unnecessary or not worthwhile; (8) the performance data are inconclusive, since there are no previous data for comparison and the nature of the data is subjective; (9) the labs have yielded results by improving customer service, increasing unit productivity and employee morale, and reducing costs at some federal sites; and (10) the value of the labs will be realized only when lab efforts proven to be effective spread beyond the lab sites.
In 1995, the Congress passed the ICC Termination Act, which abolished the Interstate Commerce Commission (ICC) and created the Surface Transportation Board (the Board). The act transferred many of ICC’s core rail functions to the Board, including the responsibility to review and approve railroad mergers. The Board has exclusive jurisdiction to review proposed rail mergers, and if approved by the Board, such mergers are exempt from other laws (including federal antitrust laws that would otherwise apply to the transaction) as necessary to carry out the transaction. The Board also conducts oversight of mergers that have been approved. However, there is no statutory requirement for merger oversight. ICC had approximately 400 employees in 1995, its last year of operation. For fiscal year 2001, the Board received an appropriation to support 143 employees.

In October 2000, the Board proposed modifications to its regulations governing major rail consolidations. According to the notice of proposed rulemaking, the Board recognized that current merger regulations are outdated and inappropriate for addressing future major rail mergers that, if approved, would likely result in the creation of two North American transcontinental railroads. In June 2001, the Board adopted final regulations governing proposed major rail consolidations. The final regulations recognize the Board’s concerns about what the appropriate rail merger policy should be in light of a declining number of Class I railroads, the elimination of excess capacity in the industry, and the serious service problems that have accompanied recent rail mergers. The final rules substantially increase the burden on applicants to demonstrate that a merger is in the public interest, in part by providing for enhanced competition and protecting service. The rules also establish a formal annual oversight period of not less than 5 years following a merger’s approval.

The Board is responsible for approving railroad mergers that it finds consistent with the public interest. When necessary and feasible, the Board imposes conditions to mitigate any potential harm to competition. Oversight is designed to ensure that merger conditions have been implemented and that they are meeting their intended purpose. In determining, under the ICC Termination Act of 1995, whether proposed mergers are consistent with the public interest, the Board is required to consider a number of factors that relate to competition. These include the effect of a proposed transaction on the adequacy of transportation to the public; the effect on the public interest of including, or failing to include, other rail carriers in the area involved in the proposed transaction; and the impact of the proposed transaction on competition among rail carriers in the affected region or in the national rail system. The act also establishes a 15-month time limit for the Board to complete its review of accepted applications for mergers between Class I railroads and reach a final decision. Since the Board was created, two applications for merger between Class I railroads have been submitted—Conrail’s acquisition by CSX and Norfolk Southern and the Canadian National/Illinois Central merger—both of which were approved. The Board also approved the Union Pacific’s acquisition of Southern Pacific, an application that had originally been submitted to ICC.
During the merger review process, the Board considers comments and evidence submitted by all interested parties, which, together with the application, form the record upon which the Board bases its decision. The applicants as well as interested parties may submit information on the potential public benefits and potential harm of a proposed merger. Public benefits can include such things as gains in a railroad’s efficiency, cost savings, and enhanced opportunities for single-line service. Potential harm can result from, among other things, reductions in competition and harm to a competing carrier’s ability to provide essential services—that is, services for which there is a public need but for which adequate alternative transportation is not available.

Whenever necessary and feasible, the Board imposes conditions on mergers that it approves so as to mitigate potential harm associated with a merger, including harm to competition. In determining whether to approve a merger and to impose conditions on its approval, the Board’s concern has focused on the preservation of competition and essential services—not on the survival of particular carriers or on enhancing competition. Board officials told us that, while the Board’s efforts to preserve competition have primarily focused on maintaining competitive options for those shippers that could face a reduction in service from two railroads to service by only one railroad, competition that is the result of having two “nearby” railroads has also been preserved. Conditions can include such things as trackage rights, switching arrangements, access to another railroad’s facilities or terminal areas, or divestiture of lines. For example, in the UP/SP merger, the Board granted about 4,000 miles of trackage rights to the Burlington Northern and Santa Fe Railway (BNSF) to address competition-related issues for those rail corridors and shippers that could have potentially faced a reduction in service from two railroads (UP and SP) to service by only one railroad (UP). (See fig. 1.) The Board may also impose privately negotiated settlement agreements as conditions to mergers. The Board will normally impose conditions only when a merger would produce effects harmful to the public interest (such as a significant reduction in competition) and the condition would ameliorate or eliminate these harmful effects. In addition, a condition must be operationally feasible, produce net public benefits, and be tailored to address the adverse effects of a transaction.

If a merger is approved, the Board has broad discretion to impose oversight conditions, as well as flexibility in how it conducts oversight. Such oversight conditions establish the Board’s intent to monitor a merger’s implementation and to conduct annual oversight proceedings (called formal oversight in this report). An oversight condition may also establish a time period during which the Board will monitor the effects of a merger. Although oversight conditions are not necessary for the Board to retain jurisdiction over a merger—particularly with regard to carrying out conditions the Board has imposed—oversight conditions ensure that the Board’s retained jurisdiction will be meaningfully exercised and give parties an added opportunity to demonstrate any specific anticompetitive effects of a merger.
According to the Board, oversight also (1) permits the Board to target potential problem areas for the subsequent imposition of additional conditions if this proves warranted in light of experience, (2) puts applicants on notice that they consummate the transaction subject to reasonable future conditions to mitigate harm in limited areas, and (3) helps to ensure cooperation by the merging carriers in addressing problems and disputes that may arise following merger approval. As such, oversight provides an additional check that Board-approved mergers are in the public interest. When an oversight period ends, the Board has stated that it continues to retain jurisdiction and can reopen a merger proceeding, if necessary, to address concerns pertaining to competition and other problems that might develop. Board officials described postmerger oversight as a process consisting mainly of an annual oversight proceeding. This proceeding is an examination of the implementation of merger conditions and whether conditions have effectively met their intended purpose. Oversight is generally conducted each year for 5 years after a merger has been approved. As part of the oversight proceeding, public comments and supporting information are formally submitted into the record by shippers, carriers, and other interested parties. Periodic progress reports, which provide, among other things, details on the implementation of conditions, are also submitted by merging railroads as required. Board officials told us that reporting requirements are frequently used as part of oversight and that such reporting has served to replace the industry and merger monitoring once conducted by ICC’s field staff. As an adjudicatory body, the Board relies on parties affected by a merger to identify whether a proposed transaction has harmed competition and, if so, to what extent; the Board does not independently collect this type of information. Board officials noted that it has been standard practice in merger oversight to require relevant railroads, such as UP and BNSF in UP/SP oversight, to make available under seal to interested parties the railroads’ confidential 100 percent traffic tapes—tapes that include information such as shipments moved and freight revenue generated—so that parties other than the merging carriers would also have the opportunity to submit postmerger rate analyses to the Board. As part of the oversight process, the Board may consider information obtained from monitoring industry operations, such as service levels, as well as any studies conducted, whether specific to that merger or industrywide. In conducting formal oversight, the Board may modify existing conditions if they are not achieving their intended purpose or may impose additional reporting requirements if necessary. The Board also has the authority to initiate a new proceeding to determine if additional conditions should be imposed to address unforeseen merger-related issues. Board officials noted that the agency engages in other activities associated with oversight. Included are such things as informal monitoring of merging railroads’ operations and service performance and responding to certain filings, such as petitions to clarify or modify a merger condition based on competition-related issues or other claims of merger harm. Although the Board retains some form of oversight jurisdiction for all rail mergers, the use of formal merger oversight has become standard only since the mid-1990s. 
Board officials told us that before 1995, formal postapproval oversight of mergers was rare and was instituted only in unusual situations when strong concerns about competition were present. These officials pointed to only two cases in which a period of formal oversight was imposed prior to 1995: once in 1984 in a rail/barge merger between CSX Corporation and American Commercial Lines, Inc., and again in 1992 as part of the merger of Wisconsin Central Transportation Corporation and Fox Valley & Western, Ltd. Neither case involved the merger of two or more Class I railroads. In both cases, however, oversight conditions were imposed in response to concerns raised about potential harm to competition.

In recent years, in light of the complexity of transactions and the service and competitive issues that have arisen, the Board has expanded its use of formal oversight of railroad mergers. ICC did not impose specific oversight conditions on its approval of the 1995 Burlington Northern and Santa Fe Railway merger because, according to Board officials, there were few concerns raised in that merger about service issues or potential harm to competition. Since August 1995, when the BNSF merger was approved, the Board has imposed oversight on all three Class I railroad mergers that it has approved: the 1996 UP/SP merger, the 1998 Conrail acquisition by CSX and Norfolk Southern, and the 1999 Canadian National/Illinois Central merger. For two of the three transactions (UP/SP and Conrail), the oversight period was set for 5 years. In the third merger—Canadian National and Illinois Central—a 5-year oversight period was established with continuation to be reviewed annually. All three oversight periods are ongoing.

The Board has significant discretion and flexibility to adapt its oversight as circumstances warrant. For example, in conducting oversight in recent years, the Board has, when necessary, incorporated additional monitoring elements, such as added reporting requirements, to supplement its oversight activities. The UP/SP merger provides a good illustration of service monitoring. As the result of a service crisis that developed during the implementation of this merger, the Board required both UP/SP and BNSF to provide weekly and monthly reports to its Office of Compliance and Enforcement—information that, according to Board officials, had never been available before. These reports included statistics on such things as average train speed, cars on line, and terminal dwell time—the time loaded railcars spend in a terminal awaiting continued movement. This information allowed the Board to monitor the operations and service levels of both railroads. Similar reporting requirements were imposed on both CSX and Norfolk Southern in the Conrail merger. In this instance, the Board, anticipating possible transitional service problems during the integration process, required the weekly and monthly reports both to monitor the merger’s implementation and to identify potential service problems. Board officials told us that as a result of the lessons learned in the UP/SP merger, oversight has expanded to incorporate monitoring of operational and service issues—in part to serve as an early warning of problems that might occur during the merger integration process. Future mergers will also be subject to operational monitoring.
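To make one of these reporting metrics concrete, the sketch below computes average terminal dwell time from car-level arrival and departure timestamps. The records, field names, and layout are invented for illustration only; they do not reflect the Board’s actual reporting formats.

```python
# Illustrative sketch: average terminal dwell time from hypothetical car-event
# records. Data and field names are invented, not the Board's reporting format.
from datetime import datetime

events = [
    {"car": "UP1234", "arrived": "2001-03-01 06:00", "departed": "2001-03-02 10:00"},
    {"car": "BN5678", "arrived": "2001-03-01 09:30", "departed": "2001-03-01 21:30"},
]

def hours_between(start: str, end: str) -> float:
    fmt = "%Y-%m-%d %H:%M"
    delta = datetime.strptime(end, fmt) - datetime.strptime(start, fmt)
    return delta.total_seconds() / 3600  # dwell time in hours

dwell_times = [hours_between(e["arrived"], e["departed"]) for e in events]
avg_dwell = sum(dwell_times) / len(dwell_times)
print(f"average terminal dwell time: {avg_dwell:.1f} hours")  # 20.0 hours
```

Other reported statistics, such as average train speed and cars on line, can be aggregated from operational records in the same fashion, yielding one summary figure per weekly or monthly reporting period.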
The merger rules adopted by the Board in June 2001 state that the Board will continue to conduct significant postapproval operational monitoring of mergers to ensure that service levels after a merger are reasonable and adequate.

In general, the Board has found few competition-related problems when conducting oversight of recent mergers but has acted to modify some conditions designed to address such problems when it believed such action was necessary. Even though many of the shipper and railroad trade associations told us that the oversight process is valuable, some shippers and small railroads are dissatisfied with aspects of the Board’s oversight. In addition, some larger carriers are concerned that shippers are using the oversight process to address issues not related to mergers. The Board’s recently adopted merger rules could affect oversight by changing the focus of merger approval toward enhancing rather than preserving competition.

A review of oversight decisions in recent merger cases shows that the Board has found few problems related to competition. Board officials also told us they believe that, to date, the conditions originally imposed on mergers have met their intended purpose and have mitigated any potential harm to competition. In determining whether to modify a condition, the Board reviews the evidence presented, considers the nature and extent of the alleged harm, and assesses what action may be warranted. In general, the Board has not found it necessary to modify or add conditions during oversight of recent mergers. However, the Board has found such action to be appropriate in some cases. For example, in December 1998, the Board added a condition and modified a condition in the UP/SP merger. The added condition addressed traffic congestion in the Houston/Gulf Coast area; the modified condition changed the location where BNSF railcars are transferred to another railroad. Similarly, in 1998 and 1999, the Board modified four conditions in the Conrail transaction. These modifications were designed to preserve competition by, among other things, introducing a second carrier and requiring carriers to negotiate an acceptable transfer point to interchange railcars bound for an Indiana power plant.

Providing specific evidence of harm to competition is critical in obtaining additional Board relief. According to the Board’s decisions, shippers and others have sometimes alleged harm to competition during oversight without presenting specific evidence of such harm. For example, as part of the UP/SP merger, the Board granted over 2,100 miles of trackage rights to BNSF on the Central Corridor to preserve competition for those shippers that could have been reduced from service by two carriers (UP and SP) to service by only one (the merged UP/SP) and for those exclusively served shippers who benefited from having another railroad nearby. Some organizations have asserted that, despite the trackage rights, postmerger competition has not been adequate on this corridor. However, in its UP/SP oversight decisions, the Board has concluded that postmerger competition on this corridor has been adequate, in part because no shippers came forward with specific evidence of harm. In another instance, in the Conrail merger, the Board granted trackage rights to Norfolk Southern to access a power plant in Indiana. In order to use the trackage rights, Norfolk Southern negotiated a fee with CSX.
The power plant owner believed that the negotiated fee was too high to allow adequate competition between the railroads and requested a lower fee so that Norfolk Southern could compete for its business. In denying this request, the Board stated that the evidence of harm presented was not sufficient, in part because both CSX and Norfolk Southern demonstrated that the negotiated fee would amount to only a minimal cost increase ($0.004 per ton) over the amount the Board had previously found to be reasonable. A review of merger oversight documents shows the Board has acted to address competition-related postmerger issues when it believed such action was necessary. For example, during oversight of the Conrail acquisition, the Board reduced fees for trackage rights and switching charged to Canadian Pacific to permit competition between CSX and Canadian Pacific Railway in the Albany, New York, to New York City corridor. Although the Board had initially set these fees in a postmerger decision, the Board later determined that the fees were too high to allow Canadian Pacific to use CSX tracks to provide meaningful competition between the carriers. Consequently, the Board acted to reduce the fees to promote competition. The Board also acted during the Conrail oversight period to void provisions in two contracts between CSX Intermodal, Inc., a rail shipper, and Norfolk Southern that required Norfolk Southern to be the primary carrier of CSX Intermodal goods between northern New Jersey and Chicago during the contract period. Voiding these provisions allowed CSX immediately to compete with Norfolk Southern for these shipments. Shipper and railroad trade associations and railroad companies with whom we spoke believe postmerger oversight is a valuable process. Officials from the National Grain and Feed Association and the National Industrial Transportation League told us that the Board has always been willing to listen to their concerns. Officials from Norfolk Southern and BNSF said the merger oversight process provides shippers and railroads with an opportunity to submit merger-related questions, problems, and concerns. Railroad and railroad association officials stated that the Board acts to protect the interests of the public and the shipping community by allowing railroads and shippers to work together during oversight to resolve actual and potential merger-related problems. Officials from one trade association said that without an oversight process, their members might be faced with a less desirable alternative. For example, officials from the American Chemistry Council told us that the only other option for shippers would be to use the Board’s time-consuming and expensive complaint process. Officials from the American Chemistry Council, as well as officials from UP and BNSF, said a 5-year oversight period has been a benefit to both railroads and shippers. However, an American Chemistry Council official said some mergers may need oversight for a longer or shorter period than 5 years and that it is unclear what type of oversight will occur after the 5-year oversight period for the UP/SP merger expires in 2002. Despite seeing oversight as a valuable process, some shipper and small railroad associations are dissatisfied with aspects of the Board’s oversight procedures. A number of reasons were cited. The Board has been viewed as unresponsive to concerns of shippers and small railroads. 
For example, an official representing the Edison Electric Institute told us that it had expressed concern to the Board in 2000 about the degree of competition for the transport of Utah and Colorado coal in the Central Corridor, but that the Board declined to answer questions about this issue. An official from the American Chemistry Council expressed similar frustration that the Board did not adopt any part of a plan developed by shippers and others to address the Houston/Gulf Coast service crisis that occurred during the implementation of the UP/SP merger. This plan had broad support from both private sector and state government officials. Dissatisfaction was also expressed about the time and resources required for preparing and submitting comments during the postmerger oversight period, especially for small shippers. For example, officials from the Edison Electric Institute and the American Chemistry Council told us that small shippers might not have the time or the money to invest in the formal oversight process. Finally, officials from several shipper associations and the American Short Line and Regional Railroad Association (an association representing smaller railroads) said their members are discouraged from participating in the oversight process, in part because of the reasons cited above.

Although generally satisfied with the Board’s oversight process, officials at some Class I railroads have cited certain drawbacks to it. For example, officials at Norfolk Southern, CSX Transportation, and UP said some shippers use the formal oversight process as a mechanism to raise non-merger-related issues, a practice that they claim has protracted the oversight process. Railroad officials told us that inviting comments by interested parties allows those parties to reintroduce issues that were initially denied during the merger approval process. They noted that, as a result, they must invest their time to address non-merger-related issues. Officials with Norfolk Southern said that if the Board allows parties to reintroduce issues already decided, this could delay implementation of a merger.

Board officials told us that oversight is an open process and anyone can submit comments. The basis for making decisions is the merger and postmerger oversight record, and Board officials said they encourage parties such as shippers, railroads, and others to submit information into the record so that the Board can act with as much information as possible. However, Board officials acknowledged that parties sometimes reargue issues during oversight that were not decided in their favor in the merger decision. For example, in its November 2000 oversight decision in the Canadian National/Illinois Central merger, the Board refused to require that Canadian National sell its share of the Detroit River Tunnel as requested by various parties. The parties were concerned that Canadian National would competitively disadvantage the Detroit River Tunnel by not allowing needed capital investments to be made and by favoring another nearby tunnel it owned. The Board found that this issue was not directly related to the merger and was a matter being privately negotiated between the parties. Finally, Board officials have said the oversight process has evolved over time and the Board has incorporated additional reporting and other requirements to provide more information on actual and potential problems experienced during merger implementation.
Moreover, the Board has focused on preserving, not enhancing, competition and does not seek to restructure the competitive balance of the railroad industry during postmerger oversight.

Both shipper association and railroad officials with whom we spoke recognized that the Board has a limited number of staff to conduct formal oversight. According to officials from the American Short Line and Regional Railroad Association, the Board’s perceived slowness in handling oversight issues may be attributable to the significant amount of information that needs to be processed during the annual oversight proceeding—information that is generally handled by a core team of 15 employees (who, Board officials noted, also work on agency matters other than mergers). Board officials acknowledged that their resources are limited. However, they said oversight offers an open, no-fee process in which any interested party may participate. They also said the Board has issued its decisions in the annual oversight proceedings, as well as in matters involving specific material issues during oversight, in a timely manner.

The rail consolidation rules issued in June 2001 could change how the Board conducts oversight by providing for merger applications to include plans to enhance competition and to ensure reasonable service and by holding applicants accountable if they do not act reasonably to achieve promised merger benefits. Shifting the focus of merger review toward enhancing competition and ensuring reasonable service, as well as including some degree of accountability for postmerger benefits, could require the Board to expend additional time and resources reviewing these issues. For example, the final rules call upon merger applicants to enhance competition so as to offset any negative effects resulting from a merger, such as potential harm to competition and disruptions of service. This could affect the way the Board uses and oversees conditions during the merger approval and oversight processes. Similarly, the rules require railroads to calculate the net public benefits to be gained through a proposed merger and hold them accountable for acting reasonably to achieve these benefits, such as improved service; as part of the general oversight proceeding, the Board will monitor the realization of the claimed merger benefits. These activities would enlarge the current focus of assessing whether conditions are working as intended. In the event that public benefits fail to materialize after a merger is approved, the Board said it would consider the applicant’s proposals for additional measures.

It is not likely that the final merger rules will resolve all concerns expressed by shipper and railroad organizations about oversight. The final rules will not change the basic process established for oversight. While the final rules may address concerns of shippers and railroads about service levels by requiring merger applicants to develop service assurance plans, they will not address more general concerns that the Board is not responsive to their issues. Furthermore, the final rules will not likely address concerns about the time and resources necessary to participate in postmerger oversight. Rather, the amount of time and resources required could increase, given that during oversight the Board will assess enhancement of competition, service issues, and accountability for proposed merger benefits, as well as whether conditions are working as intended.
In addition, issues may continue to be introduced that are not directly related to the merger under review. Board officials said they do not consider participation in oversight to be an expensive or burdensome process. However, they acknowledged that the new merger rules would require applicants to provide more detailed information on competition, service, and benefits as part of the merger application and that the amount of time and resources required during oversight could increase. Finally, the final rules may also not address all of the shippers' concerns about the extent of competition in the rail industry resulting from mergers. While provisions regarding the enhancement of competition may address some competition-related issues, it is not clear how these provisions will be implemented. Both shipper and railroad officials told us that enhanced competition had not been defined in the proposed rules and, therefore, they were not clear how the provisions might affect specific situations involving competition. The final rules acknowledge that the Board cannot predict in advance the type and quantity of competitive enhancements that would be appropriate in a particular merger proposal. Lastly, the new merger rules make clear that the Board will not use its authority to impose conditions during merger approval to provide a broad program of open access.

We analyzed the effects of the 1996 UP/SP merger on rail rates in two selected geographic markets that have high concentrations of shippers that faced going from service by two railroads to service by only one railroad (called 2-to-1 shippers). We found that the merger reduced rail rates for four of the six commodities we reviewed. However, in one instance, the merger placed upward pressure on rates, even though other factors caused overall rate decreases. For the remaining commodity, rates were relatively unchanged. Our analysis illustrates that the Board could make more informed decisions during oversight about whether merger conditions are protecting against harm to competition, as measured by the merger's effect on rates, if it had information that separated rate changes specifically resulting from a merger from rate changes caused by other factors.

A merger reduces the number of rail carriers and can potentially enhance the market power of remaining carriers. This enhanced market power could be used to profitably increase rail rates if no action were taken to preserve competition. Board officials told us that rate trends are a good indicator of postmerger competition. In 1996, UP acquired SP in a transaction that raised significant competition-related issues. This merger encompassed a number of geographic areas where the loss of competition from SP could have reduced the number of carriers from 2 to 1. Most of these areas were in Texas and Louisiana, but some were in the Central Corridor between California and Colorado. (See fig. 1.) In granting trackage rights to BNSF in this merger, the Board sought to replace the competition for potential 2-to-1 shippers in these geographic areas. To understand how the UP/SP merger affected rail rates, we looked at rail rates in two geographic areas—Reno, Nevada, and Salt Lake City, Utah—both in the Central Corridor. We selected these areas because they had high concentrations of potential 2-to-1 shippers and, according to BNSF and UP/SP officials, were less affected by the service crisis that developed during implementation of the UP/SP merger.
They also provided relatively clear examples of where BNSF service substituted for SP service. The primary commodities shipped to and from Reno and Salt Lake City were nonmetallic minerals (such as barites) and chemicals (such as sulfuric acid or sodium). (See table 1.) Farm products (such as corn and wheat) accounted for about 13 percent of the traffic shipped to Salt Lake City. We also included coal in our analysis of Salt Lake City rail rates, since it accounted for the highest percentage of carloads shipped to and from that area. However, BNSF officials told us that, in general, they have not yet used the trackage rights they were granted to transport coal to or from the Salt Lake City area. In its decision approving the UP/SP merger, the Board noted that BNSF was granted access to only a small portion of coal traffic on the Central Corridor, mostly in the northwestern section of Utah. As the table shows, the potential 2-to-1 shippers served by BNSF, as a percentage of total shippers in these geographic areas, ranged from 10 to 22 percent. This is consistent with comments made by Board officials that BNSF received trackage rights to serve about 20 percent of the postmerger UP/SP traffic on the Central Corridor.

Our analysis found that, by itself, the merger would have reduced rates for four of the six commodities shipped to or from the geographic areas we chose. (See table 2.) Specifically, the merger would have reduced rates for coal shipments to and from the Salt Lake City area (by 8 percent and 10 percent, respectively), chemical shipments from the Salt Lake City area (by 6 percent), and farm products to the Salt Lake City area (by 5 percent). However, the rates for shipments of chemicals to the Reno area would have increased by 21 percent because of the merger, while rates for shipments of nonmetallic minerals originating in the Reno area would have been relatively unchanged by the merger (i.e., the merger-related change was not statistically significant). The effect of a merger on rail rates depends on the cost savings the merger might generate relative to the exercise of any enhanced market power by the railroad carriers. Since the Board acted to preserve the level of competition by granting trackage rights to BNSF to serve potential 2-to-1 shippers in these geographic areas, the rate decreases from the merger likely reflect cost savings from the consolidation. Another way in which the merger could result in lower rates is if BNSF provided more effective competition to UP in the postmerger period than SP did in the premerger period.

While the effects of a merger can put downward (or upward) pressure on rates, an analysis focused on overall rate changes alone could lead to an inaccurate conclusion about whether conditions imposed on a merger to mitigate potential harm to competition have been effective. The results of our analysis indicate that, in addition to merger effects, other factors, such as the volume of shipments, had an equal or greater influence on overall rate changes for the specific movements we examined. In some cases, the effects of these other factors were strong enough to offset or even reverse the downward pressure of the merger on rates. (See table 2.) For example, for shipments of chemicals from the Salt Lake City area and for shipments of coal to and from the Salt Lake City area, while the merger alone would have decreased rates, the rates nevertheless increased overall.
On the other hand, while rates decreased overall for chemical shipments to the Reno area, the merger by itself put an upward pressure on rates. Finally, we found that postmerger rates for potential 2-to-1 shippers (served by BNSF) in the Reno and Salt Lake City areas decreased for one of the commodities we looked at but were essentially unchanged in three other instances. (See table 3.) The rate changes for potential 2-to-1 shippers (served by BNSF) shipping chemicals from the Salt Lake City area were about 16 percentage points lower than the rate changes for shippers of similar products served solely by UP. However, rate changes for potential 2-to-1 shippers (served by BNSF) who shipped farm products to the Salt Lake City area, nonmetallic minerals from the Reno area, and chemicals to the Reno area were all higher than for shippers served exclusively by UP, but these differences were not statistically significant, meaning that the rate changes were essentially the same. These results are not wholly unexpected, since the levels of rail competition for the two kinds of shippers—potential 2-to-1 and non-2-to-1—differ and rail rates are set using differential pricing. Under differential pricing, shippers with less effective transportation alternatives generally pay a proportionately greater share of a railroad's fixed costs than shippers with more effective transportation alternatives.

There are limitations in the analysis and data we used. The results presented are only for the two geographic markets we reviewed and cannot be generalized to other geographic locations or to rate changes from the UP/SP merger as a whole. In addition, although econometric models of the factors that determine rail rates have been used to analyze a variety of policy-related issues in rail transportation and have been useful, such a model can be sensitive to how it is specified. We tested the model's key results to ensure that our findings were reliable and are confident that the results are reasonable for the commodities in the geographic areas we examined. Finally, the Carload Waybill Sample data used in our model also have limitations. For example, these data do not necessarily reflect discounts or other rate adjustments that might be made retroactively by carriers to shippers exceeding certain volume requirements.

Our analysis provides an example of how rates subject to merger conditions could be analyzed. Although the results in this study are not directly comparable to those in other studies of rates that are based on broader geographic areas, our analysis suggests that overall rate changes do not identify the specific impact of mergers on rates. In general, the Board has been presented with rate studies that have focused on overall rate changes, not on the portion of changes caused by a merger. For example, rate studies prepared by UP during merger oversight indicate that, overall, rates decreased immediately after the merger and have continued to decrease at 2-to-1 points and for traffic moving in the Houston-Memphis and Houston-New Orleans corridors. Similarly, both CSX and Norfolk Southern have conducted studies of rail rates in the Buffalo, New York, area since their acquisition of Conrail in 1999. Again, these studies have focused on the overall direction of rate changes and have shown that rail rates in the Buffalo area have generally decreased.
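The distinction between overall rate changes and merger-specific rate changes can be made concrete with a back-of-the-envelope decomposition. This is an illustrative sketch using the approximate Salt Lake City coal figures discussed later in this report, not precise estimates. In a log-linear rate model, the overall change in a rate is the sum of the merger effect and the combined effect of all other factors:

\[
\Delta \ln(\text{rate})_{\text{overall}} \;=\; \Delta \ln(\text{rate})_{\text{merger}} \;+\; \Delta \ln(\text{rate})_{\text{other factors}}
\]
\[
+10\% \;\approx\; (-10\%) \;+\; (+20\%)
\]

An observer looking only at the overall 10 percent increase would conclude that the merger raised rates, when the merger's own contribution was negative and other factors contributed roughly 20 percentage points of increase.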
Neither the UP nor the CSX/Norfolk Southern rate studies identified the specific effects of mergers on rates—effects that could have differed from the overall rate trends. According to Board officials, in general, the parties in merger oversight proceedings have focused on determining the overall magnitude and direction of rate changes without trying to relate such changes to specific causes, and the Board's own December 2000 staff study of nationwide changes in rail rates took this approach. Board officials said they have attempted to take into account, in the context of postmerger oversight, such non-merger-related factors as the recent significant rise in diesel fuel prices but have not been presented with an econometric approach to analyze rail rates in the context of merger oversight. They said that they had questions and concerns about the precision and reliability of the analysis we conducted. However, the Board is amenable to seeing this general approach developed in the context of a public merger oversight record where it would be subject to scrutiny and refinement by relevant parties. Board officials noted that presenting and rebutting econometric studies, because of their sophisticated nature, could increase the burden of participating in the merger oversight process. It is important to note that the Board, in approving the UP/SP merger, was provided with various empirical rate studies by the applicants and interested parties that included econometric analyses. In addition, econometric evidence has played an important role in merger-related cases that have been reviewed by courts and other government agencies.

As an adjudicatory agency, the Board relies on affected parties to identify alleged harm when it exercises oversight to ensure that conditions imposed in railroad mergers are working and that competition has not been harmed. Therefore, it is necessary for shippers, railroads, or others not only to identify instances when they have been, or might be, harmed, but also to present evidence to the Board demonstrating this harm. For the Board to make sound decisions about the extent to which mergers affect rate changes, it should have information that separately identifies the factors that affect rates and the specific impact of these factors. Without such information, the Board's ability to evaluate whether merger conditions have been effective in protecting against potential harm to competition may be limited.

To better assist the Board in the oversight of railroad mergers and in ensuring that conditions imposed in such mergers protect against potential harm to competition, we recommend that the Board, when appropriate, require railroads and others to provide information that separately identifies the factors affecting postmerger changes in rail rates and the specific impact of these factors on rate changes. In particular, the Board, when appropriate, should require railroads and others to provide information that identifies the effects of mergers on changes to rail rates, particularly in those geographic areas subject to potential reductions in competition. This information should be considered in deliberations on the need to modify conditions, add reporting requirements, or initiate proceedings to determine if additional conditions are required to address competition-related issues.

We provided a draft of this report to the Surface Transportation Board and the Department of Transportation for their review and comment.
The Board did not express an overall opinion on the draft report, but rather supplied suggested revisions to it. Most importantly, while the Board is amenable to seeing an econometric approach developed in the context of a public oversight record, it commented that such an approach could increase the burden on the parties participating in the merger oversight process. This increased burden might occur because of the effort entailed to develop, present, and rebut econometric studies. We agree that an increased burden might occur and incorporated this view into our report. Allowing parties to critique the usefulness of our recommendation and the effort involved in implementing it should provide the Board with the information it needs on implementation. The Board offered extensive clarifying, presentational, and technical comments, which, with few exceptions, we incorporated into our report. The Department of Transportation did not express an overall opinion on the draft report. Its comments were limited to noting that several Class I railroads were under common control. We incorporated this change into our report.

As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 15 days after the date of this letter. At that time, we will send copies of the report to congressional committees with responsibilities for transportation issues; the Secretary of Transportation; the Acting Administrator of the Federal Railroad Administration; the Chairman of the Surface Transportation Board; and the Director, Office of Management and Budget. We will also make copies available to others upon request. This report will also be available on our home page at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834. Key contributors to this report were Stephen Brown, Helen Desaulniers, Leonard Ellis, John Karikari, Tina Kinney, Richard Jorgenson, Mehrzad Nadji, Melissa Pickworth, James Ratzenberger, and Phyllis Scheinberg.

Burlington Northern/Santa Fe merger: approved August 16, 1995; 35,400 miles; Western United States and Canada; $1.3 billion, plus assumed liabilities. Largely end-to-end. However, in approving this merger, ICC found that of the approximately 29 locations that were served by both railroads, only a few would have potentially sustained harm from reduced competition, given the presence of other railroads and of extensive truck competition at many of the locations. Conditions were attached to preserve competition where necessary.

Union Pacific/Southern Pacific merger: approved August 6, 1996; 38,654 miles; Western United States; $3.3 billion in cash and stock, plus assumed liabilities. Significant parallel components. In approving this merger, the Board granted about 4,000 miles of trackage rights to BNSF and other railroads to protect potential 2-to-1 shippers and others from loss of competition.

No Class I merger transactions.

CSX/Norfolk Southern acquisition of Conrail: about 21,800 miles; Eastern United States and Canada; $9.9 billion, plus assumed liabilities and fees. Largely end-to-end. Although CSX Corporation and Norfolk Southern Corporation jointly acquired Conrail and then divided most of the assets between them, Conrail continues to operate certain shared assets areas for the joint benefit of CSX and Norfolk Southern. These shared assets areas are located in North Jersey (generally from northern New Jersey to Trenton, New Jersey), South Jersey/Philadelphia (generally from Trenton, New Jersey, to Philadelphia and southern New Jersey), and Detroit.
Both CSX and Norfolk Southern have the right to operate their own trains, with their own crews and equipment and at their own expense, over any track included in the shared assets areas. Various other areas formerly operated by Conrail are subject to special arrangements that provide for a sharing of routes or facilities to a certain extent. For example, the Monongahela Area in Pennsylvania and West Virginia, although conveyed to Norfolk Southern, is available to CSX on an equal-access basis for 25 years, subject to renewal.

Canadian National/Illinois Central merger: approved May 21, 1999; 18,670 miles; Midwestern United States and Canada; $1.8 billion, plus the value of 10.1 million common shares of Canadian National stock. End-to-end.

No Class I merger transactions.

No Class I merger transactions proposed through June 2001.

Our review focused primarily on the Board's oversight of Class I railroad mergers that occurred since its creation in January 1996. These mergers included (1) the Union Pacific Railroad Company (UP) with the Southern Pacific Transportation Company (SP), (2) the Canadian National Railway Company with the Illinois Central Railroad, and (3) the acquisition of the Consolidated Rail Corporation (Conrail) by CSX Transportation, Inc., and the Norfolk Southern Corporation. However, to aid in showing how merger oversight has changed over time, we also included information on the Burlington Northern Railroad Company merger with the Atchison Topeka and Santa Fe Railway Company, which was approved by ICC in August 1995.

To address the role of the Board in approving and overseeing railroad mergers and to determine how merger oversight is conducted, we reviewed relevant laws and regulations and analyzed documents prepared by the Board addressing its merger authority and functions. We also discussed with the Board's staff how merger oversight is conducted and how such oversight has changed over time. In addition, we discussed with the Board's staff the activities conducted as part of formal oversight—that is, activities included in an annual general oversight proceeding—as well as informal oversight activities (such as monitoring of railroad performance data) associated with mergers.

To address how the Board acts to mitigate potential merger-related harm to competition, we reviewed documents contained in its merger dockets, including merger approval and oversight decisions and progress reports filed by merged railroads. We discussed with Board officials how oversight of conditions is conducted and the factors considered by the Board in determining if conditions imposed have been effective in mitigating potential harm to competition. We also discussed oversight issues with various trade associations representing shipper and railroad interests as well as with officials from Class I railroads. (The organizations we contacted are listed at the end of this appendix.) The shipper trade associations represented major commodities shipped by rail.

Finally, to identify how merger oversight might change in the future, we reviewed the Board's notice of proposed rulemaking on major rail consolidations published in October 2000 and the final regulations issued in June 2001. We discussed with the Board how the final merger rules differed from the proposed rules.

To address how the UP/SP merger affected rail rates in selected geographic areas, we obtained data from the Board's Carload Waybill Sample for the years 1994 through 1999.
The Carload Waybill Sample is a sample of railroad waybills (in general, documents prepared from bills of lading authorizing railroads to move shipments and collect freight charges) submitted by railroads annually. We used these data to obtain information on rail rates charged by different railroads for specific commodities in specific markets subject to potential reduction in competition in the UP/SP merger. We focused on this merger because it was identified by the Board as having significant competition-related issues, especially in the number of shippers potentially going from service by two railroads to service by only one railroad (called 2-to-1 shippers). Using documents submitted by the Union Pacific Railroad, as well as discussions with officials from both the Union Pacific Railroad and the Burlington Northern and Santa Fe Railway, we identified those locations and corridors containing the majority of potential 2-to-1 shippers. Using economic areas defined by the Department of Commerce's Bureau of Economic Analysis, our analysis focused on those economic areas containing the majority of these potential 2-to-1 shippers.

We used the Carload Waybill Sample instead of more specific data on rates for individual shippers because of the lack of sufficient premerger rate data from SP's operations. Although it is possible to get rates for 2-to-1 shippers from the Carload Waybill Sample, the sample is not designed for use in analyzing rates for specific shippers. However, the sample can be used to analyze rail rates within and between geographic areas. For these reasons, we used economic areas containing a majority of potential 2-to-1 points in conjunction with the Carload Waybill Sample to conduct our analysis. The rate data obtained from the Carload Waybill Sample were then used in an econometric model that analyzed the effects of the UP/SP merger on changes to rail rates for various commodity shipments to and from the economic areas with the majority of potential 2-to-1 shippers. A detailed description and discussion of this model can be found in appendix III.

Some railroad movements contained in the Carload Waybill Sample are governed by contracts between shippers and railroads. To avoid disclosure of confidential business information, the Board provides for railroads to mask the revenues associated with these movements prior to making this information available to the public. We obtained a version of the Carload Waybill Sample that did not mask revenues associated with railroad movements made under contract. Therefore, the rate analysis in this report presents a truer picture of rail rates than analyses based solely on publicly available information. There are also limitations associated with data from the Carload Waybill Sample. For example, according to Board officials, revenues derived from this sample are not adjusted for such things as year-end discounts and refunds that may be provided by railroads to shippers that exceed certain volume requirements. However, both Board and railroad officials agreed that, given the lack of sufficient premerger SP data, the Carload Waybill Sample was the best data source available for conducting our analysis.

We performed our work from July 2000 through June 2001 in accordance with generally accepted government auditing standards.

Railroads contacted: Burlington Northern and Santa Fe Railway Co.; CSX Transportation, Inc.; Norfolk Southern Corporation; Union Pacific Railroad Co.
This appendix describes and discusses our analysis of the effects of the 1996 UP/SP merger on rail rates in selected geographic areas where the merger had the potential for harm to competition because 2-to-1 shippers could have lost one of the two railroad carriers upon which they had relied. In particular, we discuss (1) the econometric model we developed to analyze separately the effects of the merger and of other factors on rail rates, (2) the construction of the data used for the analysis, and (3) our analysis, including a comparison of overall changes in rates, based on mean-difference analysis, with the results of the econometric model.

We developed an econometric model to examine both the specific impact of the 1996 UP/SP merger and the impact of other factors on rates in selected geographic areas where competition could have been potentially reduced. In developing the model, we focused on the trackage rights granted to BNSF by the Board and applied existing empirical literature on how rail rates are determined. The UP/SP merger covered areas where the services provided by UP overlapped those provided by SP. As a result, some rail shippers could have gone from being directly served by both SP and UP to being directly served by UP only. To preserve competition in those potential 2-to-1 situations, and for those shippers exclusively served by UP or SP who benefited from having another independent railroad nearby, the Board granted trackage rights to BNSF to replace the competition that would be lost when SP was absorbed by UP. As in previous studies, we use an econometric model to identify the factors affecting rail rates following the UP/SP merger—rail rates being the dependent variable used in the model.

Rail Rates: We measured rail rates—the freight rate charged by a railroad to haul a commodity from an origin to a destination—by revenue per ton-mile, adjusted for inflation. We used data from 1994 and 1995 for the premerger period, and data from 1997 through 1999 for the postmerger period. We excluded 1996 data, since the UP/SP merger was approved in August 1996. We also excluded shipments with rail transportation charges less than $20,000 (in 1996 dollars) in order to focus on the major movements. The level of each observation was shipments at the 7-digit Standard Transportation Commodity Code—a classification system used to group similar types of commodities such as grains—between an origin and a destination. The factors that explained the rail rates were generally those related to market structure and regulatory conditions, as well as cost and demand factors.

Market Structure and Regulatory Conditions: We included the variable MERGER to capture the effect of the merger on rates. The extent of rail competition is also expected to affect rail rates. To capture this influence, we used a variable, RAILROAD-BNSF, that reflects the difference between rates charged to shippers with competitive options—SP and UP before the merger, and BNSF and UP afterward—and rates charged to shippers served solely by one railroad both before and after the merger.

Cost and Demand Factors: These factors are generally captured by the shipment and shipper characteristics of the traffic.
As in previous studies, we use the following variables to measure the influence of cost and demand factors: variable cost per ton-mile (COST), the weight of shipments (TON), the length of haul (DISTANCE), the annual tonnage shipped between an origin-destination pair (DENSITY), and OWNERSHIP of railcars. In addition to the explanatory factors mentioned above, we included the following factors. First, we introduced a variable for contract rates (CONTRACT) to account for possible differences between contract rates and noncontract rates. Second, we included a variable to account for the possible effects of the service crisis that arose after the merger and lasted through 1998 (CRISIS). Third, following previous studies, we included the squared terms for the variables TON (TON_SQ) and DISTANCE (DISTANCE_SQ) to account for possible nonlinear relationships between these variables and rates. We also included dummy variables for the major commodity groups (COMMODITY) where appropriate.

We selected geographic markets that had high concentrations of potential 2-to-1 shippers because of the possibility for harm to competition in those areas. Using the Carload Waybill Sample, we performed several data-processing tasks that included matching similar sets of traffic before and after the merger, and selecting the primary commodities that were shipped, based on carloads, for analysis. All the data used for the study were constructed from the Carload Waybill Sample, which is a sample of railroad waybills (in general, documents prepared from bills of lading that authorize railroads to move shipments and collect freight charges) that are submitted annually by the railroads. However, there are limitations in using the Carload Waybill Sample for rate analysis. Among these limitations is that no specific information is provided about the identity of the shippers. This makes it difficult to identify potential 2-to-1 traffic by shipper name. Also, data for rates for shipments moved under contract between railroads and shippers (called contract rates), which are masked or disguised in the Carload Waybill Sample, may be incomplete.

We selected the Reno, Nevada, and Salt Lake City, Utah, business economic areas, which are in the Central Corridor and which had high concentrations of potential 2-to-1 shippers. Both SP and UP served these two areas prior to the merger; BNSF service was not available in the area at that time. Also, according to BNSF officials, the Central Corridor was relatively less affected by the service crisis that emerged after the UP/SP merger. In addition, UP fully integrated its computer and information systems with SP in the Central Corridor much earlier than in the other regions, making rate and other data there more reliable.

However, there are limitations in using the Central Corridor to illustrate the possible effects of the UP/SP merger on rates. According to the Board, BNSF generally had problems ramping up its trackage-rights service in the Central Corridor. Also, the Reno and Salt Lake City areas are not typical rail hubs, because the traffic to and from these areas is not high volume compared with other areas, such as the Houston-Gulf Coast area. Despite these limitations, the two selected areas provide an opportunity to illustrate the impact of the UP/SP merger on rates in predominantly potential 2-to-1 situations. We performed several tasks to organize the Carload Waybill Sample for our analysis.
We identified traffic by origin and destination, and at the 7-digit Standard Transportation Commodity Code level, separately for periods before the merger and periods after the merger. We then matched similar sets of railroad traffic existing before and after the merger. The matching involved shipments that we could determine, on a commodity and origin-and-destination basis, were made in both periods. To help identify traffic associated with BNSF's trackage rights, we also identified the railroad carrier(s) associated with the shipments that we matched for both periods. There were two Class I railroads serving the two geographic areas before the merger (SP and UP). After the UP/SP merger, all the traffic belonging to SP and UP came under the merged UP's sole control, except for potential 2-to-1 shippers and shippers that could take advantage of such provisions as build-in/build-out and new facilities conditions. As a result of the trackage rights imposed by the Board as part of the merger conditions, BNSF obtained access to the potential 2-to-1 traffic, regardless of whether the traffic had been carried by SP or UP prior to the merger. Our matching process was intended to identify this potential 2-to-1 traffic. The matching was done in the following sequence (a simplified code sketch appears at the end of this discussion):

1. SP premerger traffic was matched to BNSF postmerger traffic—this is BNSF trackage rights over SP (BNSF-SP).
2. UP premerger traffic was matched to BNSF postmerger traffic that was still unmatched—this is BNSF trackage rights over UP (BNSF-UP).
3. SP premerger traffic that was still unmatched was matched to UP postmerger traffic—this is UP traffic over SP (UP-SP).
4. UP premerger traffic that was still unmatched was matched to UP postmerger traffic that was still unmatched—this is UP traffic over UP (UP-UP).

The BNSF-SP and BNSF-UP traffic (henceforth BNSF) consists of only potential 2-to-1 traffic that was served by SP or UP before the merger but served by BNSF in the postmerger period. The UP-SP and UP-UP traffic (henceforth UP) includes potential 2-to-1 traffic as well as non-2-to-1 traffic. However, according to UP officials, the latter traffic substantially comprises shippers that are served solely by one railroad because they could be served in the premerger period only by UP or SP, but not both, and in the postmerger period, only by UP. The two broad types of shippers identified reflect different levels of rail competition. The potential 2-to-1 traffic (served by BNSF) is considered more competitive than the traffic served solely by UP because direct rail competition was preserved or maintained for the potential 2-to-1 shippers, while the traffic served solely by UP had only indirect competition, which was preserved through build-in/build-out and new facilities conditions. Finally, because our study focuses on potential 2-to-1 shippers, we included only the commodity groups for which BNSF had a presence. Although BNSF officials told us they had not aggressively exercised their trackage rights for coal shipments in the Salt Lake City area, we included these shipments because coal is a major commodity shipped to and from the Salt Lake City area. Summary statistics of the commodities shipped to and from the Salt Lake City and Reno economic areas are provided in tables 4 and 5. The commodities include coal, chemicals, primary metals, farm products (such as corn and wheat), petroleum/coal, food, nonmetallic minerals, lumber/wood, and stone/clay/glass/concrete.
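The four-step matching sequence above could be implemented roughly as follows. This is a minimal pandas sketch under stated assumptions: the file names, column names, and layout are hypothetical, and the actual Carload Waybill Sample differs.

```python
# Minimal sketch of the four-step premerger/postmerger traffic matching.
# All file and column names are hypothetical, not the Waybill Sample's layout.
import pandas as pd

KEYS = ["origin_bea", "dest_bea", "stcc7"]  # origin area, destination area, 7-digit commodity

pre = pd.read_csv("waybill_1994_95.csv")    # premerger movements (hypothetical file)
post = pd.read_csv("waybill_1997_99.csv")   # postmerger movements (hypothetical file)

def take_matches(pre_rr, post_rr, label, pre_pool, post_pool, out):
    """Match still-unmatched premerger traffic of pre_rr to still-unmatched
    postmerger traffic of post_rr on origin, destination, and commodity;
    tag the matched lanes and drop them from both pools."""
    matched = (
        pre_pool.loc[pre_pool.railroad == pre_rr, KEYS].drop_duplicates()
        .merge(post_pool.loc[post_pool.railroad == post_rr, KEYS].drop_duplicates(),
               on=KEYS)
    )
    # keep the postmerger movements on the matched lanes, tagged by group
    out.append(post_pool.merge(matched, on=KEYS).assign(traffic_type=label))

    def drop_matched(df):  # anti-join: remove matched lanes from a pool
        flagged = df.merge(matched, on=KEYS, how="left", indicator=True)
        return flagged[flagged["_merge"] == "left_only"].drop(columns="_merge")

    return drop_matched(pre_pool), drop_matched(post_pool)

groups = []
pre_pool, post_pool = pre, post
for pre_rr, post_rr, label in [
    ("SP", "BNSF", "BNSF-SP"),  # 1. SP premerger -> BNSF postmerger
    ("UP", "BNSF", "BNSF-UP"),  # 2. UP premerger -> remaining BNSF postmerger
    ("SP", "UP", "UP-SP"),      # 3. remaining SP premerger -> UP postmerger
    ("UP", "UP", "UP-UP"),      # 4. remaining UP premerger -> remaining UP postmerger
]:
    pre_pool, post_pool = take_matches(pre_rr, post_rr, label, pre_pool, post_pool, groups)

matched_traffic = pd.concat(groups)  # the BNSF-* groups are the potential 2-to-1 traffic
```

The sequencing matters: each step only sees lanes left unmatched by the earlier steps, which is what lets the BNSF-SP and BNSF-UP groups isolate the potential 2-to-1 traffic.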
Each of the commodities listed above accounted for at least 10 percent of the traffic to or from an area. The share of BNSF's potential 2-to-1 shippers among all shippers was mostly between 10 and 25 percent. (See table 4.) Also, the rail rates and the direct costs for the total traffic were very similar to the rates for the matched traffic. (See table 5.)

The econometric model that we developed was estimated using an estimation technique appropriate to the sample design. We also discuss the results of our study in terms of the effects on rail rates attributable to the merger and the effects of other factors. We used a reduced-form rate model of shipping a commodity between an origin and a destination because such a model is useful for analyzing the impact of a regulatory policy, such as a merger, on rates. The service crisis of 1997 and 1998 could potentially make the estimation results less reliable because the rates may not be at the market-clearing level. However, we included a CRISIS variable to account for this possible structural shift. The reduced-form model we used was as follows:

ln(RATE_i) = β_0 + β_1 MERGER_i + β_2 RAILROAD-BNSF_i + β_3 COST_i + β_4 TON_i + β_5 TON_SQ_i + β_6 DISTANCE_i + β_7 DISTANCE_SQ_i + β_8 DENSITY_i + β_9 OWNERSHIP_i + β_10 CONTRACT_i + β_11 CRISIS_i + Σ_j γ_j COMMODITY_ij + ε_i

The term "ln" is a natural logarithm, and "i" is representative of a commodity group. The β's are parameters to be estimated, and ε is the random-error term. A complete list of the variables used to estimate the regression model is presented in table 6. We could not directly incorporate certain factors into the model, primarily because of data limitations.

We estimated the regression model using the SAS SURVEYREG procedure, since the data are from stratified samples; an illustrative sketch of this estimation step follows this discussion. This procedure is appropriate for dealing with a stratified sample because it adjusts both the coefficients and the standard errors of the estimates to account for the sampling design. The econometric model was run for different samples—shipments of the primary commodities to or from an economic area, and for subsamples of individual commodities and shippers.

We tried different specifications of our basic model to check the robustness of our key model results. We found that the results were not highly sensitive to model specification. While we used a reduced-form specification, it is still possible that some of the explanatory variables on the right-hand side of the equation may be endogenous. Since there are no available instruments in a reduced-form model, we could not perform the usual test for endogeneity. Rather, we checked the robustness of our results by excluding possibly endogenous variables. In particular, when DENSITY was excluded from the model, our findings regarding the effects of mergers on rates and the effects of the other factors on rates were essentially unchanged. It is also likely that COST is related to the variables TON, DISTANCE, and OWNERSHIP, which could produce unreliable results. In other specifications of the model, we eliminated the COST variable, but our key findings were robust to such specifications.

Summaries of the effects of the merger on rates, based on the econometric results, are presented in table 7. The rates for shipments to and from the Reno and Salt Lake City areas generally would have declined for all the shippers as a result of the merger, especially in the Salt Lake City area. Although the effects of the merger on rates depend on both the potential cost savings from the merger and the exercise of any enhanced market power by the railroads, the UP/SP merger is generally expected to lower rates in those areas where the Board imposed trackage rights.
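The estimation step could be sketched as follows. This is a minimal example, not the report's actual code: the file and variable names are hypothetical (they mirror the table 6 variables), and weighted least squares with a 'weight' column standing in for the sample's expansion factors only approximates SURVEYREG's full design-based adjustment of standard errors.

```python
# Minimal statsmodels sketch of the reduced-form rate regression described above.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("matched_rates.csv")            # hypothetical analysis file
df["ln_rate"] = np.log(df["rev_per_ton_mile"])   # inflation-adjusted revenue per ton-mile
df["ton_sq"] = df["ton"] ** 2                    # squared terms capture nonlinearities
df["distance_sq"] = df["distance"] ** 2

result = smf.wls(
    "ln_rate ~ merger + railroad_bnsf + cost + ton + ton_sq"
    " + distance + distance_sq + density + ownership + contract"
    " + crisis + C(commodity)",                  # commodity-group dummies
    data=df,
    weights=df["weight"],                        # stand-in for stratified expansion factors
).fit()

print(result.summary())
# The coefficient on 'merger' isolates the merger's own effect on (log) rates,
# holding the cost, shipment, and demand factors constant.
```

The key design point, whatever the software, is that the merger indicator is estimated jointly with the other rate determinants, so its coefficient measures the merger effect net of those factors rather than the overall rate trend.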
We also compared the effects of the merger on rates charged to potential 2-to-1 shippers served by BNSF with rates charged to shippers served solely by UP in the same general locations. In particular, the results show that the rates charged to the potential 2-to-1 shippers served by BNSF were lower than the rates charged to the shippers served solely by UP for shipments of chemicals from the Salt Lake City area. The rate differentials for the Reno area were positive, but none was statistically significant. The result that rates for the potential 2-to-1 shippers served by BNSF were generally lower than rates charged to shippers served solely by UP is consistent with demand-based differential pricing, which reflects the differing transportation alternatives available to shippers.

We found that the effects of other factors on rail rates during the period are generally consistent with what has been found in previous studies. (See results in tables 8 through 11 for all commodities.) We used the econometric results for all the commodities because most of these effects are not commodity-specific and can be better captured across commodities. The impact of COST on rates was positive and significant for traffic in each of the selected areas, meaning that rates were lower (or higher) as costs decreased (or increased). TON had mixed results, meaning that larger shipment volumes sometimes resulted in higher or lower rates. DISTANCE generally decreased rates. DENSITY, which captures the volume of traffic on the route used for a particular shipment, unambiguously decreased rates. This effect is consistent with decreasing costs in railroad operations, since increased shipment levels over a rail route spread fixed costs over larger volumes and reduce rates. OWNERSHIP had mixed results. CONTRACT rates were generally lower. Finally, the impact of CRISIS on rates was generally inconclusive. This is not unexpected, since most shipments are under contract and the crisis affected primarily the services that were provided rather than the rates.

To compare the changes in rates due to the merger that we obtained from the econometric analysis with the overall changes in rates, we separated the overall changes in rates into changes due to the merger and changes due to other factors, such as costs and volume of shipments. The overall changes in rates were estimated using a difference-in-means analysis that compares the rates in the postmerger period with rates in the premerger period. We found that the overall changes in rates could be in the opposite direction from the rate changes due to the merger. For instance, for coal shipments from the Salt Lake City area, the overall changes in rates were about 10 percent higher, while the rate changes due to the merger alone would have been about 10 percent lower. On the other hand, for shipments of chemicals to the Reno area, the overall changes in rates were about 6 percent lower, while the rate changes due to the merger alone would have been about 21 percent higher. These illustrations indicate that a complete analysis of merger-related rate changes could benefit from the application of an analytical approach that identifies and determines the separate effects of the various factors, including those associated with a merger, affecting rail rates. | Railroads have been a primary mode of freight transportation for many years, especially for bulk commodities such as coal and grain.
Over the last 25 years, the freight railroad industry has undergone substantial consolidation, largely to reduce costs and increase efficiency and competitiveness. Some companies that rely on rail shipments are concerned that the mergers have reduced railroad competition and led to higher rail rates and poorer service. This report reviews (1) the role the Surface Transportation Board plays in reviewing proposed railroad mergers and overseeing mergers that have been approved, and how postmerger oversight is conducted, (2) how the Board mitigates potential harm to competition, and (3) how the Union Pacific/Southern Pacific merger affected rail rates in selected geographic areas. GAO found that the Board reviews railroad merger proposals and approves those that are consistent with the public interest, ensures that any potential merger-related harm to competition is mitigated to preserve competition, and oversees mergers that have been approved. The Board imposes conditions on mergers to mitigate potential harm to competition. The Board also focuses on the overall direction and magnitude of rate changes when analyzing rail rates as part of merger oversight. It does not isolate the effects of mergers on rates from other effects. When GAO used an approach that isolates merger effects from other effects to analyze how the Union Pacific/Southern Pacific merger affected rail rates, it found that the merger reduced rates for four of the six commodities studied. However, for one of the commodities, the merger put upward pressure on rates even though other factors caused overall rates to decrease, and for the remaining commodity rates were essentially unchanged. By focusing only on overall rate changes, the Board cannot determine whether a change is due to the merger or to other factors. |
RS21135 -- The Enron Collapse: An Overview of Financial Issues
Updated August 12, 2004

Enron Corp., operator of the first nationwide natural gas pipeline network, shifted its business focus during the 1990s from the regulated transportation of natural gas to trading in unregulated energy markets. Until late 2001, nearly all observers -- including Wall Street professionals -- regarded this transformation as an outstanding success. Enron's reported annual revenues grew from under $10 billion in the early 1990s to $139 billion in 2001, placing it fifth on the Fortune 500.

Enron continued to transform its business but, as it diversified out of its core energy operations, it ran into serious trouble. Like many other firms, Enron saw an unlimited future in the Internet. During the late 1990s, it invested heavily in online marketers and service providers, constructed a fiber optic communications network, and attempted to create a market for trading broadband communications capacity. Enron entered these markets near the peak of the boom and paid high prices, taking on a heavy debt load to finance its purchases. When the dot com crash came in 2000, revenue from these investments dried up, but the debt remained. Enron also recorded significant losses in certain foreign operations. The firm made major investments in public utilities in India, South America, and the U.K., hoping to profit in newly deregulated markets. In these three cases, local politicians acted to shield consumers from the sharp price increases that Enron anticipated.

By contrast, Enron's energy trading businesses appear to have made money, although that trading was probably less extensive and profitable than the company claimed in its financial reports. Energy trading, however, did not generate sufficient cash to allow Enron to withstand major losses in its dot com and foreign portfolios. Once the Internet bubble burst, Enron's prospects were dire.

It is not unusual for businesses to fail after making bad or ill-timed investments. What turned the Enron case into a major financial scandal was the company's response to its problems. Rather than disclose its true condition to public investors, as the law requires, Enron falsified its accounts. It assigned business losses and near-worthless assets to unconsolidated partnerships and "special purpose entities." In other words, the firm's public accounting statements pretended that losses were occurring not to Enron, but to the so-called Raptor entities, which were ostensibly independent firms that had agreed to absorb Enron's losses but were in fact accounting contrivances created and entirely controlled by Enron's management. In addition, Enron appears to have disguised bank loans as energy derivatives trades to conceal the extent of its indebtedness.

When these accounting fictions -- which were sustained for nearly 18 months -- came to light and corrected accounting statements were issued, over 80% of the profits reported since 2000 vanished and Enron quickly collapsed. The sudden collapse of such a large corporation, and the accompanying losses of jobs, investor wealth, and market confidence, suggested that there were serious flaws in the U.S. system of securities regulation, which is based on the full and accurate disclosure of all financial information that market participants need to make informed investment decisions. The suggestion was amply confirmed by the succession of major corporate accounting scandals that followed.
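The loss-shifting mechanism described above can be illustrated with a minimal sketch. The round numbers here are invented for illustration and are not Enron's actual figures; only the mechanism, and the roughly 80% figure it reproduces, come from the account above.

```python
# Illustrative only: how parking losses in unconsolidated entities inflates
# reported earnings. Figures are hypothetical, not Enron's actual results.
core_profit = 200    # $ millions earned by the firm's consolidated operations
spe_losses = 160     # losses assigned to unconsolidated "special purpose entities"

reported_profit = core_profit                 # SPE losses never reach the income statement
restated_profit = core_profit - spe_losses    # what full consolidation would have shown

print(reported_profit, restated_profit)       # 200 vs. 40
print(1 - restated_profit / reported_profit)  # 0.8 -> 80% of reported profit disappears
```

In this stylized case, restating the accounts to include the parked losses erases 80% of reported profit, the same order of magnitude as the correction that preceded Enron's collapse.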
Enron raised fundamental issues about corporate fraud, accounting transparency, and investor protection. Several aspects of these issues are briefly sketched below, with reference to CRS products that provide more detail.

Federal securities law requires that the accounting statements of publicly traded corporations be certified by an independent auditor. Enron's auditor, Arthur Andersen, not only turned a blind eye to improper accounting practices, but was actively involved in devising complex financial structures and transactions that facilitated deception.

An auditor's certification indicates that the financial statements under review have been prepared in accordance with generally accepted accounting principles (GAAP). In Enron's case, the question is not only whether GAAP were violated, but whether current accounting standards permit corporations to play "numbers games," and whether investors are exposed to excessive risk by financial statements that lack clarity and consistency. Accounting standards for corporations are set by the Financial Accounting Standards Board (FASB), a non-governmental entity, though there are also Securities and Exchange Commission (SEC) requirements. (The SEC has statutory authority to set accounting standards for firms that sell securities to the public.) Some describe FASB's standards-setting process as cumbersome and too susceptible to business and/or political pressures.

In response to the auditing and accounting problems at Enron and other major corporate scandals, Congress enacted the Sarbanes-Oxley Act of 2002 (P.L. 107-204), containing perhaps the most far-reaching amendments to the securities laws since the 1930s. Very briefly, the law does the following:

- creates a Public Company Accounting Oversight Board to regulate independent auditors of publicly traded companies -- a private sector entity operating under the oversight of the SEC;
- raises standards of auditor independence by prohibiting auditors from providing certain consulting services to their audit clients and requiring preapproval by the client's board of directors for other nonaudit services;
- requires top corporate management and audit committees to assume more direct responsibility for the accuracy of financial statements;
- enhances disclosure requirements for certain transactions, such as stock sales by corporate insiders, transactions with unconsolidated subsidiaries, and other significant events that may require "real-time" disclosure;
- directs the SEC to adopt rules to prevent conflicts of interest that affect the objectivity of stock analysts;
- authorizes $776 million for the SEC in FY2003 (versus $469 million in the Administration's budget request) and requires the SEC to review corporate financial reports more frequently; and
- establishes and/or increases criminal penalties for a variety of offenses related to securities fraud, including misleading an auditor, mail and wire fraud, and destruction of records.

See also CRS Report RL31554, Corporate Accountability: Sarbanes-Oxley Act of 2002 (P.L. 107-204), by Michael Seitzinger; and CRS Report RS21120, Auditing and its Regulators: Proposals for Reform After Enron, by [author name scrubbed].

Like many companies, Enron sponsored a retirement plan -- a "401(k)" -- to which its employees could contribute a portion of their pay on a tax-deferred basis. As of December 31, 2000, 62% of the assets held in the corporation's 401(k) retirement plan consisted of Enron stock.
Many individual Enron employees held even larger percentages of Enron stock in their 401(k) accounts. Shares of Enron, which in January 2001 traded for more than $80 per share, were worth less than 70 cents in January 2002. The catastrophic losses suffered by participants in the Enron Corporation's 401(k) plan have prompted questions about the laws and regulations that govern these plans.

In the 107th Congress, the House passed legislation (H.R. 3762) that would have required account information to be provided more often to plan participants, improved access to investment planning advice, allowed plan participants to diversify their portfolios by selling company stock contributed by employers after three years, and barred executives from selling company stock while a plan is "locked down." (The latter provision was enacted by the Sarbanes-Oxley Act.) Similar legislation has not advanced in the 108th Congress.

See also CRS Report RL31507, Employer Stock in Retirement Plans: Investment Risk and Retirement Security, by [author name scrubbed] ([phone number scrubbed]); and CRS Report RL31551, Employer Stock in Pension Plans: Economic and Tax Issues, by Jane Gravelle.

In the wake of Enron and other scandals, corporate boards of directors were subject to critical scrutiny. Boards, whose chief duty is to represent shareholders' interests, utterly failed to prevent or detect management fraud. Several provisions of Sarbanes-Oxley were designed to boost the power of independent directors and the audit committee of the board to exercise effective oversight of management and the accounting process. Under Sarbanes-Oxley, the board's audit committee must have a majority of independent directors (not affiliated with management or the corporation) and is responsible for hiring, firing, overseeing, and paying the firm's outside auditor. The audit committee must include at least one director who is a financial expert, that is, one able to evaluate significant accounting issues and/or disagreements between management and auditors.

In 2003, the New York Stock Exchange and the Nasdaq adopted rules requiring listed corporations to have a majority of independent directors on their boards. In 2004, the SEC is considering a rule that would facilitate the nomination of directors by shareholders.

Securities analysts employed by investment banks provide research and make "buy," "sell," or "hold" recommendations. These recommendations are widely circulated and are relied upon by many public investors. Analyst support was crucial to Enron because it required constant infusions of funds from the financial markets. On November 29, 2001, after Enron's stock had fallen 99% from its high, and after rating agencies had downgraded its debt to "junk bond" status, only two of 11 major firm analysts rated its stock a "sell." Was analyst objectivity -- towards Enron and other firms -- compromised by pressure to avoid alienating investment banking clients?

The Sarbanes-Oxley Act directs the SEC to establish rules addressing analysts' conflicts of interest; these were issued in 2003. In December 2002, 10 major investment banks reached a settlement with state and federal securities regulators under which they agreed to reforms to make their analysts independent of their banking operations, and to pay fines totaling about $1 billion.

See also CRS Report RL31348(pdf), Enron and Stock Analyst Objectivity, by [author name scrubbed].

One part of the fallout from Enron's demise involves its relations with banks.
Prominent banking companies, notably Citigroup and J.P. Morgan Chase, were involved in both the investment banking (securities) and the commercial banking (lending and deposit) businesses with Enron. In 2003, the SEC fined the two banks $120 million and $135 million, respectively, for their roles in Enron's accounting frauds.

Enron's relations with its bankers have raised several questions. (1) Do financial holding companies (firms that encompass both investment and commercial banking operations) face a conflict of interest between their duty to avoid excessive risk on loans from their bank sides and their opportunity to glean profits from deals on their investment banking side? (2) Were the bankers enticed or pressured to provide funding for Enron and recommend its securities and derivatives to other parties? (3) Did the Dynegy rescue plan, proposed just before Enron's collapse and involving further investments by J.P. Morgan Chase and Citigroup, represent protective self-dealing? (4) What is the proper accounting for banks' off-balance-sheet items, including derivative positions and lines of credit, such as they provided to Enron? (5) Did the Enron situation represent a warning that GLBA may need fine-tuning in the way it mixes the different business practices of Wall Street and commercial banking?

See also CRS Report RS21188, Enron's Banking Relationships and Congressional Repeal of Statutes Separating Bank Lending from Investment Banking, by [author name scrubbed].

Part of Enron's core energy business involved dealing in derivative contracts based on the prices of oil, gas, electricity, and other variables. For example, Enron sold long-term contracts to buy or sell energy at fixed prices. These contracts allow the buyers to avoid, or hedge, the risks that increases (or drops) in energy prices posed to their businesses. (A simple sketch of such a contract's payoff appears at the end of this section.) Since the markets in which Enron traded are largely unregulated, with no reporting requirements, little information is available about the extent or profitability of Enron's derivatives activities, beyond what is contained in the company's own financial statements. While trading in derivatives is an extremely high-risk activity, no evidence has yet emerged that indicates that speculative losses were a factor in Enron's collapse.

Since the Enron failure, several energy derivatives dealers have admitted to making "wash trades," which lack economic substance but give the appearance of greater market volume than actually exists, and facilitate deceptive accounting (if the fictitious trades are reported as real revenue). In 2002, energy derivatives trading diminished to a fraction of pre-Enron levels, as major traders (and their customers and shareholders) re-evaluated the risks and utility of unregulated energy trading. Several major dealers have withdrawn from the market entirely.

Internal Enron memoranda released in May 2002 suggest that Enron (and other market participants) engaged in a variety of manipulative trading practices during the California electricity crisis. For example, Enron was able to buy electricity at a fixed price in California and sell it elsewhere at the higher market price, exacerbating electricity shortages within California. The evidence to date does not indicate that energy derivatives -- as opposed to physical, spot-market trades -- played a major role in these manipulative strategies. Numerous firms and individuals have been charged with civil and criminal violations related to the manipulation of energy prices in California and elsewhere.
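As promised above, here is a minimal sketch of the payoff to a buyer holding a fixed-price energy purchase contract of the general kind described in this section. The prices, quantity, and function are hypothetical illustrations, not Enron's actual contract terms.

```python
# Illustrative only: a buyer's gain or loss from a fixed-price purchase
# contract versus buying at the spot price. Figures are hypothetical.
def hedge_savings(fixed_price, spot_price, quantity_mwh):
    """Amount the buyer saves (positive) or forgoes (negative) relative
    to buying the same quantity at the prevailing spot price."""
    return (spot_price - fixed_price) * quantity_mwh

# If spot power rises from $30 to $50/MWh, a buyer locked in at $30
# avoids $20/MWh of cost increase on every covered megawatt-hour:
print(hedge_savings(30.0, 50.0, 1_000))   # 20000.0 -> $20,000 saved
# If spot instead falls to $20/MWh, the same contract costs the buyer $10/MWh:
print(hedge_savings(30.0, 20.0, 1_000))   # -10000.0 -> $10,000 forgone
```

Either way, the contract converts an uncertain future energy cost into a known one, which is the hedging value these instruments offered Enron's counterparties.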
Even if derivatives trading was not a major cause, Enron's failure raises the issue of supervision of unregulated derivatives markets. Would it be useful if regulators had more information about the portfolios and risk exposures of major dealers in derivatives? Although Enron's bankruptcy appears to have had little impact on energy supplies and prices, a similar dealer failure in the future might damage the dealer's trading partners and its lenders, and could conceivably set off widespread disruptions in financial and/or real commodity markets. Legislation proposed, but not enacted, in the 107th Congress (H.R. 3914, H.R. 4038, S. 1951, and S. 2724) would have (among other things) given the CFTC more authority to pursue fraud (including wash transactions) in the OTC market, and to require disclosure of certain trade data by dealers. In the 108th Congress, the Senate twice rejected legislation (S.Amdt. 876 and S.Amdt. 2083) that would have increased regulatory oversight of energy derivatives markets by the CFTC and FERC. See also CRS Report RS21401, Regulation of Energy Derivatives, by [author name scrubbed]; and CRS Report RS20560, The Commodity Futures Modernization Act (P.L. 106-554), by [author name scrubbed]. | The sudden and unexpected collapse of Enron Corp. was the first in a series of major corporate accounting scandals that has shaken confidence in corporate governance and the stock market. Only months before Enron's bankruptcy filing in December 2001, the firm was widely regarded as one of the most innovative, fastest growing, and best managed businesses in the United States. With the swift collapse, shareholders, including thousands of Enron workers who held company stock in their 401(k) retirement accounts, lost tens of billions of dollars. It now appears that Enron was in terrible financial shape as early as 2000, burdened with debt and money-losing businesses, but manipulated its accounting statements to hide these problems. Why didn't the watchdogs bark? This report briefly examines the accounting system that failed to provide a clear picture of the firm's true condition, the independent auditors and board members who were unwilling to challenge Enron's management, the Wall Street stock analysts who failed to warn investors of trouble ahead, the rules governing employer stock in company pension plans, and the unregulated energy derivatives trading that was the core of Enron's business. This report also summarizes the Sarbanes-Oxley Act (P.L. 107-204), the major response by the 107th Congress to Enron's fall, and related legislative and regulatory actions during the 108th Congress. It will be updated as events warrant. Other contributors to this report include [author name scrubbed], [author name scrubbed], [author name scrubbed], and [author name scrubbed].
Health Canada will allow doctors to prescribe heroin as treatment for severely addicted people
The Trudeau government will sponsor a summit to address the issue of opioid addiction
(CNN) Health Canada has amended its regulations to allow Canadian doctors to prescribe heroin as a treatment for those who are severely addicted to the drug. Last week's change to the Controlled Drugs and Substances Act permits doctors to apply for permission under the federal Special Access Program to offer their addicted patients diacetylmorphine: pharmaceutical-grade heroin.
The government referred to a "medical need for emergency access to diacetylmorphine" in the regulation.
"A number of countries have allowed doctors to use diacetylmorphine-assisted treatment to support the small percentage of patients with opioid dependence who have not responded to other treatment options," the regulation states. "There is also a significant body of scientific evidence supporting its use."
The new rule reinstates an old one, explained Eugenia Oviedo-Joekes, an associate professor in the School of Population and Public Health at the University of British Columbia.
In October 2013, then-Health Minister Rona Ambrose removed diacetylmorphine from the federal Special Access Program and so banned doctors' access to prescription heroin. The new regulation clarifies that heroin can be prescribed to patients only under supervision in specialized circumstances, said Oviedo-Joekes: "They made it a bit more clear how this request should be handled."
|||||
Canadian Prime Minister Justin Trudeau's government has taken a less draconian approach to fighting heroin addiction than the previous government did. (Chris Roussakis/AFP via Getty Images)
OTTAWA — The Canadian government has quietly approved new drug regulations that will permit doctors to prescribe pharmaceutical-grade heroin to treat severe addicts who have not responded to more conventional approaches.
The move means that Crosstown, a trail-blazing clinic in Vancouver, will be able to expand its special heroin-maintenance program, in which addicts come in as many as three times a day and receive prescribed injections of legally obtained heroin, free of charge, from a nurse. The program is the only one of its kind in Canada and the United States but is similar to the approach taken in eight European countries.
The move by Prime Minister Justin Trudeau's government last week is another step in reversing the policies of the previous government, run by Conservatives, and taking a less draconian approach to the fight against addiction and drug abuse.
In April, the Trudeau government announced plans to legalize the sale of marijuana by next year, and it has appointed a task force to determine how marijuana will be regulated, sold and taxed. The government has also granted a four-year extension to the operation of Insite, a supervised injection site in Vancouver where addicts can shoot up street-obtained drugs in a controlled environment. The previous government had tried in vain for years to shut down that clinic.
The latest decision means that any physician in Canada can now apply to Health Canada for access to diacetylmorphine, as pharmaceutical-grade heroin is known, under a special-access program. The government says that this kind of treatment will be available for only a small minority of users “in cases where traditional options have been tried and proven ineffective” and that it is important to give health-care providers a variety of tools to face the opioid-overdose crisis.
Scott MacDonald, the lead physician at the Crosstown Clinic, welcomed the federal government’s decision. The clinic, which is funded by the British Columbia provincial government, opened in 2005 to conduct a clinical trial of prescription heroin and has operated ever since. It provides diacetylmorphine to 52 addicts under a special court-ordered exemption but expects that number to double over the next year if supplies can be obtained. The court order came after a constitutional challenge of a 2013 effort by the previous government to stop distribution of the drug.
Colin Carrie, a Conservative member of Parliament and the party's spokesman on health policy, said his party remains adamantly opposed to the use of prescription heroin as a treatment option for addicts. "Our policy is to take heroin out of the hands of addicts and not put it in their arms."
MacDonald says his patients are usually long-term users — one has been on heroin for 50 years — for whom standard treatments such as methadone and detox have failed after repeated attempts. “Our goal is to get people into care,” he said. (The clinic also treats another group of addicts with hydromorphone, a powerful painkiller.)
The demands of the program are high. Patients must come into the clinic two or three times a day for injections, which is disruptive for those who wish to work or take care of their families. Still, the dropout rate is relatively low. The patients are healthier, and participation in the program drastically reduces their participation in criminal activities, sharply cutting the cost to the criminal justice system.
Crosstown’s approach has garnered increasing attention in the United States, with MacDonald appearing in June to testify before a Senate committee on Capitol Hill. But the approach remains controversial. After making a presentation recently in Boston, he got a positive response from some doctors but noted that “there were physicians who would not even come up and talk to me.”
||||| VANCOUVER — For Hugh Lampkin, fentanyl’s surge to all but replace heroin on the Vancouver drug scene calls to mind a curious image: a rainbow.
“Traditionally, heroin comes in about four different colours,” said the longtime drug advocate, describing a bland palette of beiges, browns and blacks.
“Well now you’re seeing multiple colours, like colours of the rainbow: green and pink and orange and white. … Right away, when you see these colours that’s a pretty good indicator that it’s fentanyl that you’re doing.”
As government data tracks a spike of fentanyl across Canada, people who use illicit drugs in Vancouver’s Downtown Eastside say there is virtually no heroin left on the street after it has been pushed out by the cheaper and more potent fentanyl.
Martin Steward of the Western Aboriginal Harm Reduction Society said fentanyl’s takeover is evident by how easily people are overdosing on small amounts of what is being sold as heroin, and simply by people’s physical response to the drug.
“I know people who use heroin and they’ll inject what they normally do. And the next time they’ll do exactly the same thing of what they think is heroin and they’re out. Like, they’re going under from it,” Steward said in an interview, referring to an overdose.
“They’re using the same thing, the same product, but getting a different result. That’s a forerunner for me to see that it’s not heroin.”
There have been 256 fatal overdoses from illicit drugs in the first four months of this year, already more than half the 480 that occurred for all of 2015. Fentanyl’s connection to those deaths has been surging at a staggering rate.
The B.C. Coroners Service reported last week that the presence of fentanyl in cases of illicit drug overdose deaths rose from a third in 2015 to nearly 50 per cent so far this year.
Speaking anecdotally, Lampkin said he doesn’t believe anyone in Vancouver has used real heroin in more than a year and that many users don’t appear to be aware of it.
He said he has observed overdose victims needing three full vials of the overdose-reversing drug naloxone to recover.
“I think it’s not so much as they’re moving to it as a case of not having any choice,” said Lampkin, who sits on the board for the Vancouver Area Network of Drug Users.
“The people who are controlling the supply, they’re passing off what should be heroin as fentanyl because of the close proximity of the high.”
Vancouver police report heroin-related drug seizures and criminal charges in the city have remained relatively stable over the past five years, but Lampkin said drugs are only tested when charges are laid or usually in the event of a fatal overdose.
Sgt. Darin Sheppard, who heads up a British Columbia RCMP division that investigates organized drug crime, said that while heroin is still present in the province, fentanyl is increasingly taking over the market.
“It’s a growing trend,” he said, pegging 2014 as the first year fentanyl was noticed in a significant way.
Mark Haden, a public health professor at the University of British Columbia, draws a parallel to alcohol prohibition, which he said led to stronger, more concentrated booze that was often toxic.
“Dealers will always want small packages. That’s the natural process of drug prohibition,” he said, dismissing the war-on-drugs policy approach taken by governments as shortsighted and ineffective.
There are multiple explanations offered for the rise of the dangerous opioid, centring on its low production cost and the simplicity of smuggling it across the border in its compact, concentrated form.
Jane Buxton with the Centre for Disease Control said money plays a key role in fentanyl’s upward trend line.
“Whoever is importing or selling drugs, they’re doing it presumably for a profit and therefore if there’s a substance that is easy to access and cheap, and can be sold for a great profit, that’s what’s going to be focused on,” she said.
The manufacturer of the prescription opioid OxyContin designed a tamper-resistant version of the drug that becomes inert when tampered with, making it impossible to grind and snort, for example.
The effectiveness in disabling OxyContin as a drug source has in turn contributed to a spike in black market opioids, Buxton said.
Still, it’s difficult to know exactly what is happening on the ground without effective and timely data collection, she added.
Michael Parkinson of the Waterloo Region Crime Prevention Council in Kitchener, Ont., lamented that no province, territory or the federal government gathers real-time data on opioid overdose fatalities.
That is seriously hampering their ability to craft fast and effective responses to drug crises, he added.
“(With) other causes of accidental death, for example influenza, we know how many people died or were hospitalized last week,” said Parkinson.
Alberta and B.C. now have more up-to-date numbers on fentanyl overdose deaths, he said, but other opioids aren’t included.
“It’s an international mystery. It really is. It’s scandalous,” Parkinson said, pointing out that there have been 4,984 deaths in Ontario due to opioids over a 13-year period.
“We get three people dropping off from anaphylaxis and it’s all hands on deck,” he added. “That hasn’t happened with opioid overdoses.” ||||| Canada has approved prescription heroin to be given to some patients in an effort to combat the effects of the ongoing opioid crisis. The news comes as some health experts and policymakers in both Canada and the U.S. are looking to implement more harm reduction strategies, which focus on diminishing risk associated with intravenous drug use.
On Friday Canada's health ministry announced that doctors will now be able to prescribe diacetylmorphine, or prescription-grade heroin, for the treatment of "chronic relapsing opioid dependence." The drugs will be given through Canada's Special Access Programme (SAP), which provides access to drugs not currently available on the market for the treatment of patients with serious or life-threatening conditions when "conventional therapies have failed, are unsuitable, or unavailable."
"Scientific evidence supports the medical use of diacetylmorphine for the treatment of chronic relapsing opioid dependence in certain individual cases," Canadian health officials said in a statement sent to ABC News today. "Health Canada recognizes the importance of providing physicians with the power to make evidence-based treatment proposals in these exceptional cases."
Researchers in Canada have been using pilot programs to understand how giving prescription heroin or providing supervised injection sites could affect the health of intravenous drug users. These tactics are part of a harm reduction strategy aimed at reducing the risk surrounding opioid drug abuse without forcing an addict to stop using drugs. In the U.S. similar programs have been considered, and the mayor of Ithaca, New York, has plans to open the first supervised injection site in the country.
There was a record number of deaths related to opioid overdoses in the U.S. in 2014, with 28,000 recorded deaths, according to the U.S. Centers for Disease Control and Prevention. In Canada opioid-related deaths have risen sharply and make up half of all drug deaths, according to the Canadian Drug Policy Coalition.
Dr. Scott MacDonald developed a pilot program that studied the effects of providing prescription heroin to certain users in Vancouver and said researchers have seen huge success with the program.
"This is a kind of last resort to get them into care to get them off the streets," MacDonald said. "We see them come to us every day rather than stay on the streets... that engagement and retention in care is a significant benefit."
MacDonald said people who used to be in and out of jail or the hospital have been able to reconnect with families, go back to school and retain employment.
"That's a major success," he said. In the pilot program users must be a long time heroin user, who has tried at least twice to stop using drugs. The drug users are allowed to come to the clinic between two to three times a day where they are provided a syringe and drugs for injection. Medical staff on site monitor the drug users and can intervene if they show signs of overdose.
Daniel Raymond, policy director for Harm Reduction New York, said that providing prescription heroin could be viewed as an extension of medicine-based rehab programs that utilize drugs like morphine or buprenorphine to help medically address symptoms of opioid addiction and withdrawal.
"I think the idea is not so much the Marie Antoniette style let them have heroin," said Raymond. "We know people who struggle with opioid disorder. We've been using bufneoprohine, morphine...none of them have been sufficiently scaled up."
Raymond pointed out this treatment is only right for a small group of drug users.
"What we see from research is a small subset of people with entrenched treatment resistant drug problems," said Raymond. "It seems to stabilize them, it gets them off of the street."
Raymond said a move among health experts and other policy makers towards harm reduction shows a growing awareness that asking drug users to quit drugs isn't always a feasible goal.
"There may be some people who have accumulated a lifetime of trauma," Raymond said. For them "Stability is a goal in and of itself." | Canadian doctors dealing with patients who have proved unable to stop taking heroin can now go ahead and prescribe them heroin. Justin Trudeau's government has reinstated a policy that allows doctors to prescribe diacetylmorphine—pharmaceutical-grade heroin—to severely addicted patients if other methods of treatment fail, ABC News reports. In Vancouver, BC's Crosstown Clinic, which currently runs the only such program anywhere in the US and Canada, addicts come in daily for up to three injections, the Washington Post reports. Lead physician Scott MacDonald says his patients are hard-core, long-term users and that one has been using the drug for 50 years. The program "is a kind of last resort to get them into care, to get them off the streets," MacDonald says. "We see them come to us every day rather than stay on the streets." Giving addicts heroin clearly doesn't cure their habits, experts tell CNN, but it prevents many overdose deaths and reduces both health care costs and crime. Fatal overdoses of illicit drugs have soared in Canada this year, and the Vancouver Sun reported in May that many of those deaths were caused by the cheaper and more potent drug fentanyl. It has almost completely replaced genuine heroin on the streets, although dealers still claim they are selling the real thing. (An even stronger opioid is killing people in Ohio.) |
DOD is subject to various laws dating back to the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) as amended by the Superfund Amendments and Reauthorization Act (SARA) of 1986 that govern remediation (cleanup) of contamination on military installations. DOD must also follow federal accounting standards that establish requirements for DOD to recognize and report the estimated costs for the cleanup of training ranges in the United States and its territories. Increasing public concern about potential health threats has affected not only the present operations of these training ranges but also the management, cleanup, and control of this training range land that has been, or is in the process of being, transferred to other agencies and public hands. DOD defines a range as any land mass or water body that is used or was used for conducting training, research, development, testing, or evaluation of military munitions or explosives. DOD classifies its ranges into the following five types. Active ranges are currently in operation, construction, maintenance, renovation, or reconfiguration to meet current DOD component training requirements and are being regularly used for range activities. Examples of these ranges would include ranges used for bombing, missiles, mortars, hand grenades, and artillery testing and practice. Inactive ranges are ranges that are not currently being used as active ranges. However, they are under DOD control and are considered by the military to be a potential active range area in the future, and have not been put to a new use incompatible with range activities. Closed ranges have been taken out of service and are still under DOD control but DOD has decided that they will not be used for training range activities again. Transferred ranges have been transferred to non-DOD entities such as other federal agencies, state and local governments, and private parties, and are those usually associated with the formerly used defense sites program. Transferring ranges are in the process of being transferred or leased to other non-DOD entities and are usually associated with the base realignment and closure program. Congress addressed environmental contamination at federal facilities under SARA in 1986. This legislation established, among other provisions, the Defense Environmental Restoration Program and the Defense Environmental Restoration Account as DOD's funding source under the Act. The goals of the Defense Environmental Restoration Program include (1) identification, investigation, research and development, and cleanup of contamination from hazardous substances, pollutants, and contaminants and (2) correction of other environmental damage such as detection and disposal of unexploded ordnance which creates an imminent and substantial danger to the public health or welfare or to the environment. The Office of the Deputy Under Secretary of Defense for Environmental Security (DUSD(ES)) was created in 1993. That office has overall responsibility for environmental cleanup within DOD and includes the Office of Environmental Cleanup that manages the Defense Environmental Restoration Program. Carrying out any remediation or removal actions under applicable environmental laws, including SARA, would likely require the immediate or future expenditure of funds. Federal accounting standards determine how those expenditures are accounted for and reported.
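This five-way classification drives the accounting treatment discussed throughout the rest of the report: under the DOD Financial Management Regulation described below, only closed, transferred, and transferring ranges carry a reported cleanup liability. The following is a minimal sketch of that mapping, offered purely as an illustration; the type and function names are ours, not DOD's.

```python
from enum import Enum

class RangeStatus(Enum):
    """The five DOD range categories described above (names are ours)."""
    ACTIVE = "in regular use for range activities"
    INACTIVE = "unused but retained as a potential future range"
    CLOSED = "taken out of service for good, still under DOD control"
    TRANSFERRED = "conveyed to non-DOD entities (formerly used defense sites)"
    TRANSFERRING = "leaving DOD control (base realignment and closure)"

def cleanup_liability_reported(status: RangeStatus) -> bool:
    # Per the DOD Financial Management Regulation, only closed,
    # transferred, and transferring ranges trigger a reported cleanup
    # liability; active and inactive ranges are presumed to operate
    # or be available to operate indefinitely.
    return status in {RangeStatus.CLOSED,
                      RangeStatus.TRANSFERRED,
                      RangeStatus.TRANSFERRING}
```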
The Chief Financial Officers' Act of 1990, as expanded by the Government Management and Reform Act of 1994, requires that major federal agencies, including DOD, prepare and submit annual audited financial statements to account for their liabilities, among other things. Two federal accounting standards, Statement of Federal Financial Accounting Standards (SFFAS) Nos. 5 and 6, establish the criteria for recognizing and reporting liabilities in the annual financial statements, including environmental liabilities. SFFAS No. 5, Accounting for Liabilities of the Federal Government, defines a liability as a probable future outflow of resources due to a past government transaction or event. SFFAS No. 5 further states that recognition of a liability in the financial statements is required if it is both probable and measurable. Effective in 1997, SFFAS No. 5 defines probable as that which is more likely than not to occur (for example, greater than a 50 percent chance) based on current facts and circumstances. It also states that a future outflow is measurable if it can be reasonably estimated. The statement recognizes that this estimate may not be precise and, in such cases, it provides for recognizing the lowest estimate of a range of estimates if no amount within the range is better than any other amount. SFFAS No. 6, Accounting for Property, Plant, and Equipment, further defines cleanup costs as costs for removal and disposal of hazardous wastes or materials that because of quantity, concentration, or physical or chemical makeup may pose a serious present or potential hazard to human health or the environment. The Office of the Under Secretary of Defense (Comptroller) issues the DOD Financial Management Regulation containing DOD's policies and procedures in the area of financial management, which require the reporting of environmental liabilities associated with the cleanup of closed, transferred, and transferring ranges in the financial statements. DOD has taken the position that the cleanup of these ranges is probable and measurable and as such should be reported as a liability in its financial statements. Under the presumption that active and inactive ranges will operate or be available to operate indefinitely, the DOD Financial Management Regulation does not specify when or if liabilities should be recognized in the financial statements for these ranges. The Senate Report accompanying the National Defense Authorization Act for Fiscal Year 2000 directed DOD to provide a report to the congressional defense committees, no later than March 1, 2001, that gives a complete estimate of the current and projected costs for all unexploded ordnance remediation. As of March 30, 2001, DOD had not issued its report. For the purposes of the March 2001 report, DOD officials had stated that they would estimate cleanup costs for active and inactive training ranges just as they would for closed, transferred, and transferring ranges. Thus, the cleanup costs shown in that report would have been significantly higher than the training range liabilities reported in the financial statements, which include estimates only for closed, transferred, and transferring ranges. However, in commenting on a draft of our report, DOD officials informed us that they would not be reporting the cleanup costs of active and inactive training ranges in their March report.
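Stepping back to the SFFAS No. 5 test described above, the recognition decision reduces to a small rule: recognize a liability only if the outflow is probable (more likely than not) and measurable, and when only a range of estimates exists with no best point within it, recognize the low end. The sketch below is our own illustrative rendering of that rule, not language from the standard or from DOD guidance.

```python
from typing import Optional, Tuple

def sffas5_recognized_amount(probability: float,
                             best_estimate: Optional[float] = None,
                             estimate_range: Optional[Tuple[float, float]] = None
                             ) -> Optional[float]:
    """Illustrative reading of the SFFAS No. 5 test, not official guidance."""
    if probability <= 0.5:
        return None                 # not probable: no liability recognized
    if best_estimate is not None:
        return best_estimate        # a single best estimate is measurable
    if estimate_range is not None:
        return min(estimate_range)  # only a range: recognize the low end
    return None                     # not measurable: no recognition

# With an estimate cited later in this report -- cleanup of closed,
# transferred, and transferring ranges at $40 billion to $140 billion --
# the rule would recognize the $40 billion low end.
print(sffas5_recognized_amount(0.9, estimate_range=(40e9, 140e9)))  # 40000000000.0
```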
As DOD downsizing and base closures have increased in recent years, large numbers of military properties have been, and are continuing to be, turned over to non-DOD ownership and control, resulting in the public being put at greater risk. DOD uses a risk-based approach when transferring ranges from its control to reduce threats to human health and the environment. DOD attempts to mitigate risk to human health on transferred and transferring ranges. In instances where DOD has not removed, contained, and/or disposed of unexploded ordnance and constituent contamination from training ranges prior to transfer, it implements institutional controls to restrict access to transferring ranges and to transferred ranges where risks are found. Institutional controls include implementing community education and awareness programs, erecting fences or barriers to control access, and posting signs warning of the dangers associated with the range. Figure 1 shows signs posted at Fort McClellan, Alabama, warning of unexploded ordnance. Fort McClellan has been designated for closure under the base realignment and closure program and, as such, is in the process of transferring base properties out of DOD control. DOD officials have estimated that approximately 16 million acres of potentially contaminated training ranges have been transferred to the public or other agencies. The risk to the public was further discussed by an Environmental Protection Agency (EPA) official in a letter dated April 22, 1999, to DUSD(ES). The EPA official cautioned that many training ranges known or suspected to contain unexploded ordnance and other hazardous constituents have already been transferred from DOD control, and many more are in the process of being transferred, and the risks from many of these have not been adequately assessed. The letter went on to state that risks correspondingly increase as ranges that were once remote are encroached by development or as the public increases its use of these properties. An example of the development of sites adjacent to training ranges is the planned construction of two schools and a stadium by the Cherry Creek School District adjacent to the Lowry Bombing Range, a transferred range, near Denver. Construction is expected to begin in May 2001. Most training range contamination is a result of weapons systems testing and troop training activities conducted by the military services. Unexploded ordnance consists of many types of munitions, including hand grenades, rockets, guided missiles, projectiles, mortars, rifle grenades, and bombs. Figure 2 shows examples of some of the typical unexploded ordnance that has been removed from training ranges. Risks from this unexploded ordnance can encompass a wide range of possible outcomes or results, including bodily injury or death, health risks associated with exposure to chemical agents, and environmental degradation caused by the actual explosion and dispersal of chemicals or other hazardous materials to the air, soil, surface water, and groundwater. For example, according to an EPA report, EPA surveyed 61 current or former DOD facilities containing 203 inactive, closed, transferred, and transferring ranges and identified unexploded ordnance “incidents” at 24 facilities. These incidents included five accidental explosions, which resulted in two injuries and three fatalities. According to an EPA official, the three fatalities identified in their limited survey were two civilian DOD contractors and one military service member. 
Although DOD reported its unexploded ordnance cleanup liability on training ranges at about $14 billion in its fiscal year 2000 agencywide financial statements, it is likely that the financial statements are substantially understated. Further, significant cleanup costs will not be included in the planned March 2001 report. DOD officials and Members of Congress have expressed concern over the potential liability the government may be faced with but are still uncertain how large the liability may be. Various estimates have shown that cleanup of closed, transferred, and transferring training ranges could exceed $100 billion. For example: In preparation for DOD’s planned issuance of the Range Rule, DOD began an analysis of the potential costs that may be incurred if the Rule was implemented. The Rule was intended to provide guidance to perform inventories and provide cleanup procedures at closed, transferred, and transferring ranges. The Rule was withdrawn in November 2000 and the cost analysis was never formally completed. However, a senior DOD official said that initial estimates in the cost analysis that was developed in 2000 put the cleanup costs of training ranges at about $40 billion to $140 billion for closed, transferred, and transferring training ranges. DOD estimated that its potential liability for cleanup of unexploded ordnance might exceed $100 billion as noted in a conference report to the National Defense Authorization Act for Fiscal Year 2001 (Report 106-945, October 6, 2000). DOD will not respond fully to the Senate Report request for reporting the costs of cleaning up unexploded ordnance on its training ranges. DOD officials informed us that due to time constraints, the training range liability to be reported in the March 2001 report would not be complete or comprehensive because the required information could not be collected in time for analysis and reporting. A DUSD(ES) official said that the March 2001 report will include a discussion of the limitations and omissions. DOD officials stated that they have deferred the collection and analysis of key data elements. Some of the items that were excluded are the costs to clean up the soil and groundwater resulting from unexploded ordnance and constituent contamination. These omitted costs could be significant. Further, the March 2001 report will not include information on water ranges. DOD’s 1996 Regulatory Impact Analysis reported that DOD had approximately 161 million acres of water training ranges, almost 10 times the size of the estimated closed, transferred, and transferring land ranges. In commenting on a draft of this report, DOD stated that the 161 million acres of water ranges are active training ranges, the majority of which are open-ocean, deep water, restricted access areas and most are outside the territorial waters of the United States. DOD also stated that the majority of water ranges are not likely to cause an imminent and substantial danger to public health and safety or the environment. However, until a complete and accurate inventory is performed, DOD will be unable to determine whether some water ranges meet the reporting requirement of SFFAS No. 5 and, thus, must be reported in the financial statements. The DOD Comptroller has revised the DOD Financial Management Regulation to clarify DOD’s fiscal year 2000 financial statement reporting requirements for training range cleanup costs. 
The revision includes guidance that requires the reporting of the cleanup costs of closed, transferred, and transferring ranges as liabilities in the financial statements. DOD has indicated that the costs to clean up these training ranges are probable and measurable and as such should be reported as a liability in the financial statements. We concur with DOD that these costs should be reported in the financial statements as liabilities because they are probable and measurable. Specifically, they are probable because DOD is legally responsible for cleaning up closed, transferred, and transferring ranges that were contaminated as a result of past DOD action. For example, under SARA, DOD is responsible for the cleanup of sites that create an imminent and substantial danger to public health and safety or the environment. In addition, these training range cleanup efforts are measurable. DOD has prior experience in training range cleanup under the formerly used defense sites program and has used this experience to develop a methodology to estimate future cleanup costs. However, as explained later in this report, DOD has not based its reported financial statement liability for cleanup of these ranges on a complete inventory or consistent cost methodology, resulting in estimates that range from $14 billion to over $100 billion. In addition, we believe that certain active and inactive sites may have contamination that should also be recorded as a liability in the financial statements because these sites meet the criteria in federal accounting standards for recording a liability. The DOD Financial Management Regulation does not include instructions for recognizing a liability for training range cleanup costs on active and inactive ranges in the financial statements. Although cleanup of active and inactive ranges would not generally be recognized as a liability in the financial statements, there are circumstances when an environmental liability should be recognized and reported for these ranges. A liability should be recognized on active and inactive ranges if the contamination is government related, the government is legally liable, and the cost associated with the cleanup efforts is measurable. For example, contaminants from an active training range at the Massachusetts Military Reservation threaten the aquifer that produces drinking water for nearby communities. The problem was so severe that in January 2000, EPA issued an administrative order under the Safe Drinking Water Act requiring DOD to clean up several areas of the training range. According to a DOD official, the cleanup effort could cost almost $300 million. As a result, the cleanup of this contamination is probable (since it is legally required) and measurable. Thus, this liability should be recognized in the financial statements under SFFAS No. 5. Although DOD and the services have collected information on other environmental contamination under the Defense Environmental Restoration Program for years, they have not performed complete inventories of training ranges to identify the types and extent of contamination present. To accurately compute the training range liabilities, the military services must first perform in-depth inventories of all of their training ranges. Past data collection efforts were delayed because the services were waiting for the promulgation of the Range Rule, which has since been withdrawn. DOD recently began collecting training range data to meet the reporting requirements of the Senate Report.
However, as stated previously, DOD has limited its data collection efforts and will not be reporting on the cleanup of water ranges or the unexploded ordnance constituent contamination of the soil and water. The Army, under direction from DUSD(ES), proposed guidance for the identification of closed, transferred, and transferring ranges with the preparation and attempted promulgation of the Range Rule. In anticipation of the Range Rule, DOD prepared a Regulatory Impact Analysis report in 1996, recognizing that the cleanup of its closed, transferred, and transferring training ranges was needed and that the cleanup costs could run into the tens of billions of dollars. To address inventories of its active and inactive ranges, DOD issued Directive 4715.11 for ranges within the United States and Directive 4715.12 for ranges outside the United States in August 1999. These directives required that the services establish and maintain inventories of their ranges and establish and implement procedures to assess the environmental impact of munitions use on DOD ranges. However, the directives neither provided the guidance necessary to inventory the ranges nor established any completion dates. Although the directives assigned responsibility for developing guidance to perform the inventories, DOD has not developed the necessary guidance specifying how to gather the inventory information or how to maintain inventories of the active and inactive training ranges. Since fiscal year 1997, federal accounting standards have required the recognition and reporting of cleanup costs, as mentioned earlier. However, DOD did not report costs for cleaning up closed, transferred, and transferring training ranges until the services estimated and reported the training range cleanup costs in DOD's agencywide financial statements for fiscal year 1999. Agencywide financial statements are prepared in accordance with the DOD Financial Management Regulation, which is issued by the DOD Comptroller and incorporates Office of Management and Budget guidance on the form and content of financial statements. In an attempt to comply with the mandates in the Senate Report, DOD embarked on a special effort to collect the training range data necessary to estimate potential cleanup costs. The Senate Report directed DOD to report all known projected unexploded ordnance remediation costs, including training ranges, by March 1, 2001, and to report subsequent updates in the Defense Environmental Restoration Program annual report to Congress. While the Senate Report did not expressly direct DOD to compile an inventory of training ranges at active facilities, installations subject to base realignment and closure, and formerly used defense sites, the data necessary to fully estimate the costs of unexploded ordnance—normally located on training ranges—could only be attained in conjunction with the performance of a complete and accurate inventory that includes training ranges. Although the Senate Report's directives were dated May 1999, DOD did not provide formal guidance to the services for collecting training range data until October 2000—17 months later. As a first step, in February 2000 the Under Secretary of Defense for Acquisition, Technology, and Logistics assigned responsibility to the Office of the Director of Defense Research and Engineering, in coordination with DUSD(ES), for obtaining the range data and preparing the report.
On October 23, 2000, DUSD(ES) issued specific guidance to the military services instructing them to gather range information and detailing some of the specific information needed. Although DOD instituted an Unexploded Ordnance Inventory Working Group in March 2000 to work with the services to develop specific guidance, service officials told us that DOD had not clearly told them what was required or when it was required until shortly before the official tasking was issued on October 23, 2000. Once officially tasked to gather range information, the services were given until January 5, 2001, to gather and provide it to DOD for analysis by a DOD contractor. Lacking specific guidance from DOD to inventory their ranges, but recognizing that they would eventually be tasked to gather range information in anticipation of the Range Rule or for the Senate Report, each of the services developed its own survey questionnaires to begin gathering range information before the formal guidance was issued. The Navy took a proactive approach and began developing a questionnaire in late 1999. The questionnaire was issued to the Navy commands in December 1999. The Army and the Air Force also developed their own questionnaires and issued them in September 2000. Because the formal guidance was issued after the services had begun their initial data collection, the services had to collect additional data from their respective units or other sources. According to DOD officials, the training range inventory information gathered from these questionnaires for the March 2001 report will also be used in the future as a basis for financial statement reporting. Although the scope of ranges in the United States and its territories is not fully known—because DOD does not have a complete inventory of training ranges—DOD estimates that over 16 million acres of land on closed, transferred, and transferring ranges are potentially contaminated with unexploded ordnance. DOD also estimates that it has about 1,500 contaminated sites. Many former military range sites were transferred to other federal agencies and private parties. Training ranges must be identified and investigated to determine the type and extent of contamination present, risk assessments performed, cleanup plans developed, and permits obtained before the actual cleanup is begun. These precleanup costs can be substantial. For example, the Navy estimates that these investigative costs alone can run as high as $3.96 million per site. Identifying the complete universe of current and former training ranges is a difficult task. Ranges on existing military bases are more easily identifiable and accessible. More problematic, however, are ranges that existed decades ago and have since been transferred to other agencies or the public, for which records of the ranges' existence or of the ordnance used cannot always be found. Special investigative efforts may be necessary to identify those locations and the ordnance used. In preparing for World War I and World War II, many areas of the country were used as training ranges. In some instances, documentation on the location of and/or the types of ordnance used on these ranges is incomplete or cannot be found. For example, unexploded ordnance was unexpectedly found by a hiker in 1999 at Camp Hale in Colorado, a site used for mountain training during World War II and since transferred to the U.S. Forest Service.
Because additional live rifle grenades were found in 2000, the Forest Service has closed thousands of acres of this forest to public use pending further action. This site also serves as an example of the difficulty of identifying and cleaning up unexploded ordnance in rough mountain terrain and dense ground cover. In addition to not having an accurate and complete inventory of its training ranges, DOD has only recently focused on developing a consistent methodology for estimating its training range cleanup costs. However, DOD is using different methodologies for estimating cleanup costs for the annual financial statements and the March 2001 report. While DOD is using a standard methodology for estimating and reporting its cleanup costs for the March 2001 report, that methodology was not used to estimate the training range cleanup costs for the fiscal year 2000 financial statements. In addition, each of the services is using a different methodology for calculating the cleanup cost estimates used to report its liabilities in the financial statements. Without a consistent methodology, cleanup costs reported in the financial statements and other reports will not be comparable and will have limited value to management when evaluating cleanup costs of each of the services' training ranges and budgeting for the future. Because the military services do not apply a consistent cost methodology to compute the liabilities for their financial statements, any comparison of the training range liabilities across the services will not be meaningful. DOD is reporting a liability of about $14 billion for fiscal year 2000 for cleaning up closed, transferred, and transferring training ranges. Of the $14 billion, the Navy is reporting a liability of $53.6 million. The Navy, based on limited surveys completed in 1995 through 1997, estimated the number and size of its training ranges and applied a $10,000-an-acre cleanup cost factor to compute its liability. The Navy based its estimates on the assumption of cleaning up its closed, transferred, and transferring ranges to a "low" cleanup/remediation level. The low cleanup/remediation level means that the training ranges would be classified as "limited public access" and be used for things such as livestock grazing or wildlife preservation, but not for human habitation. The Army recognized the largest training range cleanup liability for fiscal year 2000. It reported a $13.1 billion liability for cleaning up closed, transferred, and transferring ranges. The $13.1 billion comprised $8 billion to clean up transferred ranges, $4.9 billion for the cleanup of closed ranges, and $231 million for the cleanup of transferring ranges. The Army used an unvalidated cost model to compute the $8 billion cost of cleaning up transferred ranges and used a different cost methodology for estimating the $4.9 billion for closed ranges. The Air Force reported a liability of $829 million for both fiscal years 1999 and 2000 based on a 1997 estimate of 42 closed ranges, using a historical cost basis for estimating its liability. According to DOD officials, DOD has standardized its methodology for estimating and reporting the unexploded ordnance cleanup costs that will be reported in the March 2001 report. DOD's cost model used to compute the unexploded ordnance cleanup costs from its training ranges has not been validated.
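Before turning to the history of DOD's cost model, it is worth making the Navy's per-acre method concrete: the reported liability is simply estimated contaminated acreage multiplied by a flat cost factor. The sketch below is illustrative only; the $10,000-per-acre factor and the $53.6 million liability are the report's figures, while the implied acreage is our own back-of-the-envelope arithmetic, not a number the report states.

```python
NAVY_COST_PER_ACRE = 10_000  # "low" remediation level factor, dollars per acre

def per_acre_liability(acres: float, cost_per_acre: float = NAVY_COST_PER_ACRE) -> float:
    """Navy-style estimate: liability = contaminated acreage x per-acre cost factor."""
    return acres * cost_per_acre

# Backing the acreage out of the Navy's reported $53.6 million liability
# implies roughly 5,360 acres at the $10,000/acre factor.
implied_acres = 53.6e6 / NAVY_COST_PER_ACRE
print(f"{implied_acres:,.0f} acres -> ${per_acre_liability(implied_acres):,.0f}")
```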
The cost model was originally developed by the Air Force in 1991 and has been used by government agencies and the private sector to estimate other environmental cleanup costs not associated with training range cleanup. A new module was recently added to the cost model to estimate costs for removing unexploded ordnance and its constituents from former training ranges. The new module uses cost data developed by the U.S. Army Corps of Engineers from past experience in cleaning up training ranges on formerly used defense sites. DOD officials told us that they believe this model is the best one available to compute the cleanup costs. However, the assumptions and cost factors used in the model were not independently validated to ensure accurate and reliable estimates. DOD Instruction 5000.61 requires that cost models such as this be validated to ensure that the results produced can be relied upon. We did not evaluate this model, but we were informed that DOD is in the process of developing and issuing a contract to have the model validated. A DOD official also informed us that DOD is currently considering requiring that the cost model be used as a standard for the military services' valuation of the cleanup cost estimates used to report liabilities in the financial statements. Until DOD standardizes and validates the costing methodology used for all training range cleanup estimates and requires its use DOD-wide, it has no assurance that the military services will compute their cleanup costs using the same methodology. As a result, the services will in all probability continue to produce unreliable and differing estimates for their various reporting requirements. DOD lacks leadership in reporting on the cleanup costs of training ranges. DUSD(ES) was created in 1993 as the office responsible for environmental cleanup within DOD. However, this office has focused its principal efforts on the cleanup of other types of environmental contamination, not unexploded ordnance. Although requirements for reporting a training range environmental liability have existed for years, DOD has not established adequate or consistent policies to reliably develop the cost of the cleanup of training ranges and to oversee these costing efforts. Similar to the problems noted previously in this report concerning the inventory delays and lack of guidance, the Defense Science Board reported in 1998 that DOD had not met its management responsibility for unexploded ordnance cleanup. It reported that there were no specific DOD-wide unexploded ordnance cleanup goals, objectives, or management plans. The report went on to say that unexploded ordnance cleanup decisions are made within the individual services, where remediation requirements are forced to compete against traditional warfighting and toxic waste cleanup requirements. This competition has resulted in unexploded ordnance cleanup efforts being relegated to "house-keeping duties" at the activity or installation level, according to the Board's report. To address DOD's unmet management responsibilities for unexploded ordnance cleanup, the Defense Science Board recommended the establishment of an Office of the Secretary of Defense focal point for oversight of unexploded ordnance cleanup activities within DOD. This recommendation was made even though DUSD(ES) had overall responsibility for environmental cleanup under the Defense Environmental Restoration Program.
According to the Director of DOD's Environmental Cleanup Program, a single focal point for managing the cleanup of unexploded ordnance has still not been formally designated. A focal point with the appropriate authority could serve as a single point of contact to manage and oversee the development of a complete and accurate training range inventory, the development of a consistent cost methodology across all services, and the reporting of the training range liability for the financial statements and other required reports. The Department of Energy (DOE) has been successful in its identification and reporting of thousands of environmentally contaminated sites, with cleanup liabilities reported at $234 billion in fiscal year 2000. Initially, in the early 1990s, DOE was unable to report the estimated cleanup costs. However, through substantial effort and the support of DOE leadership, DOE was able to receive a clean, or unqualified, audit opinion for its fiscal year 1999 and 2000 financial statements. DOE's efforts provide a useful example to DOD in its efforts to identify and report cost estimates on its contaminated sites. After 50 years of U.S. production of nuclear weapons, DOE was tasked with managing the largest environmental cleanup program in the world. DOE has identified approximately 10,500 release sites from which contaminants could migrate into the environment. DOE has made substantial progress in defining the technical scope, schedules, and costs of meeting this challenge, and in creating a plan to undertake it. DOE officials told us that building a reliable database and management program for contaminated sites requires a significant investment in time and manpower. DOE officials stated that they began their data collection and management process in the early 1990s and are continuing to build and update their database. However, they emphasized that their efforts, similar to DOD's current efforts, started with an initial data call to collect preliminary information to identify the sites. They said the next step involved sending teams to each of the sites to actually visit and observe the site, sometimes taking initial samples, to further identify and confirm the contaminants, and to help assess the risk associated with the site contamination. The information gathered was entered into a central database in 1997 to be used for management and reporting purposes. In 1999, DOE completed entering baseline data for all known cleanup sites. In addition to the above steps, once a site was selected for cleanup, a much more involved process was undertaken to further test for and remove the contaminants. However, until a site is fully cleaned up, the site and its cost estimates are reviewed annually and any changes in conditions are recorded in the central database. DOE officials told us that in addition to providing the necessary leadership and guidance to inventory and manage their sites, another key to this success was establishing a very close working relationship between the program office and the financial reporting office to ensure consistent and accurate reporting of their cleanup liabilities. As military land, including training ranges, is transferred to the public domain, the public must have confidence that DOD has the necessary leadership and information to address the human health and environmental risks associated with training range cleanup. Also, the Congress needs related cost information to make decisions on funding.
DOD’s recent efforts to develop the information needed to report training range cleanup costs for the required March 2001 report represent an important first step in gathering the needed data. However, accurate and complete reporting can only be achieved if DOD compiles detailed inventory information on all of its training ranges and uses a consistent and valid cost methodology. Because of the complexity of the data gathering process and the many issues involved in the cleanup of training ranges, top management leadership and focus is essential. A senior-level official with appropriate management authority and resources is key to effectively leading these efforts to produce meaningful and accurate reports on training range cleanup costs. We recommend that the Secretary of Defense designate a focal point with the appropriate authority to oversee and manage the reporting of training range liabilities. We also recommend that the Secretary of Defense require the designated focal point to work with the appropriate DOD organizations to develop and implement guidance for inventorying all types of training ranges, including active, inactive, closed, transferred, and transferring training ranges. We recommend that this guidance, at a minimum, include the following requirements: key site characterization information for training ranges be collected for unexploded ordnance removal; identification of other constituent contamination in the soil and/or water; performance time frames, including the requirements to perform the necessary site visits to confirm the type and extent of contamination; and the necessary policies and procedures for the management and maintenance of the inventory information. We further recommend that the Secretary of Defense require the designated focal point to work with the appropriate DOD organizations to develop and implement a consistent and standardized methodology for estimating training range cleanup costs to be used in reporting its training range cleanup liabilities in DOD’s agency-wide annual financial statements and other reports as required. In addition, we recommend that the Secretary of Defense require that the designated focal point validate the cost model in accordance with DOD Instruction 5000.61. Further, we recommend that the Secretary of Defense require the DOD Comptroller to revise the DOD Financial Management Regulation to include guidance for recognizing and reporting a liability in the financial statements for the cleanup costs on active and inactive ranges when such costs meet the criteria for a liability found in the federal accounting standards. In commenting on a draft of this report, DOD stated that it has made significant progress in estimating and reporting environmental liabilities on its financial statements; however, much work remains to be done. DOD’s response also indicated that as the department increases its knowledge related to this area, the appropriate financial and functional policies will be updated to incorporate more specific guidance for recognizing and reporting environmental liabilities. DOD concurred with our recommendations, but provided several comments in response to our recommendation that the Secretary of Defense require the DOD Comptroller to revise the DOD Financial Management Regulation to include guidance for recognizing and reporting a liability in the financial statements for the cleanup costs on active and inactive ranges when such costs meet the criteria for a liability. 
DOD stated that it revised Volume 6B, Chapter 10, of the DOD Financial Management Regulation to clarify instances when a liability should be recognized for an active or inactive range on an active installation. However, this revision of the DOD Financial Management Regulation does not address the recognition of an environmental liability at active and inactive ranges in accordance with the criteria of SFFAS No. 5. For example, as stated in our report, the total $300 million cleanup cost estimate on the active range at the Massachusetts Military Reservation should be recognized as a liability in accordance with the criteria in SFFAS No. 5. DOD further stated that since it intends to continue to use its active and inactive ranges in the foreseeable future, the removal of ordnance to maintain safety and usability is considered an ongoing maintenance expense. DOD stated that this expense is not accrued as a liability except in those few specific instances in which an environmental response action—beyond what is necessary to keep the range in operation—is probable and the costs of such a response are measurable. Although this position is consistent with SFFAS No. 5, it is not specifically indicated in the DOD Financial Management Regulation. Finally, DOD stated that as the Department gains additional experience in this area, it will review the appropriate chapters in the DOD Financial Management Regulation to determine what, if any, additional specific guidance may need to be included regarding recognizing and reporting liabilities. While we agree that such a review is appropriate, we continue to recommend that the DOD Financial Management Regulation be revised to include guidance for those instances when active and inactive ranges meet the criteria in SFFAS No. 5. DOD also provided several technical comments, which we have incorporated in the report as appropriate. We are sending copies of this report to the Honorable John Spratt, Ranking Minority Member, House Committee on the Budget, and to other interested congressional committees. We are also sending copies to the Honorable Donald H. Rumsfeld, Secretary of Defense; the Honorable David R. Oliver, Acting Under Secretary of Defense for Acquisition, Technology, and Logistics; and the Honorable Mitchell E. Daniels, Jr., Director of the Office of Management and Budget. Copies will be made available to others upon request. Please contact me at (202) 512-9095 if you or your staff have any questions about this report. Other GAO contacts and key contributors to this report are listed in appendix III. Our objectives were to review DOD’s ongoing efforts to (1) collect information on its training ranges and identify issues affecting the successful completion of the inventory and (2) recognize environmental liabilities associated with the cleanup of unexploded ordnance from its training ranges, including DOD’s efforts to develop and implement a methodology for developing cost estimates. The focus of our review was on DOD efforts to collect information on its training ranges and the environmental costs associated with the cleanup of those ranges. As a result, other sites containing unexploded ordnance were not included in the scope of our review. These sites include munitions manufacturing facilities, munitions burial pits, and open burn and open detonation sites used to destroy excess, obsolete, or unserviceable munitions. 
To accomplish these objectives, we: reviewed relevant standards and guidance applicable to environmental liabilities including Statement of Federal Financial Accounting Standards (SFFAS) No. 5, Accounting for Liabilities of the Federal Government; SFFAS No. 6, Accounting for Property, Plant, and Equipment; and DOD Financial Management Regulation, Volume 6B, Chapter 10, and Volume 4, Chapters 13 and 14; reviewed DOD guidance to the military services for performing the training range inventory survey; reviewed the military services’ survey documents used to collect information on training ranges; interviewed officials from the Deputy Under Secretary of Defense for Environmental Security (DUSD(ES)); Director Defense Research and Engineering; U.S. Army Corps of Engineers; and the Army, Navy, and Air Force involved in planning and conducting the data collection efforts and analyzing the data; interviewed an official from the Office of the Under Secretary of Defense (Comptroller); interviewed officials from the U.S. Environmental Protection Agency; interviewed environmental officials from the states of Colorado and Alabama; interviewed officials from the Department of Energy; interviewed the contractor selected by DOD, which assisted in planning and analyzing the data and preparing the cost analysis for the March 2001 report; and visited two locations—Lowry Bombing Range, Denver, and Ft. McClellan, Anniston, Alabama—to gain insight into the complexities involved in estimating liabilities for training range cleanup. We did not audit DOD’s financial statements and therefore we do not express an opinion on any of DOD’s environmental liability estimates for fiscal year 1999 or 2000. We conducted our work in accordance with generally accepted government auditing standards from May 2000 through March 2001. On March 29, 2001, DOD provided us with written comments on our recommendations, which are discussed in the “Agency Comments and Our Evaluation” section and are reprinted in appendix II. DOD also provided comments on several other matters, which we have incorporated in the report as appropriate but have not reprinted. Staff making key contributions to this report were Paul Begnaud, Roger Corrado, Francine DelVecchio, and Stephen Donahue. 
| Because of concerns about the long-term budgetary implications associated with the environmental cleanup of the Department of Defense (DOD) training ranges, GAO examined (1) the potential magnitude of the cost to clean up these ranges in compliance with applicable laws and regulations, (2) the scope and reliability of DOD's training range inventory, and (3) the methodologies used to develop cost estimates. GAO found that DOD lacks complete and accurate data with which to estimate training range cleanup costs. DOD has not done a complete inventory of its ranges to fully identify the types and extent of unexploded ordnance present and the associated contamination. Recently, DOD began to compile training range data, but these initial efforts have been delayed because DOD did not issue formal guidance to the services for collecting the information until October 2000. Because DOD has not completed an inventory of its ranges, the services have used varying methods to estimate the size and condition of the ranges necessary to estimate the cost of cleanup for financial statement purposes. As a result, environmental liability costs are not consistently calculated and reported across the services. |
WASHINGTON — Risking a new breach in relations with Pakistan, the Obama administration is leaning toward designating the Haqqani network, the insurgent group responsible for some of the most spectacular assaults on American bases in Afghanistan in recent years, as a terrorist organization.
With a Congressional reporting deadline looming, Secretary of State Hillary Rodham Clinton and top military officials are said to favor placing sanctions on the network, which operates in Afghanistan and Pakistan, according to half a dozen current and former administration officials.
A designation as a terrorist organization would help dry up the group’s fund-raising activities in countries like Saudi Arabia and the United Arab Emirates, press Pakistan to carry out long-promised military action against the insurgents, and sharpen the administration’s focus on devising policies and operations to weaken the group, advocates say.
But no final decision has been made. A spirited internal debate has American officials, including several at the White House, worried about the consequences of such a designation not only for relations with Pakistan, but also for peace talks with the Taliban and the fate of Sgt. Bowe Bergdahl, the only American soldier known to be held by the militants.
Perhaps the most important consideration, administration and Congressional officials say, is whether the designation would make any difference in the group’s ability to raise money or stage more assaults as the American-led NATO force draws down in Afghanistan. Several Haqqani leaders have already been designated individually as “global terrorists,” so the issue now is what would be gained by designating the entire organization.
An administration official involved in the debate, who declined to speak on the record because of the continuing decision-making process, said, “The optics of designating look great, and the chest-thumping is an understandable expression of sentiment, but everyone has to calm down and say, ‘What does it actually do?’ ”
Mrs. Clinton, in the Cook Islands at the start of a trip to Asia, declined to discuss the internal debate but said she would meet the Congressional deadline in September. “I’d like to underscore that we are putting steady pressure on the Haqqanis,” she said. “That is part of what our military does every day.”
A National Security Council spokeswoman, Caitlin Hayden, would not comment on the administration’s internal deliberations, but hinted in an e-mail on Friday at the White House’s preferences for using other means to pressure the group. “We’ve taken steps to degrade the Haqqani Taliban network’s ability to carry out attacks, including drying up their resources, targeting them with our military and intelligence resources, and pressing Pakistan to take action,” the e-mail said.
Critics also contend that a designation by the Treasury Department or the United Nations, or under an existing executive order, could achieve the same result as adding the network to the much more prominent State Department list, with far fewer consequences.
The internal debate has been so divisive that the United States intelligence community has been assigned to prepare classified analyses on the possible repercussions of a designation on Pakistan. “The whole thing is absurd,” said one senior American official who has long favored designating the group, expressing frustration with the delay.
The administration has debated the designation for more than a year, with senior military officers like Gen. John R. Allen, commander of American and NATO troops in Afghanistan, and many top counterterrorism officials arguing for it.
This year, bipartisan pressure in Congress to add the group to the terrorist list has grown. “It is well past time to designate this network as a terrorist group,” Senator Dianne Feinstein, the California Democrat who is chairwoman of the Intelligence Committee, said in July.
With virtually unanimous backing, Congress approved legislation that President Obama signed into law on Aug. 10 giving Mrs. Clinton 30 days to determine whether the Haqqani network is a terrorist group. If she says it is not, she must explain her reasoning in a report to lawmakers by Sept. 9.
On one level, the decision seems clear-cut. Since 2008, Haqqani suicide attackers in Afghanistan have struck the Indian Embassy, hotels and restaurants, the headquarters of the NATO-led International Security Assistance Force, and the American Embassy.
A recent report by the Combating Terrorism Center at West Point described how the Haqqani network had evolved into a “sophisticated, diverse and transnational crime network.”
In a paper for the Heritage Foundation, Lisa Curtis, a senior research fellow at the foundation and a former C.I.A. analyst on South Asia, said, “The U.S. should stand by its counterterrorism principles and identify this deadly terrorist organization for what it is.”
American officials confirmed this week that a senior member of the Haqqani family leadership, Badruddin Haqqani, the network’s operational commander, was killed last week in a drone strike in Pakistan’s tribal areas.
Opponents cite several reasons that designating the Haqqani network a terrorist organization could further complicate relations between the United States and Pakistan, just as relations are getting back on track after months of grueling negotiations that finally reopened NATO supply routes through Pakistan.
One reason, officials said, is that such a move would seem to bring Pakistan a step closer to being designated as a state sponsor of terrorism. American officials say the Pakistani military’s Inter-Services Intelligence Directorate is covertly aiding the insurgents. Pakistani officials have said that the agency maintains regular contact with the Haqqanis, but deny that it provides operational support. They contend that the Obama administration is trying to deflect attention from its own failings in Afghanistan.
In his meetings at the Central Intelligence Agency in early August, Pakistan’s new spy chief, Lt. Gen. Zahir ul-Islam, told the C.I.A. director, David H. Petraeus, that his country would not protest the designation, if it was given. Two other Pakistani officials said this week that the decision was “an internal American issue.” American analysts believe Pakistan would be reluctant to publicly protest the designation because to do so would substantiate American beliefs that Pakistan supports the Haqqanis.
Critics also voice concern that designating the Haqqani network could undermine peace talks with the Taliban and complicate efforts to win the release of Sergeant Bergdahl.
The main American effort to open negotiations with the Taliban remains centered on the talks in Qatar, where Taliban representatives are supposed to be opening an office. But those talks were suspended by the insurgents in March, largely over a delayed prisoner swap for Sergeant Bergdahl, held by the Haqqani network since 2009. The United States would have released five insurgents from Guantánamo Bay, Cuba, to win his release.
“A designation makes negotiating with the Taliban harder, and would add another layer of things to do to build confidence in order to restart negotiations,” said Shamila N. Chaudhary, a South Asia analyst at the Eurasia Group who was the director for Pakistan and Afghanistan at the National Security Council. ||||| A Pakistani Taliban militant holds a rocket-propelled grenade at the Taliban stronghold of Shawal in Waziristan, Pakistan. (Ishtiaq Mahsud/AP)
Just days before a congressional deadline, the Obama administration is deeply divided over whether to designate the Pakistan-based Haqqani network as a terrorist group, with some officials worried that doing so could complicate efforts to restart peace talks with the Taliban and undermine already-fraught relations with Pakistan.
In early August, Congress gave Secretary of State Hillary Rodham Clinton 30 days to determine whether the Haqqani group, considered the most lethal opponent of U.S. forces in Afghanistan, meets the criteria for designation — a foreign organization engaging in terrorist activity that threatens U.S. citizens or national security.
If she says it does not, Clinton must explain her rationale in a report that is due to Congress on Sept. 9. Acknowledgment that the group meets the criteria, however, would probably force the administration to take action, which is strongly advocated by the military but has been resisted by the White House and some in the State Department.
Senior officials have repeatedly called the Haqqani network the most significant threat to the U.S. goal of exiting a relatively peaceful Afghanistan by the end of 2014 and have accused Pakistan of direct support for its leadership. The network has conducted a series of lethal, high-profile attacks against U.S. targets.
In recent weeks, the military has reiterated its call for Pakistan to prove its counterterrorism commitment by attacking Haqqani sanctuaries in its North Waziristan tribal area. The CIA has escalated drone attacks on Haqqani targets, including a strike last week that administration officials said killed the son of the network’s founder and its third-ranking official.
But just as there are reasons to designate the network a terrorist group, there are several factors weighing against the move, according to officials who spoke on the condition of anonymity about the administration’s closed-door deliberations.
Those factors include a tenuous rapprochement with Pakistan that led in early July to the reopening of vital U.S. military supply lines into Afghanistan; hopes that the autumn end of this year’s Afghan fighting season will bring the Taliban back to the negotiating table after the suspension of talks in March; and a reconfigured U.S. offer on a prisoner exchange that could lead to the release of the only U.S. service member being held by the militants.
After a White House meeting last week in which President Obama’s top national security advisers aired divergent views, Clinton is said to remain undecided as aides prepare a list of options. She has avoided taking action on the issue since assuring lawmakers late last year that she was undertaking a “final” review.
U.S. commanders in Afghanistan have long argued that labeling the Haqqani group a Foreign Terrorist Organization — a relatively short list of about four dozen entities that does not include the Taliban — is one of the most important steps the administration could take to win the war.
In a series of meetings and video conferences with Washington, Gen. John Allen, the top U.S. commander in Afghanistan, has said he “needs more tools” to fight the Haqqanis and asked specifically for the designation, one administration official said.
A recent report by the Combating Terrorism Center at the U.S. Military Academy at West Point called the Haqqani network “an efficient, transnational jihadi industry” that has “penetrated key business sectors, including import-export, transport, real estate and construction in Afghanistan, Pakistan, the Arab Gulf and beyond” and could be effectively undermined by designation as a terrorist organization.
Designation is “a key first step toward actively targeting the group’s international financial activity and support network,” according to the American Enterprise Institute’s Critical Threats unit. The unit is headed by Frederick W. Kagan, a leading counterterrorism adviser to the U.S. military in Afghanistan and to Allen’s predecessor, current CIA Director David H. Petraeus.
But others in the White House and State Department argue that the designation would be largely for show and would have little substantive effect.
Individual Haqqani leaders have already been designated as terrorists, and U.S. entities are prohibited from dealing with them. Separate designations, by the Treasury Department or the United Nations, or under an existing executive order, could achieve the same result as adding the network to the far more prominent State Department list.
Drawing a line between the Haqqanis and the Taliban will only make peace negotiations harder, said a second U.S. official who opposes designation. Administration policy “heavily depends on a political solution,” this official said. “Why not do everything we can to promote that? Why create one more obstacle, which is largely symbolic in nature?”
These officials fundamentally disagree with the assessment that the Haqqanis are a separate entity from the Taliban and are irreconcilable, and argue that the military is using the Haqqanis as an excuse to mask its own difficulties in the war.
For its part, Pakistan’s powerful military insists that it has no preference in what one senior officer called “an internal matter for the United States to decide.” But while it denies U.S. charges of complicity with or control over the Haqqanis, there is little doubt the Pakistanis are closest to the Haqqani group within the Taliban organization, and they have pressed for its inclusion in any peace negotiations.
“From our point of view, reconciliation has to be very broad-based,” the military official said. “The Haqqanis are not an individual entity. . . . They’re a part of the conversation, whatever that is.”
Although intelligence assessments differ and the groups appear operationally independent to some degree, the Haqqani organization is generally considered to be one of three subgroups under the overall leadership of Taliban chief Mohammad Omar and the top-level council he heads in Quetta, in southern Pakistan.
Talks between U.S. and Taliban officials that began in late 2010 were suspended in March when the militants charged that the Americans had altered the terms of a potential prisoner swap in which five Taliban members held at the U.S. naval base at Guantanamo Bay, Cuba, would be exchanged in two groups for Sgt. Bowe Bergdahl, the U.S. soldier held since 2009 by the Haqqanis.
Underlying the stated reason for the suspension, U.S. officials believe that the Taliban is split on many levels over the talks — between field commanders and Pakistan-based leaders, between different factions and among individuals vying for power in a future Afghanistan.
But the Haqqanis’ provision of a proof-of-life video of Bergdahl, delivered through Taliban negotiators in February 2011, as well as a public profession of fealty to Omar by the Haqqani leadership in September, convinced some officials that a deal with one was tantamount to a deal with the other.
In June, the Americans transmitted a new offer to the Taliban, through the government of Qatar, in which Bergdahl’s release would come with the release of the second Guantanamo group rather than the first. They do not expect a response until after the end of the summer fighting season. | Hillary Clinton has a big decision to make by Sept. 9: whether to formally designate the Haqqani Network in Pakistan a terrorist organization. It might seem like a no-brainer considering the network is blamed for most of the anti-US attacks in Afghanistan, but the Washington Post and the New York Times say administration officials fear it could wind up being a largely symbolic move that would do nothing but foul up fragile US-Pakistan relations just as they're showing signs of improvement. The Post has Clinton still "undecided," while the Times has her undecided but leaning in favor of the designation. The latter would please military officials, who think it would help in the fight against the network. Other US officials say it might jeopardize hopes of getting the Taliban back to peace talks as well as the prospects of freeing captive American soldier Bowe Bergdahl. |
Here’s another way that Mark Zuckerberg is following his idol Steve Jobs: He will have a $1 salary, starting in 2013.
According to Facebook’s S-1 filing for its $5 billion IPO, in the first quarter of this year (which presumably means sometime in January) Zuckerberg requested that his base salary be reduced to $1 per year, effective January 1, 2013. His 2011 base salary was $500,000, and he also received a $220,500 bonus for the first half of the year. The S-1 also lists $783,529 in “other compensation”, which includes $692,679 for “costs related to personal use of aircraft chartered in connection with his comprehensive security program and on which family and friends flew during 2011”.
Of course, Zuckerberg’s real compensation is his 28 percent stake in the company. ||||| Facebook just filed its S-1, and as we predicted yesterday, it's got a letter from CEO and founder Mark Zuckerberg in it.
Among the highlights: "we don’t build services to make money; we make money to build better services."
It's like an updated spin on Google's "Don't Be Evil" message from its own 2004 IPO filing.
Later, he explains that Facebook is going public to return money to early investors, not because it needs the money (the company has almost $4 billion in cash):
"We’re going public for our employees and our investors. We made a commitment to them when we gave them equity that we’d work hard to make it worth a lot and make it liquid, and this IPO is fulfilling our commitment."
He also talks about "The Hacker Way," explaining that hackers aren't evil people who break into computers. Rather, it's an approach to the world:
"The Hacker Way is an approach to building that involves continuous improvement and iteration. Hackers believe that something can always be better, and that nothing is ever complete. They just have to go fix it — often in the face of people who say it’s impossible or are content with the status quo."
Here's the full text:
Facebook was not originally created to be a company. It was built to accomplish a social mission — to make the world more open and connected.
We think it’s important that everyone who invests in Facebook understands what this mission means to us, how we make decisions and why we do the things we do. I will try to outline our approach in this letter.
At Facebook, we’re inspired by technologies that have revolutionized how people spread and consume information. We often talk about inventions like the printing press and the television — by simply making communication more efficient, they led to a complete transformation of many important parts of society. They gave more people a voice. They encouraged progress. They changed the way society was organized. They brought us closer together.
Today, our society has reached another tipping point. We live at a moment when the majority of people in the world have access to the internet or mobile phones — the raw tools necessary to start sharing what they’re thinking, feeling and doing with whomever they want. Facebook aspires to build the services that give people the power to share and help them once again transform many of our core institutions and industries.
There is a huge need and a huge opportunity to get everyone in the world connected, to give everyone a voice and to help transform society for the future. The scale of the technology and infrastructure that must be built is unprecedented, and we believe this is the most important problem we can focus on.
We hope to strengthen how people relate to each other.
Even if our mission sounds big, it starts small — with the relationship between two people.
Personal relationships are the fundamental unit of our society. Relationships are how we discover new ideas, understand our world and ultimately derive long-term happiness.
At Facebook, we build tools to help people connect with the people they want and share what they want, and by doing this we are extending people’s capacity to build and maintain relationships.
People sharing more — even if just with their close friends or families — creates a more open culture and leads to a better understanding of the lives and perspectives of others. We believe that this creates a greater number of stronger relationships between people, and that it helps people get exposed to a greater number of diverse perspectives.
By helping people form these connections, we hope to rewire the way people spread and consume information. We think the world’s information infrastructure should resemble the social graph — a network built from the bottom up or peer-to-peer, rather than the monolithic, top-down structure that has existed to date. We also believe that giving people control over what they share is a fundamental principle of this rewiring.
We have already helped more than 800 million people map out more than 100 billion connections so far, and our goal is to help this rewiring accelerate.
We hope to improve how people connect to businesses and the economy.
We think a more open and connected world will help create a stronger economy with more authentic businesses that build better products and services.
As people share more, they have access to more opinions from the people they trust about the products and services they use. This makes it easier to discover the best products and improve the quality and efficiency of their lives.
One result of making it easier to find better products is that businesses will be rewarded for building better products — ones that are personalized and designed around people. We have found that products that are “social by design” tend to be more engaging than their traditional counterparts, and we look forward to seeing more of the world’s products move in this direction.
Our developer platform has already enabled hundreds of thousands of businesses to build higher-quality and more social products. We have seen disruptive new approaches in industries like games, music and news, and we expect to see similar disruption in more industries by new approaches that are social by design.
In addition to building better products, a more open world will also encourage businesses to engage with their customers directly and authentically. More than four million businesses have Pages on Facebook that they use to have a dialogue with their customers. We expect this trend to grow as well.
We hope to change how people relate to their governments and social institutions.
We believe building tools to help people share can bring a more honest and transparent dialogue around government that could lead to more direct empowerment of people, more accountability for officials and better solutions to some of the biggest problems of our time.
By giving people the power to share, we are starting to see people make their voices heard on a different scale from what has historically been possible. These voices will increase in number and volume. They cannot be ignored. Over time, we expect governments will become more responsive to issues and concerns raised directly by all their people rather than through intermediaries controlled by a select few.
Through this process, we believe that leaders will emerge across all countries who are pro-internet and fight for the rights of their people, including the right to share what they want and the right to access all information that people want to share with them.
Finally, as more of the economy moves towards higher-quality products that are personalized, we also expect to see the emergence of new services that are social by design to address the large worldwide problems we face in job creation, education and health care. We look forward to doing what we can to help this progress.
Our Mission and Our Business
As I said above, Facebook was not originally founded to be a company. We’ve always cared primarily about our social mission, the services we’re building and the people who use them. This is a different approach for a public company to take, so I want to explain why I think it works.
I started off by writing the first version of Facebook myself because it was something I wanted to exist. Since then, most of the ideas and code that have gone into Facebook have come from the great people we’ve attracted to our team.
Most great people care primarily about building and being a part of great things, but they also want to make money. Through the process of building a team — and also building a developer community, advertising market and investor base — I’ve developed a deep appreciation for how building a strong company with a strong economic engine and strong growth can be the best way to align many people to solve important problems.
Simply put: we don’t build services to make money; we make money to build better services.
And we think this is a good way to build something. These days I think more and more people want to use services from companies that believe in something beyond simply maximizing profits.
By focusing on our mission and building great services, we believe we will create the most value for our shareholders and partners over the long term — and this in turn will enable us to keep attracting the best people and building more great services. We don’t wake up in the morning with the primary goal of making money, but we understand that the best way to achieve our mission is to build a strong and valuable company.
This is how we think about our IPO as well. We’re going public for our employees and our investors. We made a commitment to them when we gave them equity that we’d work hard to make it worth a lot and make it liquid, and this IPO is fulfilling our commitment. As we become a public company, we’re making a similar commitment to our new investors and we will work just as hard to fulfill it.
The Hacker Way
As part of building a strong company, we work hard at making Facebook the best place for great people to have a big impact on the world and learn from other great people. We have cultivated a unique culture and management approach that we call the Hacker Way.
The word “hacker” has an unfairly negative connotation from being portrayed in the media as people who break into computers. In reality, hacking just means building something quickly or testing the boundaries of what can be done. Like most things, it can be used for good or bad, but the vast majority of hackers I’ve met tend to be idealistic people who want to have a positive impact on the world.
The Hacker Way is an approach to building that involves continuous improvement and iteration. Hackers believe that something can always be better, and that nothing is ever complete. They just have to go fix it — often in the face of people who say it’s impossible or are content with the status quo.
Hackers try to build the best services over the long term by quickly releasing and learning from smaller iterations rather than trying to get everything right all at once. To support this, we have built a testing framework that at any given time can try out thousands of versions of Facebook. We have the words “Done is better than perfect” painted on our walls to remind ourselves to always keep shipping.
Hacking is also an inherently hands-on and active discipline. Instead of debating for days whether a new idea is possible or what the best way to build something is, hackers would rather just prototype something and see what works. There’s a hacker mantra that you’ll hear a lot around Facebook offices: “Code wins arguments.”
Hacker culture is also extremely open and meritocratic. Hackers believe that the best idea and implementation should always win — not the person who is best at lobbying for an idea or the person who manages the most people.
To encourage this approach, every few months we have a hackathon, where everyone builds prototypes for new ideas they have. At the end, the whole team gets together and looks at everything that has been built. Many of our most successful products came out of hackathons, including Timeline, chat, video, our mobile development framework and some of our most important infrastructure like the HipHop compiler.
To make sure all our engineers share this approach, we require all new engineers — even managers whose primary job will not be to write code — to go through a program called Bootcamp where they learn our codebase, our tools and our approach. There are a lot of folks in the industry who manage engineers and don’t want to code themselves, but the type of hands-on people we’re looking for are willing and able to go through Bootcamp.
The examples above all relate to engineering, but we have distilled these principles into five core values for how we run Facebook:
Focus on Impact
If we want to have the biggest impact, the best way to do this is to make sure we always focus on solving the most important problems. It sounds simple, but we think most companies do this poorly and waste a lot of time. We expect everyone at Facebook to be good at finding the biggest problems to work on.
Move Fast
Moving fast enables us to build more things and learn faster. However, as most companies grow, they slow down too much because they’re more afraid of making mistakes than they are of losing opportunities by moving too slowly. We have a saying: “Move fast and break things.” The idea is that if you never break anything, you’re probably not moving fast enough.
Be Bold
Building great things means taking risks. This can be scary and prevents most companies from doing the bold things they should. However, in a world that’s changing so quickly, you’re guaranteed to fail if you don’t take any risks. We have another saying: “The riskiest thing is to take no risks.” We encourage everyone to make bold decisions, even if that means being wrong some of the time.
Be Open
We believe that a more open world is a better world because people with more information can make better decisions and have a greater impact. That goes for running our company as well. We work hard to make sure everyone at Facebook has access to as much information as possible about every part of the company so they can make the best decisions and have the greatest impact.
Build Social Value
Once again, Facebook exists to make the world more open and connected, and not just to build a company. We expect everyone at Facebook to focus every day on how to build real value for the world in everything they do.
Thanks for taking the time to read this letter. We believe that we have an opportunity to have an important impact on the world and build a lasting company in the process. I look forward to building something great together.
||||| Zuckerberg Is the Billion-Share Man: Who Owns What, Who Makes What in the Facebook IPO
In what is probably the most anticipated document ever received by the U.S. Securities and Exchange Commission, Facebook has filed to take its company public in a $5 billion initial public offering later this year.
After lots of speculation around who owns exactly how much, there are now some hard numbers from the S-1 filing that hit the SEC Web site today.
Here are some details of the cap table, which is made up of both Class A and Class B shares, which carry different voting powers. Class B are the common shares.
Not surprisingly, co-founder and CEO Mark Zuckerberg owns the most equity of any single person. His 1.1 billion Class B shares give him almost a 57 percent stake — about half of which he owns and half of which are owned by others but over which he exercises proxy voting authority.
The 27-year-old entrepreneur also holds 42.2 million Class A shares, which represents a 36.1 percent stake of that group.
Facebook’s filing said he will sell some shares in the IPO, although it doesn’t specify how many. However, it noted that most of the proceeds from the sale will go toward paying taxes on his exercise of options to purchase 120 million shares of Class B common stock.
Peter Thiel, the PayPal founder and CEO who sold that company to eBay, owns a 2.5 percent stake, which has decreased over time from the more than 10 percent he owned as Facebook’s first angel investor. Not bad for a $500,000 investment made in 2004.
Jim Breyer of Accel Partners controls more than 201 million Class B shares, amounting to a little more than 11 percent of Facebook’s equity, having led Accel’s participation in Facebook’s $25 million Series A way back in 2006. Breyer is also a personal investor; 11.7 million shares are his, and about 190 million shares are Accel’s.
Digital Sky Technologies, the Russian investment firm, has 5.4 percent of the Class B equity, or 94.6 million shares, owing to its $200 million investment in 2009, plus an additional $500 million in 2011. DST has also been buying Facebook shares from existing shareholders, allowing some to cash out their equity. It also has 36.7 million Class A shares, or 31.4 percent.
Goldman Sachs has a sizable 66-million share slice of Class A shares, or 56.3 percent. Last year, it was involved in the $1.5 billion round that included DST.
There’s also a batch of individuals with single-digit stakes and smaller ones of note.
Dustin Moskovitz: The Facebook co-founder owns 7.6 percent of the company or 133.8 million Class B shares.
Zuckerberg’s father, Edward Zuckerberg, a dentist, was rewarded with two million Class B shares in consideration for his providing early start-up capital to his son in 2004 and 2005. He was given an option to purchase the shares, but the option expired a year after it was given to him, without his exercising it. The board of directors — minus Mark Zuckerberg — issued the 2 million shares to Glate LLC, a company controlled by the elder Zuckerberg.
There are also a few names of note who don’t appear in the filing:
Microsoft had bought a stake amounting to 1.6 percent, stemming from its $240 million investment in 2007, which included a strategic alliance for advertising. Its stake, now likely less, is not mentioned in the filing.
Eduardo Saverin, the Brazilian co-founder, also has a stake worth a few points, but he is not mentioned in its cap tables.
And other VC investors, Greylock Partners and Meritech Capital Partners, are barely mentioned as well. T. Rowe Price owns 6 million Class A shares, or 5.2 percent, as well as 12.1 million Class B shares.
The filing shows that the top five highest compensated employees are:
Mark Zuckerberg’s base salary is $500,000 a year. Effective January 1, 2013, his salary will be reduced to $1 per year. He has options to buy 120 million additional shares of the class B common stock at a strike price of six cents a share, which expires in November of 2015.
COO Sheryl Sandberg made $382,000 in salary and bonuses in 2011 and received $30.5 million in stock awards. She has options to buy 4.7 million shares, of which 3.5 million have a strike price of $10.39, and 1.2 million a strike price of $15. Sandberg already holds 1.9 million Class B common shares and 39.3 million restricted stock units, too.
David Ebersman, the quiet CFO whom AllThingsD profiled yesterday, made $382,000 in salary and bonuses and has 2.2 million Class B shares and 7.5 million RSUs.
Mike Schroepfer, vice president of engineering, made $334,000 in salary and bonuses last year. He holds 2.1 million in Class B shares and 6.1 million RSUs.
VP and general counsel Theodore W. Ullyot also makes $275,000 per year and is eligible for a $400,000 annual retention bonus during the first five years of his employment, through 2013. He has about 1.9 million shares and exercisable options and holds 3.8 million RSUs.
Then, there are Facebook’s outside directors: Venture capitalist Marc Andreessen; Erskine Bowles, the former White House Chief of Staff under Bill Clinton; Breyer; Washington Post CEO Don Graham; Netflix CEO Reed Hastings; and Thiel. Each was paid $16,700 in fees for sitting on the board. Bowles got 601,400 shares, while Hastings got 593,400 shares.
Andreessen also has 5.3 million RSUs that vest over four years. Graham has one million RSUs. Bowles and Hastings also each have 20,000 RSUs. |||||
Facebook Inc.'s impending initial public offering could yield 27-year-old founder Mark Zuckerberg a fortune valued at $21 billion to $28 billion.
According to IPO paperwork Facebook filed Wednesday, Mr. Zuckerberg owns about 28% of the soon-to-be-public company, and is its single largest shareholder. If Facebook raises money at a high-end valuation of $100 billion, Mr. Zuckerberg's stock would be worth $28 billion.
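As a back-of-envelope check (simple arithmetic on the figures above, not numbers taken from the filing itself), the reported range follows directly from the 28 percent stake:

\[
0.28 \times \$75\,\text{billion} = \$21\,\text{billion}, \qquad 0.28 \times \$100\,\text{billion} = \$28\,\text{billion}
\]

In other words, the $21 billion to $28 billion spread brackets overall valuations of roughly $75 billion to $100 billion, with the low end implied rather than stated in the filing.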
On top of his stock, last year Mr. Zuckerberg was paid $1.49 million in salary, bonus and other compensation for his role as chief executive, according to the regulatory filing.
Facebook's filing added that Mr. Zuckerberg will sell ... | So just how rich will Mark Zuckerberg be after Facebook's IPO? Very, very rich. Today's filing shows that he is the company's biggest shareholder with a 28% stake, reports the Wall Street Journal. If Facebook gets its high-end valuation of $100 billion when the stock debuts this spring, he's worth $28 billion, which the Journal notes would put him at No. 9 on the Forbes list of the world's wealthiest people. Last year, Zuckerberg earned about $1.5 million in salary, bonus, and other compensation, though he's going to take a base salary of $1 starting next year, notes TechCrunch. (Read his full letter explaining Facebook's mission in the SEC filing here.) AllThingsD, meanwhile, has the nitty gritty on what other top Facebook execs made last year, including COO Sheryl Sandberg: $382,000 in salary and bonuses, plus $30.5 million in stock awards. |
Moreover, Mr. Trump’s surge is coming very late in the campaign, at a point where advertising rates climb and the chance to invest in a long-term digital and campaign infrastructure is long past.
And Mrs. Clinton’s own fund-raising operation is rapidly expanding as well. In a Twitter post on Wednesday, a spokesman for Mrs. Clinton said that her campaign and a joint fund-raising operation with the Democratic National Committee had $102 million on hand, not including cash held directly by the party.
But Mr. Trump’s announcement suggests that after months of dithering and false starts, he has begun to exploit an opportunity: marrying his powerful credibility among grass-roots Republicans with targeted small-donor fund-raising, particularly online, where Mr. Trump’s website features buttons soliciting $50, $25 and even $10 contributions.
At the end of May, Mr. Trump reported barely more than $1.3 million in cash, alarming Republicans, who feared a financial rout by Mrs. Clinton.
Mitt Romney, the party’s 2012 nominee and a wealthy man in his own right, was never able to stoke intense enthusiasm among small donors and relied disproportionately on big ones. During July of that year, for example, Mr. Romney and the Republican National Committee reported raising a total of just $19 million from contributions of less than $200.
Mr. Trump was able to ramp up quickly in part through a digital operation set up by the R.N.C. since that campaign. Even before Mr. Trump was the nominee, the party built out its email list and tested ways of targeting small donors.
With that in place, party officials unleashed a pent-up desire by rank-and-file Republicans to donate to a candidate who has bluntly attacked lobbyists and big donors. While Mr. Trump accepted online donations during the primary season, he did not send out an email solicitation until late June — which brought in $3 million alone, an indication of the well of money available to him.
The campaign has also raised money by promising to match small donations out of Mr. Trump’s pocket, a tactic available only to wealthy candidates.
“There was always that potential, but you didn’t have candidates who were as uniquely positioned in the same way that Trump is,” said Patrick Ruffini, a Republican strategist who ran digital fund-raising at the Republican National Committee under President George W. Bush.
But Mr. Trump’s surge also emphasizes the complication for Republicans in having him at the head of their party. He is relying more on small-donor fund-raising in part because he has faced opposition from some of the party’s biggest patrons, such as Meg Whitman, a California business executive, who said Monday that she was so disgusted with Mr. Trump that she would vote for Mrs. Clinton.
To bolster his low-dollar fund-raising, Mr. Trump and his team are now working to assuage the broader pool of affluent Republican donors and fund-raisers. In recent weeks, Mr. Trump has laid off his criticisms of the party’s donor class and scheduled an array of formal fund-raising events for Republican donors in money centers like Florida and New York.
Moreover, even as his name and followers are helping fund Republican get-out-the-vote efforts around the country, Mr. Trump is feuding with the party’s senior leadership, pointedly refusing to endorse prominent Republicans facing Trump-inspired primary opponents, such as the one challenging Representative Paul D. Ryan of Wisconsin, the House speaker.
And it is the Republican National Committee that is providing much of the technical expertise that has allowed Mr. Trump to quickly increase his low-dollar fund-raising, some Republican officials said.
Even as relations fray between Mr. Trump and some fellow Republicans, the party and Mr. Trump each needs the other. And Mr. Trump, as the nominee and the fund-raising tent pole for the party, may have the upper hand.
“Under normal circumstances, the party would have money as leverage,” Mr. Ruffini said. “They could cut off fund-raising to a candidate who misbehaves. And that leverage has been taken completely away.” |||||
By Chuck Todd and Mark Murray
First Read is a morning briefing from Meet the Press and the NBC Political Unit on the day's most important political stories and why they matter.
Hillary Clinton has owned the 2016 airwaves for two straight months
Exactly two months ago, Hillary Clinton’s campaign went up with its first general election TV ads in battleground states, and in that time it has spent $61 million over the airwaves, while pro-Clinton outside groups have chipped in an additional $43 million. That’s a combined $104 million in total ad spending for Team Clinton.
But in that same time frame, Donald Trump’s campaign still hasn’t spent a single cent on a general-election ad, with two pro-Trump outside groups coming to the rescue with $12.4 million over the airwaves. That’s a nearly 9-to-1 advantage in ad spending. And it raises some important questions for the Trump campaign. When will it FINALLY start airing advertisements (with him trailing in key states and nationally, and just 84 days to go until Election Day)? What is Trump doing with his campaign money (after the New York Times reported two weeks ago that Trump and the GOP had raised a combined $82 million last month)? And will any other outside groups come to Trump’s defense? Political scientists, you now have an amazing case study on your hands: What happens in a presidential race when one side owns the airwaves for two straight months?
Oh, and get this: The Green Party’s Jill Stein ($189,000) and Libertarian nominee Gary Johnson ($15,000) have spent more on ads than the Trump campaign ($0) in this general election.
Total Clinton ad spending so far:
Clinton campaign: $61 million
Clinton outside groups: $43 million
Total Team Clinton: $104 million
Total Trump ad spending so far:
Trump campaign: $0
Trump outside groups: $12.4 million
Total Team Trump: $12.4 million
SOURCE: Advertising Analytics/NBC News
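For reference, the "nearly 9-to-1" advantage cited above is just the ratio of the two totals (a quick arithmetic check, not a figure from Advertising Analytics):

\[
\frac{\$104\ \text{million}}{\$12.4\ \text{million}} \approx 8.4
\]

which the briefing rounds to roughly nine to one.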
NBC|SurveyMonkey tracking poll
Clinton 50%, Trump 41%: Meanwhile, the latest weekly national NBC|SurveyMonkey online tracking poll shows Hillary Clinton leading Donald Trump by nine points, 50%-41% -- virtually unchanged from last week’s 51%-41% advantage for Clinton. Also from the poll: “[M]ore than four in 10 voters said she has the personality and temperament to serve effectively as president. This includes 39 percent of Democrats and Democratic-leaners and 23 percent of Independents who don't lean toward any party. Trump scores much lower. Just 17 percent of all voters say that Trump has the personality and temperament to serve effectively as president. Even among Republican and Republican-leaners, only 19 percent said Trump has the personality to serve effectively.”
The Education Gap -- revisited
Want to know why the 2016 presidential race, according to the most recent round of NBC/WSJ/Marist polling, is closer in Iowa and Ohio than in Colorado or Virginia? Or why Team Clinton (the campaign + Super PAC) has stopped advertising in Colorado and Virginia? So much of it has to do with the composition of whites without college degrees in a particular state. The higher the percentage, the better for Trump. The lower, the worse. The numbers below are from our Hart-McInturff team that conducts the national NBC/WSJ poll -- the first column shows the percentage of whites age 25+ without a degree and the second column is the percentage that group is among all adults age 25+ in each state.
[Table: percentage of whites age 25+ without a college degree, and that group's share of all adults age 25+, by state. Source: NBC News]
Trump’s national security speech: Criticizer-in-chief but no coherent policy
NBC’s Benjy Sarlin wraps up Donald Trump’s national-security speech from yesterday. “[T]he national security framework he described was so contradictory and filled with so many obvious falsehoods that it's virtually impossible to tell what he would do as president… That's because Trump previously supported every single foreign policy decision he now decries. Despite claiming daily that he opposed the Iraq War from the start, Trump endorsed deposing Saddam Hussein in a 2002 interview and there's no record of him opposing the war until after it had begun. As for exiting the Iraq War, he said repeatedly in 2007 and 2008 that America should withdraw immediately and later recommended the same course for Afghanistan. Turning to Libya, Trump recorded a video in 2011 demanding the Obama administration remove Gadhafi from power on humanitarian grounds. He went on to lie about his support for the Libya intervention in a Republican debate only to admit to it when confronted with footage of his old statements in a CBS interview. Finally, Trump called Mubarak's departure ‘a good thing’ at the time before turning against the idea years later.”
Trump calls for “extreme vetting”
NBC’s Ali Vitali has more on Trump’s speech from yesterday. “Donald Trump on Monday promised ‘extreme vetting’ of immigrants, including ideological screening that will allow only those who ‘share our values and respect our people’ into the United States. Among those Trump would screen out are people who have ‘hostile attitudes’ toward the U.S., those who believe ‘Sharia law should supplant American law,’ and people who ‘don't believe in our Constitution or who support bigotry and hatred.’ Those Trump would allow in are ‘only those who we expect to flourish in our country.’ The Republican nominee did not disavow his prior proposal to temporarily ban all Muslims from the United States ‘until our country's representatives can figure out what is going on.’ The position, released in December 2015, is still on the nominee's website. He did, however, call for a temporary suspension of immigration ‘from some of the most dangerous and volatile regions of the world that have a history of exporting terrorism’ in order to succeed in the goal of extreme ideological vetting.”
Clinton camp announces its transition leadership team
Finally, the Clinton campaign this morning announced its transition leadership team -- with former Interior Secretary Ken Salazar serving as chair. The transition co-chairs are former National Security Adviser Tom Donilon, former Michigan Gov. Jennifer Granholm, the Center for American Progress’ Neera Tanden, and former ’08 Clinton campaign chief Maggie Williams.
On the trail
Hillary Clinton holds a voter-registration event in Philadelphia at 1:15 pm ET… Tim Kaine hits North Carolina… Donald Trump holds a rally in West Bend, WI at 8:30 pm ET… And Mike Pence is in New Mexico.
Countdown to Election Day: 84 days |||||
Jill Stein speaks to supporters of Sen. Bernie Sanders (I-Vt.) outside City Hall in Philadelphia on July 26. (Michael Robinson Chavez/The Washington Post)
The campaign of the long-shot Green Party candidate, Jill Stein, has spent $189,000 more on TV advertising for the general election than the Republican nominee, Donald Trump. Trump is also being outspent by the campaign of the Libertarian candidate, Gary Johnson, who has outspent him by $15,000, according to a report from NBC News.
Why so close? Because Johnson has spent almost nothing — just $15,000 in total. And Trump has spent exactly nothing.
Some PACs supporting Trump have reserved ad time, spending a little over $8 million. That's less than 10 percent of what the campaign of Hillary Clinton and PACs supporting her have spent.
Clinton, for example, has a national TV buy that's running during the Olympics. The campaign has spent $13.6 million on NBC and its affiliates to advertise during Olympics coverage, according to the AP. This is similar to what President Obama and former Massachusetts governor Mitt Romney did in 2012. "I'd love to know what they're waiting for," one Romney veteran said of the Trump campaign.
Especially since waiting only makes the problem worse. The longer the Trump campaign waits to reserve ad time, the less ad time exists to buy and the more ads cost. During a presidential general-election year, candidates are competing with each other for ad space, but also with people running for the Senate, the House and local office. If you want to buy ads on local news in Pennsylvania, for example, you're competing with a lot of other campaigns for those 30 seconds. To the glee of station owners, that helps drive up prices.
In 2012, Romney's campaign and its allies spent $16 million in New Hampshire alone. They spent $5 million in Virginia in just the week before the Republican convention that year. Trump is at $0.
So what's Trump spending his money on? We won't have detailed numbers for his July spending until later this month, but we know where he spent his money through June.
The biggest chunk was actually on advertising, including on television, in early primary states. That's primary spending, not general — and even then, it wasn't that much. By January, the pro-Jeb Bush group Right to Rise had already spent $50 million in advertising. Through December, Trump's campaign had spent a little over $1 million on a broad variety of advertising costs.
His biggest spending category in June was on fundraising consultants, since that was the first month he actually started officially raising money.
(What's "management"? Things like general consulting costs — and, in June, more than $200,000 on legal consulting.)
Trump has cash in his campaign accounts to spend. He's bought ads in the past. Why he's not doing so now — why the Green Party candidate is spending more than the Republican Party candidate, with about 90 days left until the election — is a question that defies a logical answer. ||||| Hillary Clinton leads Donald Trump by 9 points — 50 percent to 41 percent — in the latest NBC News|SurveyMonkey Weekly Election Tracking Poll.
The numbers were virtually unchanged since last week’s poll. Generally low favorability and negative attitudes among voters plague both candidates, however, as they make appeals to voters in key swing states in the weeks ahead.
A majority of voters continue to hold unfavorable impressions of both current nominees, though slightly fewer voters view Clinton unfavorably (59 percent) than Trump (64 percent). These results are according to the latest from the NBC News|SurveyMonkey Weekly Election Tracking Poll conducted online from August 8 through August 14, 2016 among registered voters.
Clinton continues her lead over the field in a four-way general election match-up with 43 percent against Trump (37 percent), Libertarian Gary Johnson (11 percent) and Green party candidate Jill Stein (4 percent).
Though Clinton’s lead over Trump remains significant following the convention, a closer look at voter attitudes reveals that many still harbor negative feelings about both candidates. When asked to select all the qualities that describe each candidate including options for honesty, values and temperament, majorities of voters chose “none of the above” to describe both candidates.
Clinton’s high point, however, is her perceived leadership strength and ability to serve the country’s interests well. Her campaign has consistently emphasized her leadership qualities and worked to draw a stark contrast between her and Trump on issues of national security. Her campaign’s efforts may be working: more than four in 10 voters said she has the personality and temperament to serve effectively as president. This includes 78 percent of Democrats and Democratic-leaners and 25 percent of Independents who don’t lean toward any party.
Trump scores much lower. Just 17 percent of all voters say that Trump has the personality and temperament to serve effectively as president. Even among Republican and Republican-leaners, only 36 percent said Trump has the personality to serve effectively.
These results come after 50 Republican national security advisers issued a letter stating that none of them would vote for the Republican candidate because he lacks the character and experience to be president.
Unsurprisingly, Trump does better than Clinton in the honesty category, but he still does not score particularly high marks. Just 16 percent of voters say that Trump is honest and trustworthy, and only 11 percent believe the same about Clinton. Even among Democrats and Democratic-leaners, just 23 percent say she is honest and trustworthy, whereas 35 percent of Republicans and Republican-leaners say Trump is honest and trustworthy.
These numbers indicate that the post-convention bounce for Clinton is more likely a durable shift in the race than a temporary blip; her lead has been nearly unchanged for the past three weeks. Still, voters don’t feel particularly positive about their general election choices.
Correction: A SurveyMonkey processing error led to the presentation of incorrect numbers for sub-group breakdowns on two multiple-select questions. The numbers have been corrected.
The NBC News|SurveyMonkey Weekly Election Tracking poll was conducted online August 8 through August 14, 2016 among a national sample of 15,179 adults who say they are registered to vote. Respondents for this non-probability survey were selected from the nearly three million people who take surveys on the SurveyMonkey platform each day. Results have an error estimate of plus or minus 1.2 percentage points. | Hillary Clinton has spent $61 million on TV ads in two months, a hefty sum that does not include $43 million spent by pro-Clinton groups. That's exactly $104 million more than Donald Trump. With less than three months until the election, the Republican candidate has yet to spend a dollar on TV ads, though pro-Trump groups have contributed $12.4 million. Even Green Party candidate Jill Stein has spent more, about $189,000, while Libertarian Gary Johnson has spent $15,000, reports NBC News. If Trump were already well ahead, keeping money in the bank might make sense. But an NBC weekly national poll finds he's nine points behind Clinton, 50% to 41%, with a slightly higher unfavorability rating of 64% compared to Clinton's 59%. Trump—who, according to only 19% of Republican and Republican-leaning poll respondents, has the personality and temperament to serve as president—is hardly short on cash. As the New York Times reports, Trump and the Republicans raised $82 million last month alone. So what’s he spending his money on? Philip Bump at the Washington Post says it's "a question that defies a logical answer." He adds Trump's lack of spending is particularly strange because "the longer the Trump campaign waits to reserve ad time, the less ad time exists to buy and the more ads cost" since candidates in House and Senate races are also in the mix. "Political scientists, you now have an amazing case study on your hands," as NBC puts it.
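The methodology note above invites a quick sanity check. Here is a minimal sketch in Python (variable names are illustrative only); it assumes, counter to SurveyMonkey's actual non-probability methodology, a simple random sample at 95 percent confidence, which shows why the reported plus-or-minus 1.2 points exceeds the naive textbook margin for 15,179 respondents:

```python
import math

# Sanity-check the reported +/- 1.2-point error estimate against the classic
# simple-random-sample (SRS) margin of error. SurveyMonkey's panel is a
# non-probability sample, so its published figure is a modeled estimate that
# runs larger than the naive SRS margin computed here.
n = 15_179   # registered voters in the August 8-14 tracking poll
p = 0.5      # worst-case proportion, which maximizes the margin
z = 1.96     # multiplier for a 95 percent confidence level

srs_margin = z * math.sqrt(p * (1 - p) / n)
print(f"Naive SRS margin of error: +/- {100 * srs_margin:.1f} points")  # ~0.8

# The gap between the naive ~0.8 and the reported 1.2 implies a design effect
# of roughly (1.2 / 0.8) squared, about 2.3, which would be consistent with
# the weighting adjustments applied to a non-probability online panel.
deff = (1.2 / (100 * srs_margin)) ** 2
print(f"Implied design effect: {deff:.1f}")
```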
By Alana Goodman
Newly discovered audio recordings of Hillary Clinton from the early 1980s include the former first lady’s frank and detailed assessment of the most significant criminal case of her legal career: defending a man accused of raping a 12-year-old girl.
In 1975, the same year she married Bill, Hillary Clinton agreed to serve as the court-appointed attorney for Thomas Alfred Taylor, a 41-year-old accused of raping the child after luring her into a car.
The recordings, which date from 1983-1987 and have never before been reported, include Clinton's suggestion that she knew Taylor was guilty at the time. She says she used a legal technicality to plead her client, who faced 30 years to life in prison, down to a lesser charge.
The full story of the Taylor defense calls into question Clinton’s narrative of her early years as a devoted women’s and children’s advocate in Arkansas—a narrative the 2016 presidential frontrunner continues to promote on her current book tour.
Her comments on the rape trial are part of more than five hours of unpublished interviews conducted by Arkansas reporter Roy Reed with then-Arkansas Gov. Bill Clinton and his wife in the mid-1980s.
The interviews, archived at the University of Arkansas in Fayetteville, were intended for an Esquire magazine profile that was never published, and offer a rare personal glimpse of the couple during a pivotal moment in their political careers.
But Hillary Clinton’s most revealing comments—and those most likely to inflame critics—concern the decades-old rape case.
‘The Prosecutor Had Evidence’
Twenty-seven-year-old Hillary Rodham had just moved to Fayetteville, and was running the University of Arkansas’ newly-formed legal aid clinic, when she received a call from prosecutor Mahlon Gibson.
"The prosecutor called me a few years ago, he said he had a guy who had been accused of rape, and the guy wanted a woman lawyer," said Clinton in the interview. "Would I do it as a favor for him?"
The case was not easy. In the early hours of May 10, 1975, the Springdale, Arkansas police department received a call from a nearby hospital. It was treating a 12-year-old girl who said she had been raped.
The suspect was identified as Thomas Alfred Taylor, a 41-year-old factory worker and friend of the girl’s family.
And though the former first lady mentioned the ethical difficulties of the case in Living History, her written account some three decades later is short on details and has a far different tone than the tapes.
"It was a fascinating case, it was a very interesting case," Clinton says in the recording. "This guy was accused of raping a 12-year-old. Course he claimed that he didn’t, and all this stuff."
Describing the events almost a decade after they had occurred, Clinton struck a casual and complacent attitude toward her client and the trial for the rape of a minor.
"I had him take a polygraph, which he passed – which forever destroyed my faith in polygraphs," she added with a laugh.
Clinton can also be heard laughing at several points when discussing the crime lab’s accidental destruction of DNA evidence that tied Taylor to the crime.
From a legal ethics perspective, once she agreed to take the case, Clinton was required to defend her client to the fullest even if she did believe he was guilty.
"We’re hired guns," Ronald D. Rotunda, a professor of legal ethics at Chapman University, told the Washington Free Beacon. "We don’t have to believe the client is innocent…our job is to represent the client in the best way we can within the bounds of the law."
However, Rotunda said, for a lawyer to disclose the results of a client’s polygraph and guilt is a potential violation of attorney-client privilege.
"You can’t do that," he said. "Unless the client says: ‘You’re free to tell people that you really think I’m a scumbag, and the only reason I got a lighter sentence is because you’re a really clever lawyer.’"
Clinton was suspended from the Arkansas bar in March of 2002 for failing to keep up with continuing legal education requirements, according to Arkansas judicial records.
Public records provide few details of what happened on the night in question. The Washington County Sheriff’s Office, which investigated the case after the Springdale Police Department handled the initial arrest, said it was unable to provide an incident report since many records from that time were not maintained and others were destroyed in a flood.
A lengthy yet largely overlooked 2008 Newsday story focused on Clinton's legal strategy of attacking the credibility of the 12-year-old victim.
The girl had joined Taylor and two male acquaintances, including one 15-year-old boy she had a crush on, on a late-night trip to the bowling alley, according to Newsday.
Taylor drove the group around in his truck, pouring the girl whisky and coke on the way.
The group later drove to a "weedy ravine" near the highway where Taylor raped the 12-year-old.
Around 4 a.m., the girl and her mother went to the hospital, where she was given medical tests and reported that she had been assaulted.
Taylor was arrested on May 13, 1975. The court initially appointed public defender John Barry Baker to serve as his attorney. But Taylor insisted he wanted a female lawyer.
The lawyer he would end up with: Hillary Rodham.
According to court documents, the prosecution’s case was based on testimony from the 12-year-old girl and the two male witnesses as well as on a "pair of men’s undershorts taken from the defendant herein."
In a July 28, 1975, court affidavit, Clinton wrote that she had been informed the young girl was "emotionally unstable" and had a "tendency to seek out older men and engage in fantasizing."
"I have also been told by an expert in child psychology that children in early adolescence tend to exaggerate or romanticize sexual experiences and that adolescents in disorganized families, such as the complainant’s, are even more prone to exaggerate behavior," Clinton said.
Clinton said the child had "in the past made false accusations about persons, claiming they had attacked her body" and that the girl "exhibits an unusual stubbornness and temper when she does not get her way."
But the interview reveals that an error by the prosecution would render unnecessary these attacks on the credibility of a 12-year-old rape victim.
‘We had a lot of fun with Maupin’
"You know, what was sad about it," Clinton told Reed, "was that the prosecutor had evidence, among which was [Taylor’s] underwear, which was bloody."
Clinton wrote in Living History that she was able to win a plea deal for her client after she obtained forensic testimony that "cast doubt on the evidentiary value of semen and blood samples collected by the sheriff's office."
She did that by seizing on a missing link in the chain of evidence. According to Clinton’s interview, the prosecution lost track of its own forensic evidence after the testing was complete.
"The crime lab took the pair of underpants, neatly cut out the part that they were gonna test, tested it, came back with the result of what kind of blood it was what was mixed in with it – then sent the pants back with the hole in it to evidence," said Clinton (LISTEN HERE). "Of course the crime lab had thrown away the piece they had cut out."
Clinton said she got permission from the court to take the underwear to a renowned forensics expert in New York City to see if he could confirm that the evidence had been invalidated.
"The story through the grape vine was that if you could get [this investigator] interested in the case then you had the foremost expert in the world willing to testify, so maybe it came out the way you wanted it to come out," she said.
She said the investigator examined the cut-up underwear and told her there was not enough blood left on it to test.
When Clinton returned to Arkansas, she said she gave the prosecutor a clipping of the New York forensic investigator’s "Who’s Who."
"I handed it to Gibson, and I said, ‘Well this guy’s ready to come up from New York to prevent this miscarriage of justice,’" said Clinton, breaking into laughter.
"So we were gonna plea bargain," she continued.
When she went before Judge Cummings to present the plea, he asked her to leave the room while he interrogated her client, she said.
"I said, ‘Judge I can’t leave the room, I’m his lawyer,’" said Clinton, laughing. "He said, ‘I know but I don’t want to talk about this in front of you.’"
"So that was Maupin [Cummings], we had a lot of fun with Maupin," Clinton added.
Reed asked what happened to the rapist.
"Oh, he plea bargained. Got him off with time served in the county jail, he’d been in the county jail for about two months," said Clinton.
When asked why Taylor wanted a female lawyer, Clinton responded, "Who knows. Probably saw a TV show. He just wanted one."
Taylor, who pleaded to unlawful fondling of a child, was sentenced to one year in prison, with two months reduced for time served. He died in 1992.
‘Is This About That Rape of Me?’
Neither Reed nor a spokesman for Hillary Clinton returned a request for comment.
The Taylor case was a minor episode in the lengthy career of Clinton, who writes in Living History, before moving on to other topics, that the trial inspired her co-founding of the first rape crisis hotline in Fayetteville.
Clinton and her supporters highlight her decades of advocacy on behalf of women and children, from her legal work at the Children's Defense Fund to her women's rights initiatives at the Bill, Hillary, and Chelsea Clinton Foundation.
And yet there are parallels between the tactics Clinton employed to defend Taylor and the tactics she, her husband, and their allies have used to defend themselves against accusations of wrongdoing over the course of their three decades in public life.
In the interview with Reed, Clinton does not mention the hotline, nor does she discuss the plight of the 12-year-old girl who had been attacked.
Now 52, the victim resides in the same town where she was born.
Divorced and living alone, she blames her troubled life on the attack. When Newsday interviewed her in 2008, she was in prison for check forgery, committed to pay for an earlier addiction to methamphetamines. The story says she harbored no ill will toward Clinton.
According to her, that is not the case.
"Is this about that rape of me?" she asked when a Free Beacon reporter knocked on her door and requested an interview.
Declining an interview, she nevertheless expressed deep and abiding hostility toward the Newsday reporter who spoke to her in 2008—and toward her assailant’s defender, Hillary Rodham Clinton.
||||| In 1975, Hillary Rodham, 27, was a court-appointed attorney for an indigent defendant accused of raping a 6th grader in Arkansas. She wrote about it in her 2003 book, "Living History," focusing on technical evidentiary aspects of the case she presented to the court. Not surprisingly, she omitted the damning parts.
The victim was 12 years old. An older man, Thomas Alfred Taylor, was accused of raping her in his car. The man requested a female public defender, which Hillary at first resisted. Once she took the case, however, her defense was aggressive.
In "Living History" and in the Newsday piece, we learn of the issues raised about blood and semen samples, standard criminal defense tactics. But Hillary left out a key piece of the defense.
Newsday explains the omission:
However, that account leaves out a significant aspect of her defense strategy - attempting to impugn the credibility of the victim, according to a Newsday examination of court and investigative files and interviews with witnesses, law enforcement officials and the victim. Rodham, records show, questioned the sixth grader's honesty and claimed she had made false accusations in the past. She implied that the girl often fantasized and sought out "older men" like Taylor, according to a July 1975 affidavit signed "Hillary D. Rodham" in compact cursive.
A man Hillary owes $688,000 to this year says this is just "the best defense possible."
Here's one twist - the victim, now 46, had not realized Hillary Clinton was the defense attorney when Newsday caught up with her (her name was not Clinton then). The victim is not pushing this story - if anything, she is forgiving, even if she has only just become aware of the cynical Clinton attacks on her credibility.
The victim, now 46, told Newsday that she was raped by Taylor, denied that she wanted any relationship with him and blamed him for contributing to three decades of severe depression and other personal problems. "It's not true, I never sought out older men - I was raped," the woman said in an interview in the fall. Newsday is withholding her name as the victim of a sex crime.
There are some interesting nuggets in the well-sourced piece, such as:
"Taylor was alleged to have raped this girl in a car right near a very busy highway - I told her [Rodham] it seems sort of improbable and she immediately agreed," said Baker, who remembered Rodham as "smart, capable and very focused."
And how about this:
But the record shows that Rodham was also intent on questioning the girl's credibility. That line of defense crystallized in a July 28, 1975, affidavit requesting the girl undergo a psychiatric examination at the university's clinic. "I have been informed that the complainant is emotionally unstable with a tendency to seek out older men and to engage in fantasizing," wrote Rodham, without referring to the source of that allegation. "I have also been informed that she has in the past made false accusations about persons, claiming they had attacked her body."
Dale Gibson, the investigator, doesn't recall seeing evidence that the girl had fabricated previous attacks. The assistant prosecutor who handled much of the case for Mahlon Gibson died several years ago. The prosecutor's files on the case, which would have included such details, were destroyed more than a decade ago when a flood swept through the county archives, Mahlon Gibson said. Those files also would have included the forensics evidence referenced in "Living History."
The victim was visibly stunned when handed the affidavit by a reporter this fall. "It kind of shocks me - it's not true," she said. "I never said anybody attacked my body before, never in my life."
Hillary Clinton never misses an opportunity to remind us of how much of a warrior she is on behalf of vulnerable children. Children vulnerable to the system. Vulnerable to the callousness of adults.
A 6th grader, Hillary? She "fantasized?" She "sought it out from older men?"
Hillary Clinton, saying shame on others.
UPDATE - To be clearer about where the gray area is here: officers of the court have a responsibility - an ethical responsibility - to adhere to principles of scrupulous honesty. When Hillary signed that affidavit, she was swearing an oath that she had knowledge and evidence that the 6th grader had a history of making false charges. That's what the affidavit says. But nobody, including the victim who has no axe to grind, believes this has any truth. That's the core of the Newsday story.
That is the difference between zealous defense and breaching ethical responsibility.
And every lawyer here knows it. | Hillary Clinton is in the spotlight right now, thanks to her new book and will-she-or-won't-she 2016 speculation, and it sure looks like someone's using the opportunity to roll out some opposition research. The right-leaning Washington Free Beacon yesterday released newly discovered recordings from the 1980s in which Clinton boasts about a 1975 case in which she served as a public defender for a man accused of raping a 12-year-old, and "got him off with time served in the county jail." The recordings are of interviews intended for a never-published Esquire profile. Clinton makes clear that she believes the man did the deed. "I had him take a polygraph, which he passed," she recalls in the recording, "which forever destroyed my faith in polygraphs." But she managed to destroy the prosecution's case because the crime lab made the odd decision to slice out a key part of the defendant's bloody underwear for testing and then discard it. This left the prosecution without evidence, so Clinton got them to agree to let the man plead to a lesser crime. The case itself has been reported before—Newsday revealed in 2008 that Clinton had attacked the victim's credibility, saying the girl had a "tendency to seek out older men and engage in fantasizing." |
Three types of Internet pharmacies selling prescription drugs directly to consumers have emerged in recent years. First, some Internet pharmacies operate much like traditional drugstores or mail-order pharmacies: they dispense drugs only after receiving prescriptions from consumers or their physicians. Other Internet pharmacies provide customers medication without a physical examination by a physician. In place of the traditional face-to-face physician/patient consultation, the consumer fills out a medical questionnaire that is reportedly evaluated by a physician affiliated with the pharmacy. If the physician approves the questionnaire, he or she authorizes the online pharmacy to send the medication to the patient. This practice tends to be largely limited to “lifestyle” prescription drugs, such as those that alleviate allergies, promote hair growth, treat impotence, or control weight. Finally, some Internet pharmacies dispense medication without a prescription.
Regardless of their methods, all Web sites selling prescription drugs are governed by the same complex network of laws and regulations at both the state and federal levels that govern traditional drugstores and mail-order drug services. In the United States, prescription drugs must be prescribed and dispensed by licensed health care professionals, who can help ensure proper dosing and administration and provide important information on the drug’s use to customers. To legally dispense a prescription drug, a pharmacist licensed with the state and working in a pharmacy licensed by the state must be presented a valid prescription from a licensed health care professional. Every state requires resident pharmacists and pharmacies to be licensed. The regulation of the practice of pharmacy is rooted in state pharmacy practice acts and regulations enforced by the state boards of pharmacy, which are responsible for licensing pharmacists and pharmacies. The state boards of pharmacy also are responsible for routinely inspecting pharmacies, ensuring that pharmacists and pharmacies comply with applicable state and federal laws, and investigating and disciplining those that fail to comply. In addition, 40 states require out-of-state pharmacies—called nonresident pharmacies—that dispense prescription drugs to state residents to be licensed or registered.
Some state pharmacy boards regulate Internet pharmacies according to the same standards that apply to nonresident pharmacies. State pharmacy boards’ standards may require that nonresident pharmacies do the following: maintain separate records of prescription drugs dispensed to customers in the state so that these records are readily retrievable from the records of prescription drugs dispensed to other customers; provide a toll-free telephone number for communication between customers in the state and a pharmacist at the nonresident pharmacy and affix this telephone number to each prescription drug label; provide the location, names, and titles of all principal corporate officers; provide a list of all pharmacists who are dispensing prescription drugs to customers in the state; designate a pharmacist who is responsible for all prescription drugs dispensed to customers in the state; provide a copy of the most recent inspection report issued by the home state; and provide a copy of the most recent license issued by the home state.
States also are responsible for regulating the practice of medicine. All states require that physicians practicing in the state be licensed to do so.
State medical practice laws generally outline standards for the practice of medicine and delegate the responsibility of regulating physicians to state medical boards. State medical boards license physicians and grant them prescribing privileges. In addition, state medical boards investigate complaints and impose sanctions for violations of the state medical practice laws. While states have jurisdiction within their borders, the sale of prescription drugs on the Internet can occur across state lines. The sale of prescription drugs between states or as a result of importation falls under the jurisdiction of the federal government. FDA is responsible for ensuring the safety, effectiveness, and quality of domestic and imported pharmaceutical products under the FDCA. Specifically, FDA establishes standards for the safety, effectiveness, and manufacture of prescription drugs that must be met before they are approved for the U.S. market. FDA can take action against (1) the importation, sale, or distribution of an adulterated, misbranded, or unapproved drug; (2) the illegal promotion of a drug; (3) the sale or dispensing of a prescription drug without a valid prescription; and (4) the sale and dispensing of counterfeit drugs. If judicial intervention is required, Justice will become involved to enforce the FDCA. Justice also enforces other consumer protection statutes for which the primary regulatory authorities are administrative agencies such as FDA and FTC. FTC has responsibility for preventing deceptive or unfair acts or practices in commerce and has authority to bring an enforcement action when an Internet pharmacy makes false or misleading claims about its products or services. Finally, Justice’s DEA regulates controlled substances, which includes issuing all permits for the importation of pharmaceutical controlled substances and registering all legitimate importers and exporters, while Customs and the Postal Service enforce statutes and regulations governing the importation and domestic mailing of drugs.
The very nature of the Internet makes identifying all pharmacies operating on it difficult. As a result, the precise number of Internet pharmacies selling prescription drugs directly to consumers is unknown. We identified 190 Internet pharmacies selling prescription drugs directly to consumers, 79 of which dispense prescription drugs without a prescription or on the basis of a consumer’s having completed an online questionnaire (see table 1). Also, 185 of the identified Internet pharmacies did not disclose the states where they were licensed to dispense prescription drugs, and 37 did not provide an address or telephone number permitting the consumer to contact them if problems arose. Obtaining prescription drugs from unlicensed pharmacies without adequate physician supervision, including an examination, places consumers at risk of harmful side effects, possibly even death, from drugs that may be inappropriate for them.
Estimates of the number of Internet pharmacies range from 200 to 400. However, it is difficult to determine the precise number of Internet pharmacies selling prescription drugs directly to consumers because Internet sites can be easily created and removed and some Internet pharmacies operate for a period of time at one Internet address and then close and reappear under another name. In addition, many Internet pharmacies have multiple portal sites (independent Web pages that connect to a single pharmacy).
We found 95 sites that at first appeared to be discrete Internet pharmacies but were actually portal sites. As consumers click on the icons and links provided, they are brought to an Internet site that is completely different from the one they originally visited. Consumers may be unaware of these site changes unless they are paying close attention to the Internet site address bar on their browser. Some Internet pharmacies had as many as 18 portal sites.
About 58 percent, or 111, of the Internet pharmacies we identified told consumers that they had to provide a prescription from their physician to purchase prescription drugs. Prescriptions may be submitted to an Internet pharmacy in various ways, including by mail or fax and through contact between the consumer’s physician or current pharmacy and the Internet pharmacy. The Internet pharmacy then verifies that a licensed physician actually has issued the prescription to the patient before it dispenses any drugs. Internet pharmacies that require a prescription from a physician generally operate similarly to traditional drugstore or mail-order pharmacies. In some instances, the Internet site is owned by or affiliated with a traditional drugstore.
We identified 54 Internet pharmacies that issued prescriptions and dispensed medications on the basis of an online questionnaire. Generally, these short, easy-to-complete questionnaires asked about the consumer’s health profile, medical history, current medication use, and diagnosis. In some instances, pharmacies provided the answers necessary to obtain the prescription by placing checks next to the “correct” answers. Information on many of the Internet sites indicated that a physician reviews the questionnaire and then issues a prescription. The cost of the physician’s review ranged from $35 to $85, with most sites charging $75. Moreover, certain illegal and unethical prescribing and dispensing practices are occurring through some Internet pharmacies that focus solely on prescribing and dispensing certain “lifestyle” drugs, such as diet medications and drugs to treat impotence.
We also identified 25 Internet pharmacies that dispensed prescription drugs without prescriptions. In the United States, it is illegal to sell or dispense a prescription drug without a prescription. Nevertheless, to obtain a drug from these Internet pharmacies, the consumer was asked only to complete an order form indicating the type and quantity of the drug desired and to provide credit card billing information. Twenty-one of these 25 Internet pharmacies were located outside the United States; the location of the remaining 4 could not be determined. Generally, it is illegal to import prescription drugs that are not approved by FDA and manufactured in an FDA-approved facility. Obtaining prescription drugs from foreign-based Internet pharmacies places consumers at risk from counterfeit or unapproved drugs, or drugs that were manufactured and stored under poor conditions.
The Internet pharmacies that we identified varied significantly in the information that they disclosed on their Web sites. For instance, 153 of the 190 Internet pharmacies we reviewed provided a mailing address or telephone number (see table 1). The lack of adequate identifying information prevents consumers from contacting Internet pharmacies if problems should arise. More importantly, most Internet pharmacies did not disclose the states where they were licensed to dispense prescription drugs.
We contacted all U.S.-based Internet pharmacies to obtain this information. We then asked pharmacy boards in the 12 states with the largest numbers of licensed Internet pharmacies (70 in all) to verify their licensure status. Sixty-four pharmacies required a prescription to dispense drugs; of these, 22, or about 34 percent, were not licensed in one or more of the states in which they had told us they were licensed and in which they dispensed drugs. Internet pharmacies that issued prescriptions on the basis of online questionnaires disclosed even less information on their Web sites. Only 1 of the 54 Internet pharmacies disclosed the name of the physician responsible for reviewing questionnaires and issuing prescriptions. We attempted to contact 45 of these Internet pharmacies to obtain their licensure status; we did not attempt to contact 9 because they were located overseas. We were unable to reach 13 because they did not provide, and we could not obtain, a mailing address or telephone number. In addition, 18 would not return repeated telephone calls, 3 were closed, and 2 refused to tell us where they were licensed. As a result, we were able to obtain licensure information for only nine Internet pharmacies affiliated with physicians that prescribe online. We found that six of the nine prescribing pharmacies were not licensed in one or more of the states in which they had told us they were licensed and in which they dispensed prescription drugs. The ability to buy prescription drugs from Internet pharmacies not licensed in the state where the customer is located and without appropriate physician supervision, including an examination, means that important safeguards related to the doctor/patient relationship and intrinsic to conventional prescribing are bypassed.
We also found that only 44 Internet pharmacies (23 percent) posted a privacy statement on their Web sites. As recent studies have indicated, consumers are concerned about safeguarding their personal health information online and about potential transfers to third parties of the personal information they have given to online businesses. The majority of these pharmacies stated that the information provided by the patient would be kept confidential and would not be sold or traded to third parties. Our review of state privacy laws revealed that at least 21 states have laws protecting the privacy of pharmacy information. While the federal Health Insurance Portability and Accountability Act of 1996 called for nationwide protections for the privacy and security of electronic health information, including pharmacy data, regulations have not yet been finalized.
State pharmacy and medical boards have policies created to regulate brick-and-mortar pharmacies and traditional doctor/patient relationships. However, the traditional regulatory and enforcement approaches used by these boards may not be adequate to protect consumers from the potentially dangerous practices of some Internet pharmacies. Nevertheless, 20 states have taken disciplinary action against Internet pharmacies and physicians that have engaged in illegal or unethical practices. Many of these states have also introduced legislation to address illegal or unethical sales practices of Internet pharmacies and physicians prescribing on the Internet. Appendix II contains details on state actions to regulate pharmacies and physicians practicing on the Internet.
The advent of Internet pharmacies poses new challenges for the traditional state regulatory agencies that oversee the practices of pharmacies. While 12 pharmacy boards reported that they have taken action against Internet pharmacies for illegally dispensing prescription drugs, many said they have encountered difficulties in identifying, investigating, and taking disciplinary action against illegally operating Internet pharmacies that are located outside state borders but shipping to the state. State pharmacy board actions consisted of referrals to federal agencies, state Attorneys General, or state medical boards. Almost half of the state pharmacy boards reported that they had experienced problems with or received complaints about Internet pharmacies. Specifically, 24 state pharmacy boards told us that they had experienced problems with Internet pharmacies not complying with their state pharmacy laws. The problems most commonly cited were distributing prescription drugs without a valid license or prescription, or without establishing a valid physician/patient relationship. Moreover, 20 state boards (40 percent) reported they had received at least 78 complaints, ranging from 1 to 15 per state, on Internet pharmacy practices. Many of these complaints were about Internet pharmacies that were dispensing medications without a valid prescription or had dispensed the wrong medication.
State pharmacy boards also reported that they have encountered difficulties in identifying Internet pharmacies that are located outside their borders. About 74 percent of state pharmacy boards reported having serious problems determining the physical location of an Internet pharmacy affiliated with an Internet Web site. Sixteen percent of state pharmacy boards reported some difficulty, and 10 percent reported no difficulty. Without this information, it is difficult to identify the companies and people responsible for selling prescription drugs.
More importantly, state pharmacy boards have limited ability and authority to investigate and act against Internet pharmacies located outside their state but doing business in their state without a valid license. In our survey, many state pharmacy boards cited limited resources, and jurisdictional and technological limitations, as obstacles to enforcing their laws with regard to pharmacies not located in their states. Because of jurisdictional limits, states have found that their traditional investigative tools—interviews, physical or electronic surveillance, and serving subpoenas to produce documents and testimony—are not necessarily adequate to compel disclosure of information from a pharmacy or pharmacist located out of state. Similarly, the traditional enforcement mechanisms available to state pharmacy boards—disciplinary actions or sanctions against licensees—are not necessarily adequate to control a pharmacy or pharmacist located out of state. In the absence of the ability to investigate and take disciplinary action against a nonresident pharmacy, state pharmacy boards have been limited to referring unlicensed or unregistered Internet pharmacies to their counterpart boards in the states where the pharmacies are licensed.
State medical boards have concerns about the growing number of Internet pharmacies that issue prescriptions on the basis of a simple online questionnaire rather than a face-to-face examination.
The AMA is also concerned that prescriptions are being provided to patients without the benefit of a physical examination, which would allow evaluation of any potential underlying cause of a patient’s dysfunction or disease, as well as an assessment of the most appropriate treatment. Moreover, medical boards are receiving complaints about physicians prescribing on the Internet. Twenty of the 45 medical boards responding to our survey reported that they had received complaints about physicians prescribing on the Internet during the last year. The most frequent complaint was that the physician did not perform an examination of the patient. As a result, medical boards in eight states have taken action against physicians for Internet prescribing violations. Disciplinary actions and sanctions have ranged from monetary fines and letters of reprimand to probation and license suspension.
Thirty-nine of the 45 medical boards responding to our survey concluded that a physician who issued a prescription on the basis of a review of an online questionnaire did not satisfy the standard of good medical practice required under their states’ laws. Moreover, ten states have introduced or enacted legislation regarding the sale of prescription drugs on the Internet, including five states that have introduced legislation to prohibit physicians and other practitioners from prescribing prescription drugs on the Internet without conducting an examination or having a prior physician/patient relationship. Twelve states have adopted rules or statements that clarify their positions on the use of online questionnaires for issuing prescriptions. Generally, these statements either prohibit online prescribing or state that prescribing solely on the basis of answers to a questionnaire is inappropriate and unprofessional (see app. II).
As in the case of state pharmacy boards, state medical boards have limited ability and authority to investigate and act against physicians located outside of their state but prescribing on the Internet to state residents. Further, they too have had difficulty identifying these physicians. About 55 percent of state medical boards that responded to our survey told us they had difficulty determining both the identity and location of physicians prescribing drugs on the Internet, and 36 percent had difficulty determining whether the physician was licensed in another state.
Since February 1999, six state Attorneys General have brought legal action against Internet pharmacies and physicians for providing prescription drugs to consumers in their states without a state license and for issuing prescriptions solely on the basis of information provided in online questionnaires. Most of the Internet pharmacies that were sued voluntarily stopped shipping prescription drugs to consumers in those states. As a result, at least 18 Internet pharmacies have stopped selling prescription drugs to residents in Illinois, Kansas, Michigan, Missouri, New Jersey, and Pennsylvania. Approximately 15 additional states are investigating Internet pharmacies for possible legal action. Investigating and prosecuting online offenders raise new challenges for law enforcement. For instance, Attorneys General also have complained that the lack of identifying information on pharmacy Web sites makes it difficult to identify the companies and people responsible for selling prescription drugs.
Moreover, even if a state successfully sues an Internet pharmacy for engaging in illegal or unethical practices, such as prescribing on the basis of an online questionnaire or failing to adequately disclose identifying information, the Internet pharmacy is not prohibited from operating in other states. To stop such practices, each affected state must individually bring action against the Internet pharmacy. As a result, to prevent one Internet pharmacy from doing business nationwide, the Attorney General in every state would have to file a lawsuit in his or her respective state court.
Five federal agencies have authority to regulate and enforce U.S. laws that could be applied to the sale of prescription drugs on the Internet. Since Internet pharmacies first began operation in early 1999, FDA, Justice, DEA, Customs, and FTC have increased their efforts to respond to public health concerns about the illegal sale of prescription drugs on the Internet. FDA has taken enforcement actions against Internet pharmacies selling prescription drugs; Justice has prosecuted Internet pharmacies and physicians for dispensing medications without a valid prescription; DEA has investigated Internet pharmacies for illegal distribution of controlled substances; Customs has increased its seizure of packages that contain drugs entering the country; and FTC has negotiated settlements with Internet pharmacies for making deceptive health claims. While these agencies’ contributions are important, their efforts sometimes do not support each other. For instance, to conserve its resources FDA routinely releases packages of prescription drugs that Customs has detained because they may have been obtained illegally from foreign Internet pharmacies. Such uncoordinated program efforts can waste scarce resources, confuse and frustrate enforcement program administrators and customers, and limit the overall effectiveness of federal enforcement efforts.
FDA has recently increased its monitoring and investigation of Internet pharmacies to determine if they are involved in illegal sales of prescription drugs. FDA has primary responsibility for regulating the sale, importation, and distribution of prescription drugs, including those sold on the Internet. In July 1999, FDA testified before the Congress that it did not generally regulate the practice of pharmacy or the practice of medicine. Accordingly, FDA activities regarding the sale of drugs over the Internet had until then focused on unapproved drugs. As of April 2000, however, FDA had 54 ongoing investigations of Internet pharmacies that may be illegally selling prescription drugs. FDA has also referred to Justice for possible criminal prosecution approximately 33 cases involving over 100 Internet pharmacies that may be illegally selling prescription drugs. FDA’s criminal investigations of online pharmacies have, to date, resulted in the indictment and/or arrest of eight individuals, two of whom have been convicted. In addition, FDA is seeking $10 million in fiscal year 2001 to fund 77 staff positions that would be dedicated to investigating and taking enforcement actions against Internet pharmacies.
Justice has increased its prosecution of Internet pharmacies illegally selling prescription drugs. Under the FDCA, a prescription drug is considered misbranded if it is not dispensed pursuant to a valid prescription under the professional supervision of a licensed practitioner.
In July 1999, Justice testified before the Congress that it was examining its legal basis for prosecuting noncompliant Internet pharmacies and violative online prescribing practices. Since that time, according to FDA officials, 22 of the 33 criminal investigations FDA referred to Justice have been actively pursued. Two of the 33 cases were declined by Justice and are being prosecuted as criminal cases by local district attorneys, and 9 were referred to the state of Florida. In addition, Justice filed two cases involving the illegal sale of prescription drugs over the Internet in 1999 and is investigating approximately 20 more cases. Since May 2000, Justice has brought charges against, or obtained convictions of, individuals in three cases involving the sale of prescription drugs by Internet pharmacies without a prescription or the distribution of misbranded drugs. While DEA has no efforts formally dedicated to Internet issues, it has initiated 20 investigations of the use of the Internet for the illegal sale of controlled substances during the last 15 months. DEA has been particularly concerned about Internet pharmacies that are affiliated with physicians who prescribe controlled substances without examining patients. For instance, in July 1999 a DEA investigation led to the indictment of a Maryland doctor on 34 counts of providing controlled substances to patients worldwide in response to requests made over the Internet. Because Maryland requires that doctors examine patients before prescribing medications, the doctor’s prescriptions were not considered to be legitimately provided. The physician’s conduct on the Internet also violated an essential requirement of federal law, which is that controlled substances must be dispensed only with a valid prescription. The U.S. Customs Service, which is responsible for inspecting packages shipped to the United States from foreign countries, has increased its seizures of prescription drugs from overseas. Customs officials report that the number of drug shipments seized increased about 450 percent between 1998 and 1999—from 2,139 to 9,725. Most of these seizures involved controlled substances. Because of the large volume, Customs is able to examine only a fraction of the packages entering the United States daily and cannot determine how many of its drug seizures involve prescription drugs purchased from Internet pharmacies. Nevertheless, Customs officials believe that the Internet is playing a role in the increase in illegal drug importation. According to Customs officials, fiscal year 2000 seizures are on pace to equal or surpass 1999 levels. FTC reports that it is monitoring Internet pharmacies for compliance with the Federal Trade Commission Act, conducting investigations, and making referrals to state and federal authorities. FTC is responsible for combating unfair or deceptive trade practices, including those on the Internet, such as misrepresentation of online pharmacy privacy practices. In 1999, FTC referred two Internet pharmacies to state regulatory boards. This year, FTC charged individuals and Internet pharmacies with making false promotional claims and other violations. Recently, the operators of these Internet pharmacies agreed to settle out of court. According to the settlement agreement, the defendants are barred from misrepresenting medical and pharmaceutical arrangements and any material fact about the scope and nature of the defendants’ goods, services, or facilities. The sale of prescription drugs to U.S. 
residents by foreign Internet pharmacies poses the most difficult challenge for U.S. law enforcement authorities because the seller is not located within U.S. boundaries. Many prescription drugs available from foreign Internet pharmacies are either products for which there is no U.S.-approved counterpart or foreign versions of FDA-approved drugs. In either case, these drugs are not approved for use in the United States, and therefore it is illegal for a foreign Internet pharmacy to ship these products to the United States. In addition, federal law prohibits the sale of prescription drugs to U.S. citizens without a valid prescription. Although FDA officials said that the agency has jurisdiction over a resident in a foreign country who sells to a U.S. resident in violation of the FDCA, from a practical standpoint, FDA is hard-pressed to enforce U.S. laws against foreign sellers. As a result, FDA enforcement efforts against foreign Internet pharmacies have been limited mostly to requesting the foreign government to take action against the seller of the product. FDA has also posted information on its Web site to help educate consumers about safely purchasing drugs from Internet pharmacies.
FDA officials have sent 23 letters to operators of foreign Internet pharmacies warning them that they may be engaged in illegal activities, such as offering to sell prescription drugs to U.S. citizens without a valid, or in some cases without any, prescription. Copies of each letter were sent to regulatory officials in the country in which the pharmacy was based. In response, two Internet pharmacies said they will cease their sales to U.S. residents, and a third said it has ceased its sales regarding one drug but is still evaluating how it will handle other products. FDA has since requested that Customs detain packages from these Internet pharmacies.
Customs has been successful in working with one foreign government to shut down its Internet pharmacies that were illegally selling prescription drugs to U.S. consumers. In January 2000, Customs assisted Thai authorities in the execution of search and arrest warrants against seven Internet pharmacies, resulting in the arrest of 22 Thai citizens for violating Thailand’s drug and export laws and 6 people in the United States accused of buying drugs from the Thailand Internet pharmacy. U.S. and Thai officials seized more than 2.5 million doses of prescription drugs and 245 parcels ready for shipment to the United States.
According to FDA, it is illegal for a foreign-based Internet pharmacy to sell prescription drugs to consumers in the United States if those drugs are unapproved or are not dispensed pursuant to a valid prescription. But FDA permits patients and their physicians to obtain small quantities of drugs sold abroad, but not approved in the United States, for the treatment of a serious condition for which effective treatment may not be available domestically. FDA’s approach has been applied to products that do not represent an unreasonable risk and for which there is no known commercialization or promotion to U.S. residents. Further, a patient seeking to import such a product must provide to FDA the name of the licensed physician in the United States responsible for his or her treatment with the unapproved drug or provide evidence that the product is for continuation of a treatment begun in a foreign country. FDA has acknowledged that its guidance concerning importing prescription drugs through the mail has been inconsistently applied.
At many Customs mail centers, FDA personnel rely on Customs officials to detain suspicious drug imports for FDA screening. Although prescription drugs ordered from foreign Internet pharmacies may not meet FDA’s criteria for importation under the personal use exemption, FDA personnel routinely release illegally imported prescription drugs detained by Customs officials. FDA has determined that the use of agency resources to provide comprehensive coverage of illegally imported drugs for personal use is generally not justified. Instead, the agency’s enforcement priorities are focused on drugs intended for the commercial market and on fraudulent products and those that pose an unreasonable health risk. FDA’s inconsistent application of its personal use exemption frustrates Customs officials and does little to deter foreign Internet pharmacies trafficking in prescription drugs. Accordingly, FDA plans to take the necessary actions to eliminate, or at least mitigate to the extent possible, the inconsistent interpretation and application of its guidance and work more closely with Customs.
FDA’s approach to regulation of imported prescription drugs could be affected by enactment of pending legislation intended to allow American consumers to import drugs from certain other countries. Specifically, the appropriations bill for FDA (H.R. 4461) includes provisions that could modify the circumstances under which the agency may notify individuals seeking to import drugs into the United States that they may be in violation of federal law. According to an FDA official, it is not currently clear how these provisions, if enacted, could affect FDA’s ability to prevent the importation of violative drugs.
Initiatives at the state and federal levels offer several approaches for regulating Internet pharmacies. The organization representing state boards of pharmacy, NABP, has developed a voluntary program for certifying Internet pharmacies. In addition, state and federal officials believe that they need more authority, as well as information regarding the identity of Internet pharmacies, to protect the public’s health. The organization representing state Attorneys General, NAAG, has asked the federal government to expand the authority of its members to allow them to take action in federal court. In addition, the administration has announced a new initiative that would grant FDA broad new authority to better identify, investigate, and prosecute Internet pharmacies for the illegal sale of prescription drugs. Concerned that consumers have no assurance of the legitimacy of Internet pharmacies, NABP is attempting to provide consumers with an instant mechanism for verifying the licensure status of Internet pharmacies. NABP’s Verified Internet Pharmacy Practice Sites (VIPPS) is a voluntary program that certifies online pharmacies that comply with criteria that attempt to combine state licensing requirements with standards developed by NABP for pharmacies practicing on the Internet.
To obtain VIPPS certification, an Internet pharmacy must comply with the licensing and inspection requirements of the state where it is physically located and of each state to which it dispenses pharmaceuticals; demonstrate compliance with 17 standards by, for example, ensuring patient rights to privacy, authenticating and maintaining the security of prescription orders, adhering to a recognized quality assurance policy, and providing meaningful consultation between customers and pharmacists; undergo an on-site inspection; develop a postcertification quality assurance program; and submit to continuing random inspections throughout a 3-year certification period. VIPPS-certified pharmacies are identified by the VIPPS hyperlink seal displayed on both their own and NABP’s Web sites. Since VIPPS began in the fall of 1999, its seals have been presented to 11 Internet pharmacies, and 25 Internet pharmacies have submitted applications to display the seal.

NAAG strongly supports the VIPPS program but maintains that the most important tool the federal government can give the states is nationwide injunctive relief. Modeled on the federal telemarketing statute, nationwide injunctive relief is an approach that would allow state Attorneys General to take action in federal court; if they were successful, an Internet pharmacy would be prevented from illegally selling prescription drugs nationwide.

Two federal proposals would amend the FDCA to require an Internet pharmacy engaged in interstate commerce to include certain identifying language on its Web site. The Internet Pharmacy Consumer Protection Act (H.R. 2763) would amend the FDCA to require an Internet pharmacy engaged in interstate commerce to include a page on its Web site providing the following information: the name, address, and telephone number of the pharmacy’s principal place of business; each state in which the pharmacy is authorized by law to dispense prescription drugs; the name of each pharmacist and the state(s) in which the individual is licensed; and, if the site offers to provide prescriptions after medical consultation, the name of each prescriber, the state(s) in which the prescriber is licensed, and the health professions in which the individual holds such licenses. Also, under this act a state would have primary enforcement responsibility for any violation involving the purchase of a prescription drug made within the state, provided the state had requirements at least as stringent as those specified in the act and adequate procedures for enforcing those requirements.

In addition, the administration has developed a bill aimed at providing consumers the protections they enjoy when they go to a drugstore to have their prescriptions filled. For example, when consumers walk into a drugstore to have a prescription filled, they know the identity and location of the pharmacy, and the license on the wall provides visual assurance that the pharmacy meets certain health and safety requirements in that state. Under the Internet Prescription Drug Sales Act of 2000, Internet pharmacies would be required to be licensed in each state where they do business; comply with all applicable state and federal requirements, including the requirement to dispense drugs only pursuant to a valid prescription; and disclose identifying information to consumers.
Internet pharmacies also would be required to notify FDA and all applicable state boards of pharmacy prior to launching a new Web site. Internet pharmacies that met all of the requirements would be able to post on their Web site a declaration that they had made the required notifications. FDA would designate one or more private nonprofit organizations or state agencies to verify licensing information included in notifications and to examine and inspect the records and facilities of Internet pharmacies. Internet pharmacies that do not meet notification and disclosure requirements or that sell prescription drugs without a valid prescription could face penalties as high as $500,000 for each violation.

While they support the Internet Prescription Drug Sales Act of 2000, Justice officials have recommended that it be modified. Prescription drug sales from Internet pharmacies often rely on credit card transactions processed by U.S. banks and credit card networks. To enhance its ability to investigate and stop payment for prescription drugs purchased illegally, Justice has recommended that federal law be amended to permit the Attorney General to seek injunctions against certain financial transactions traceable to unlawful online drug sales. According to Justice officials, if the Department and financial institutions can stop even some of the credit card orders for the illicit sale of prescription drugs and controlled substances, the operations of some “rogue” Internet pharmacies may be disrupted significantly.

The unique qualities of the Internet pose new challenges for enforcing state pharmacy and medical practice laws because they allow pharmacies and physicians to reach consumers across state and international borders and remain anonymous. Internet pharmacies that fail to obtain licensure in the states where they operate may violate state law. But the Internet pharmacies that are affiliated with physicians who prescribe on the basis of an online questionnaire and those that dispense drugs without a prescription pose the greatest potential harm to consumers. Dispensing prescription drugs without adequate physician supervision increases the risk of consumers’ suffering adverse events, including side effects from inappropriately prescribed medications and misbranded or contaminated drugs. Some states have taken action to stop Internet pharmacies that offer online prescribing services from selling prescription drugs to residents of their state. But the real difficulty lies in identifying responsible parties and enforcing laws across state boundaries.

Enforcement actions by federal agencies have begun addressing the illegal prescribing and dispensing of prescription drugs by domestic Internet pharmacies and their affiliated physicians. Enactment of federal legislation requiring Internet pharmacies to disclose, at a minimum, who they are, where they are licensed, and how they will secure personal health information of consumers would assist state and federal authorities in enforcing existing laws. In addition, federal agencies have taken actions to address the illegal sale of prescription drugs from foreign Internet pharmacies. Cooperative efforts between federal agencies and a foreign government resulted in closing down some Internet pharmacies illegally selling prescription drugs to U.S. consumers. However, it is unclear whether these efforts will stem the flow of prescription drugs obtained illegally from other foreign sources.
As a result, the sale of prescription drugs from foreign-based Internet pharmacies continues to pose difficulties for federal regulatory authorities.

To help ensure that consumers and state and federal regulators can easily identify the operators of Web sites selling prescription drugs, the Congress should amend the FDCA to require that any pharmacy shipping prescription drugs to another state disclose certain information on its Internet site. The information disclosed should include the name, business address, and telephone number of the Internet pharmacy and its principal officers or owners, and the state(s) where the pharmacy is licensed to do business. In addition, where permissible by state law, Internet pharmacies that offer online prescribing services should also disclose the name, business address, and telephone number of each physician providing prescribing services, and the state(s) where the physician is licensed to practice medicine. The Internet Pharmacy Consumer Protection Act and the administration’s proposal would require Internet pharmacies to disclose this type of information.

We obtained comments on a draft of this report from FDA, Justice, FTC, and Customs, as well as NABP and FSMB. In general, they agreed that Internet pharmacies should be required to disclose pertinent information on their Web sites and thought that our report provided an informative summary of efforts to regulate Internet pharmacies. Some reviewers also provided technical comments, which we incorporated where appropriate. However, FDA suggested that our matter for congressional consideration implied that online questionnaires were acceptable as long as the physician’s name was properly disclosed. We did not intend to imply that online prescribing was proper medical practice. Rather, our report notes that most state medical boards responding to our survey have already concluded that a physician who issues a prescription on the basis of a review of an online questionnaire has not satisfied the standard of good medical practice required by state law. In light of this, federal action does not appear necessary. The disclosure of the responsible parties should assist state regulatory bodies in enforcing their laws.

FTC suggested that our matter for congressional consideration be expanded to recommend that the Congress grant states nationwide injunctive relief. Our report already discusses NAAG’s proposal that injunctive relief be modeled after the federal telemarketing statute. While the NAAG proposal may have some merit, an assessment of the implications of this proposal was beyond the scope of our study. FTC also recommended that the Congress enact federal legislation that would require consumer-oriented commercial Web sites that collect personal identifying information from or about consumers online, including Internet pharmacies, to comply with widely accepted fair information practices. Again, our study did not evaluate whether a federal consumer protection law was necessary or whether existing state laws and regulations may already offer this type of consumer protection.

NABP did not agree entirely with our assessment of the regulatory effectiveness of the state boards of pharmacy. It indicated that the boards, with additional funding and minor legislative changes, can regulate Internet pharmacies. Our study did not assess the regulatory effectiveness of individual state pharmacy boards.
Instead, we summarized responses by state pharmacy boards to our questions about their efforts to identify and take action against Internet pharmacies that are not complying with state law, and the challenges they face in regulating these pharmacies. Our report notes that many states identified limited resources and jurisdictional limitations as obstacles to enforcing their laws. NABP also suggested that our matter for congressional consideration include a requirement for independent verification of the information that Internet pharmacies are required to disclose on their Web sites. In our view, the current state regulatory framework would permit state boards to verify this information should they choose to do so.

We are sending copies of this report to the Honorable Donna E. Shalala, Secretary of Health and Human Services; the Honorable Jane E. Henney, Commissioner of FDA; the Honorable Janet Reno, Attorney General; the Honorable Donnie R. Marshall, Administrator of the DEA; the Honorable Robert Pitofsky, Chairman of the FTC; the Honorable Raymond W. Kelly, Commissioner of the U.S. Customs Service; the Honorable Kenneth C. Weaver, Chief Postal Inspector; appropriate congressional committees; and other interested parties. We will make copies available to others upon request. If you or your staffs have any questions about this report or would like additional information, please call me at (202) 512-7119 or John Hansen at (202) 512-7105. See appendix V for another GAO contact and staff acknowledgments.

To obtain information on the number of pharmacies practicing on the Internet, we conducted searches of the World Wide Web and obtained two lists compiled through similar searches: 235 Internet pharmacies identified by the National Association of Boards of Pharmacy (NABP) and 94 Internet pharmacies identified by staff of the House Committee on Commerce. After eliminating duplicate Web sites, we reviewed 296 potential sites between November and December 1999. Sites needed to meet two criteria to be included in our survey. First, they had to sell prescription drugs directly to consumers. Second, they had to be anchor sites (actual providers of services) and not portal sites (independent Web pages that connect to a provider). Most portal sites are paid a commission by anchor sites for displaying an advertisement or taking the user to the service provider’s site through a “click through.” We excluded 129 Web sites from our survey because they did not meet these criteria. See table 2 for details on our analysis of the Web sites that we excluded.

In April 2000, we obtained a list of 326 Web sites that FDA identified during March 2000. We reviewed all the sites on FDA’s list and compared it to the list of Internet pharmacies we had previously compiled. We found 117 Internet pharmacies that duplicated pharmacies on our list. We also excluded 186 Web sites that did not meet our two criteria and added the remaining 23 Internet pharmacies to our list. To categorize Internet pharmacies, we analyzed information on each Web site to determine whether the Internet pharmacy (1) required a prescription from the user’s physician to dispense a prescription drug, (2) in the absence of a prescription, required the user to complete an online questionnaire to obtain a prescription, or (3) dispensed prescription drugs without a prescription.
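To make the screening mechanics described above concrete, the following is a minimal sketch, in Python, of a deduplicate-filter-categorize pipeline of this kind. Everything in it is illustrative: the site records, field names, and category labels are hypothetical stand-ins, not data or tooling from the review.

```python
from collections import Counter

# Hypothetical candidate records gathered from multiple source lists.
candidates = [
    {"url": "rx-example-a.com", "sells_rx_drugs": True, "is_anchor_site": True,
     "prescribing_model": "requires_own_prescription"},
    {"url": "rx-example-b.com", "sells_rx_drugs": True, "is_anchor_site": False,  # portal site
     "prescribing_model": "online_questionnaire"},
    {"url": "rx-example-a.com", "sells_rx_drugs": True, "is_anchor_site": True,   # duplicate entry
     "prescribing_model": "requires_own_prescription"},
    {"url": "rx-example-c.com", "sells_rx_drugs": True, "is_anchor_site": True,
     "prescribing_model": "no_prescription"},
]

# Step 1: deduplicate by URL, keeping the first occurrence.
seen, unique = set(), []
for site in candidates:
    if site["url"] not in seen:
        seen.add(site["url"])
        unique.append(site)

# Step 2: inclusion criteria, mirroring the two in the text: the site must
# sell prescription drugs directly AND be an anchor site, not a portal.
included = [s for s in unique if s["sells_rx_drugs"] and s["is_anchor_site"]]

# Step 3: categorize by prescribing model, mirroring the report's three categories.
print(Counter(s["prescribing_model"] for s in included))
```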
We also collected data on the types of information available on each Internet pharmacy Web site, including information about the pharmacy’s licensure status, its mailing address and telephone number, and the cost of issuing a prescription. Using the domain name from the uniform resource locator, we performed online queries of Network Solutions, Inc. (one of the primary registrars for domain names) to obtain the name, address, and telephone number of the registrant of each Internet pharmacy (a protocol sketch of this kind of lookup appears at the end of this passage). We then telephoned all U.S.-based Internet pharmacies to obtain information on the states in which they dispensed prescription drugs and the states in which they were licensed or registered. See table 3 for details on our licensure information inquiry. Finally, we clustered Internet pharmacies by state and asked the pharmacy boards in 12 states—10 of which had the largest number of licensed/registered Internet pharmacies—to verify the licensure status of each pharmacy that told us it was licensed in the state.

To assess state efforts to regulate Internet pharmacies and physicians prescribing over the Internet, we conducted two mail surveys in December 1999. To obtain information on state efforts to identify, monitor, and regulate Internet pharmacies, we surveyed pharmacy boards in all 50 states and the District of Columbia. After making follow-up telephone calls, we received 50 surveys from the pharmacy boards in 49 states and the District of Columbia, or 98 percent of those we surveyed. The survey and survey results are presented in appendix III. We also interviewed the executive directors and representatives of the state pharmacy boards in nine states—Alabama, Iowa, Maryland, New York, North Dakota, Oregon, Texas, Virginia, and Washington—and the District of Columbia. In addition, we interviewed and obtained information from representatives of NABP, the American Pharmaceutical Association, the National Association of Attorneys General, and pharmaceutical manufacturers, as well as representatives of several Internet pharmacies.

To obtain information on state efforts to oversee physician prescribing practices on the Internet, we surveyed the 62 medical boards and boards of osteopathy in the 50 states and the District of Columbia. After follow-up telephone calls, we received 45 surveys from the medical boards in 39 states, or 73 percent of those we surveyed. The survey and survey results are presented in appendix IV. We also interviewed officials with the medical boards in five states: California, Colorado, Maryland, Virginia, and Wisconsin. In addition, we interviewed and obtained information from representatives of the American Medical Association and the Federation of State Medical Boards (FSMB).

To assess federal efforts to oversee pharmacies and physicians practicing on the Internet, we obtained information from officials of the Food and Drug Administration; the Federal Trade Commission; the Department of Justice, including the Drug Enforcement Administration; the U.S. Customs Service; and the U.S. Postal Service. We also reviewed the report of the President’s Working Group on Unlawful Conduct on the Internet.

The availability of prescription drugs on the Internet has attracted the attention of several professional associations. As a result, over the past year, several associations have convened meetings of representatives of professional, regulatory, law enforcement, and private sector entities to discuss issues related to the practice of pharmacy and medicine on the Internet.
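As promised above, here is a minimal sketch of the kind of domain-registrant lookup described in this methodology. WHOIS is a plain-text query protocol over TCP port 43 (RFC 3912); a client sends the domain name and reads back whatever registration record the server publishes. This is illustrative, not the tooling used for the report; the server shown handles .com registrations, and the domain is a placeholder.

```python
import socket

def whois_lookup(domain: str, server: str = "whois.verisign-grs.com") -> str:
    # WHOIS (RFC 3912): connect to TCP port 43, send the query terminated
    # by CRLF, then read the plain-text response until the server closes.
    with socket.create_connection((server, 43), timeout=10) as sock:
        sock.sendall((domain + "\r\n").encode("ascii"))
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode("utf-8", errors="replace")

print(whois_lookup("example.com"))  # prints the registrar's record as plain text
```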
We attended the May 1999 NABP annual conference, its September 1999 Executive Board meeting, and its November 1999 Internet Healthcare Summit 2000 to obtain information on the regulatory landscape for Internet pharmacy practice sites and the Verified Internet Pharmacy Practice Sites program. In January 2000, we attended a meeting convened by the FSMB of top officials from various government, medical, and public entities to discuss the efforts of state and federal agencies to regulate pharmacies and physicians practicing on the Internet. We also attended sessions of the March 2000 Symposium on Healthcare Internet and E-Commerce and the April 2000 Drug Information Association meeting. We conducted our work from May 1999 through September 2000 in accordance with generally accepted government auditing standards.

The following examples, reported by individual states, illustrate board positions on Internet prescribing and enforcement actions taken:
- Neither in-state nor out-of-state physicians may prescribe to state residents without meeting the patient, even if the patient completes an online questionnaire.
- Internet exchange does not qualify as an initial medical examination, and no legitimate patient/physician relationship is established by it.
- Physicians prescribing a specific drug to residents without being licensed in the state may be criminally liable.
- Physicians prescribing on the Internet must follow standards of care.
- The state Attorney General (AG) filed suit against four out-of-state online pharmacies for selling, prescribing, dispensing, and delivering prescription drugs without the pharmacies or physicians being licensed and with no physical examination.
- One board referred a physician to the medical board in another state and obtained an injunction against a physician; the Kansas Board of Healing Arts also filed a lawsuit against a physician for the unauthorized practice of medicine.
- The AG filed lawsuits against 10 online pharmacies and obtained restraining orders against the companies to stop them from doing business in Kansas, and filed lawsuits against 7 companies and individuals selling prescription drugs over the Internet.
- Dispensing medication without physical examination represents conduct that is inconsistent with the prevailing and usually accepted standards of care and may be indicative of professional or medical incompetence.
- The AG filed notices of intended action against 10 Internet pharmacies for illegally dispensing prescription drugs.
- One board referred Internet pharmacy(ies) to the AG for possible criminal prosecution.
- The AG filed suit and obtained permanent injunctions against two online pharmacies and physicians for practicing without state licenses.
- One board interviewed two physicians and suggested they stop prescribing over the Internet; they complied.
- The AG filed suits charging nine Internet pharmacies with consumer fraud violations for selling prescription drugs over the Internet without a state license.
- One state adopted regulations prohibiting physicians from prescribing or dispensing controlled substances or dangerous drugs to patients they have not examined and diagnosed in person; its pharmacy board adopted rules for the sale of drugs online, requiring licensure or registration of the pharmacy and disclosure.
- An Ohio doctor was indicted on 64 felony counts of selling dangerous drugs and drug trafficking over the Internet; the medical board may revoke his license.
- The AG filed lawsuits against three online companies and various pharmacies and physicians for practicing without proper licensing.

The following individuals made important contributions to this report: John C. Hansen directed the work; Claude B.
Hayeck collected information on federal efforts and, along with Darryl Joyce, surveyed state pharmacy boards; Renalyn A. Cuadro assisted in the surveys of Internet pharmacies and state medical boards; Susan Lawes guided survey development; Joan Vogel compiled and analyzed state pharmacy and medical board survey data; and Julian Klazkin served as attorney adviser. | The first Internet pharmacies began online service in early 1999. Public health officials are concerned about Internet pharmacies that do not adhere to state licensing requirements and standards. Public officials are also concerned about the validity of prescriptions and about foreign drugs not approved in the United States being sent to consumers by mail. The unique qualities of the Internet pose new challenges for enforcing state pharmacy and medical practice laws because they allow pharmacies and physicians to reach consumers across state and international borders and remain anonymous. Congress is considering legislation to strengthen oversight of Internet pharmacies. |
Video: USA TODAY Sports' Nancy Armour details the disturbing findings from the latest study linking CTE to playing football and what it means for the NFL. (USA TODAY Sports)
Former NFL quarterback Boomer Esiason says he might have CTE. (Photo: Kirby Lee, USA TODAY Sports)
Former NFL quarterback Boomer Esiason seems resigned to the fact that he could have a brain injury from his years of playing football.
"If I died tomorrow and my brain was taken and researched and it was found that I had CTE, which, most likely I have," he said Monday on his radio show "Boomer and Carton."
"All football players probably have it, the way I read it and the way I see it."
Esiason was discussing the recent study by the Boston University School of Medicine and the VA Boston Healthcare System, which found that 110 of 111 former NFL players who donated their brains to science showed signs of chronic traumatic encephalopathy (CTE).
While the NFL said more research is needed, the league is currently paying out a $1 billion settlement to former players and wives of deceased former players. The underlying lawsuit, filed in 2011, said the league didn't warn former players about the dangers of concussions.
Esiason indicated that the studies, lawsuits and awareness will make football better moving forward.
"The more we learn about our brains, the better it is for the guys who are playing today," Esiason said. "The good news for the guys who are playing today, especially those who are playing for a long period of time, is they get paid a hell of a lot more money than we did. They have much better benefits and retirement benefits than we do." ||||| Key Points
Question What are the neuropathological and clinical features of a case series of deceased players of American football neuropathologically diagnosed as having chronic traumatic encephalopathy (CTE)?
Findings In a convenience sample of 202 deceased players of American football from a brain donation program, CTE was neuropathologically diagnosed in 177 players across all levels of play (87%), including 110 of 111 former National Football League players (99%).
Meaning In a convenience sample of deceased players of American football, a high proportion showed pathological evidence of CTE, suggesting that CTE may be related to prior participation in football.
Abstract
Importance Players of American football may be at increased risk of long-term neurological conditions, particularly chronic traumatic encephalopathy (CTE).
Objective To determine the neuropathological and clinical features of deceased football players with CTE.
Design, Setting, and Participants Case series of 202 football players whose brains were donated for research. Neuropathological evaluations and retrospective telephone clinical assessments (including head trauma history) with informants were performed blinded. Online questionnaires ascertained athletic and military history.
Exposures Participation in American football at any level of play.
Main Outcomes and Measures Neuropathological diagnoses of neurodegenerative diseases, including CTE, based on defined diagnostic criteria; CTE neuropathological severity (stages I to IV or dichotomized into mild [stages I and II] and severe [stages III and IV]); informant-reported athletic history and, for players who died in 2014 or later, clinical presentation, including behavior, mood, and cognitive symptoms and dementia.
Results Among 202 deceased former football players (median age at death, 66 years [interquartile range, 47-76 years]), CTE was neuropathologically diagnosed in 177 players (87%; median age at death, 67 years [interquartile range, 52-77 years]; mean years of football participation, 15.1 [SD, 5.2]), including 0 of 2 pre–high school, 3 of 14 high school (21%), 48 of 53 college (91%), 9 of 14 semiprofessional (64%), 7 of 8 Canadian Football League (88%), and 110 of 111 National Football League (99%) players. Neuropathological severity of CTE was distributed across the highest level of play, with all 3 former high school players having mild pathology and the majority of former college (27 [56%]), semiprofessional (5 [56%]), and professional (101 [86%]) players having severe pathology. Among 27 participants with mild CTE pathology, 26 (96%) had behavioral or mood symptoms or both, 23 (85%) had cognitive symptoms, and 9 (33%) had signs of dementia. Among 84 participants with severe CTE pathology, 75 (89%) had behavioral or mood symptoms or both, 80 (95%) had cognitive symptoms, and 71 (85%) had signs of dementia.
Conclusions and Relevance In a convenience sample of deceased football players who donated their brains for research, a high proportion had neuropathological evidence of CTE, suggesting that CTE may be related to prior participation in football.
Introduction
Chronic traumatic encephalopathy (CTE) is a progressive neurodegeneration associated with repetitive head trauma.1-8 In 2013, based on a report of the clinical and pathological features of 68 men with CTE (including 36 football players from the current study), criteria for neuropathological diagnosis of CTE and a staging scheme of pathological severity were proposed.6 Two clinical presentations of CTE were described; in one, the initial features developed at a younger age and involved behavioral disturbance, mood disturbance, or both; in the other, the initial presentation developed at an older age and involved cognitive impairment.9 In 2014, a methodologically rigorous approach to assessing clinicopathological correlation in CTE was developed using comprehensive structured and semistructured informant interviews and online surveys conducted by a team of behavioral neurologists and neuropsychologists.10 In 2015, the neuropathological criteria for diagnosis of CTE were refined by a panel of expert neuropathologists organized by the National Institute of Neurological Disorders and Stroke and the National Institute of Biomedical Imaging and Bioengineering (NINDS-NIBIB).8
Using the NINDS-NIBIB criteria to diagnose CTE and the improved methods for clinicopathological correlation, the purpose of this study was to determine the neuropathological and clinical features of a case series of deceased football players neuropathologically diagnosed as having CTE whose brains were donated for research.
Methods
Study Recruitment
In 2008, as a collaboration among the VA Boston Healthcare System, Bedford VA, Boston University (BU) School of Medicine, and Sports Legacy Institute (now the Concussion Legacy Foundation [CLF]), a brain bank was created to better understand the long-term effects of repetitive head trauma experienced through contact sport participation and military-related exposure. The purpose of the brain bank was to comprehensively examine the neuropathology and clinical presentation of brain donors considered at risk of development of CTE. The institutional review board at Boston University Medical Campus approved all research activities. The next of kin or legally authorized representative of each brain donor provided written informed consent. No stipend for participation was provided. Inclusion criteria were based entirely on exposure to repetitive head trauma (eg, contact sports, military service, or domestic violence), regardless of whether symptoms manifested during life. Playing American football was sufficient for inclusion. Because of limited resources, more strict inclusion criteria were implemented in 2014 and required that football players who died after age 35 years have at least 2 years of college-level play. Donors were excluded if postmortem interval exceeded 72 hours or if fixed tissue fragments representing less than half the total brain volume were received (eFigure in the Supplement).
Clinical data were collected into a Federal Interagency Traumatic Brain Injury Research–compliant database. Since tracking began in 2014, the next of kin approached the brain bank near the time of death for 98 (81%) of the brain donations to the VA-BU-CLF Brain Bank. The remaining brain donors were referred by medical examiners (11 [9%]), recruited by a CLF representative (7 [6%]), or participated in the Brain Donation Registry during life (5 [4%]) (eFigure in the Supplement).
Clinical Evaluation
Retrospective clinical evaluations were performed using online surveys and structured and semistructured postmortem telephone interviews between researchers and informants. Researchers conducting these evaluations were blinded to the neuropathological analysis, and informants were interviewed before receiving the results of the neuropathological examination. A behavioral neurologist, neuroscientist, or neuropsychologist (J.M., D.H.D., T.M.S., M.L.A., or R.A.S.) obtained a detailed history, including a timeline of cognitive, behavioral, mood, and motor symptomology. Additionally, other neuropsychiatric symptoms, exposures and symptoms consistent with posttraumatic stress disorder, features of a substance use disorder, neurodegenerative diagnoses made in life (Alzheimer disease [AD], frontotemporal dementia, vascular dementia, dementia with Lewy bodies, Parkinson disease, CTE, or dementia of unknown etiology), headaches that impaired function, symptoms and diagnoses made in life of sleep disorders, and causes of death were assessed. Clinicians qualitatively summarized the participants’ clinical presentation (eg, presence and course of symptoms, functional independence) into a narrative and presented the case to a multidisciplinary consensus team of clinicians, during which it was determined whether the participant met criteria for dementia. To resolve discrepancies in methods that evolved over time, only clinical variables ascertained after January 2014 using a standardized informant report were included because of the larger subset of participants recruited during this time frame (n = 125).
Prior to January 2014, demographics, educational attainment, athletic history (type of sports played, level, position, age at first exposure, and duration), military history (branch, location of service, and duration of combat exposure), and traumatic brain injury (TBI) history (including number of concussions) were queried during the telephone interview. Beginning in January 2014, demographics, educational attainment, and athletic and military history were queried using an online questionnaire. Informant-reported race was collected as part of demographic information so that neuropathological differences across race could be assessed. To be considered a National Football League (NFL) athlete, a participant must have played in at least 1 regular-season NFL game. Professional position and years of play were verified using available online databases (http://www.pro-football-reference.com, http://databasefootball.com, http://www.justsportsstats.com). History of TBI was queried using informant versions of the Ohio State University TBI Identification Method Short Form11 and 2 questionnaires adapted from published studies that address military-related head injuries and concussions.12,13 With the addition of these questionnaires, informants were read a formal definition of concussion prior to being asked about concussion history, which was not the case prior to January 2014.
Neuropathological Evaluation
Pathological processing and evaluation were conducted using previously published methods.14,15 Brain volume and macroscopic features were recorded during initial processing. Twenty-two sections of paraffin-embedded tissue were stained for Luxol fast blue, hematoxylin and eosin, Bielschowsky silver, phosphorylated tau (ptau) (AT8), α-synuclein, amyloid-β, and phosphorylated transactive response DNA binding protein 43 kDa (pTDP-43) using methods described previously.16 In some cases, large coronal slabs of the cerebral hemispheres were also cut at 50 μm on a sledge microtome and stained as free-floating sections using AT8 or CP-13.16,17
A neuropathological diagnosis was made using criteria for CTE recently defined by the 2015 NINDS-NIBIB Consensus Conference8 and well-established criteria for other neuropathological diseases, including AD,18,19 Lewy body disease,20 frontotemporal lobar degeneration,21-25 and motor neuron disease.26,27 Neuropathological criteria for CTE require at least 1 perivascular ptau lesion consisting of ptau aggregates in neurons, astrocytes, and cell processes around a small blood vessel; these pathognomonic CTE lesions are most often distributed at the depths of the sulci in the cerebral cortex and are distinct from the lesions of aging-related tau astrogliopathy.8 Supportive features for the diagnosis of CTE include ptau pretangles and neurofibrillary tangles (NFTs) in superficial cortical layers (layers II/III) of the cerebral cortex; pretangles, NFTs or extracellular tangles in CA2 and CA4 of the hippocampus; subpial ptau astrocytes at the glial limitans; and dot-like ptau neurites.8
Chronic traumatic encephalopathy ptau pathology was classified into 4 stages using previously proposed criteria.6 Briefly, stage I CTE is characterized by 1 or 2 isolated perivascular epicenters of ptau NFTs and neurites (ie, CTE lesions) at the depths of the cerebral sulci in the frontal, temporal, or parietal cortices. In stage II, 3 or more CTE lesions are found in multiple cortical regions and superficial NFTs are found along the sulcal wall and at gyral crests. Multiple CTE lesions, superficial cortical NFTs, and diffuse neurofibrillary degeneration of the entorhinal and perirhinal cortices, amygdala, and hippocampus are found in stage III CTE. In stage IV CTE, CTE lesions and NFTs are densely distributed throughout the cerebral cortex, diencephalon, and brain stem with neuronal loss, gliosis, and astrocytic ptau pathology. Chronic traumatic encephalopathy pathology in stages I and II is considered to be mild and in stages III and IV is considered to be severe.
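A compact way to express the dichotomy just described: stages I and II map to mild pathology and stages III and IV to severe. The sketch below is an illustrative encoding of that published scheme, not software from the study.

```python
def cte_severity(stage: int) -> str:
    """Map a CTE neuropathological stage (1-4, for I-IV) to the
    mild/severe dichotomy used in the study."""
    if stage not in (1, 2, 3, 4):
        raise ValueError("CTE stage must be 1-4 (I-IV)")
    return "mild" if stage <= 2 else "severe"

assert cte_severity(1) == cte_severity(2) == "mild"
assert cte_severity(3) == cte_severity(4) == "severe"
```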
Neuropathological evaluation was blinded to the clinical evaluation and was reviewed by 4 neuropathologists (V.A., B.H., T.D.S., and A.M.); any discrepancies in the neuropathological diagnosis were solved by discussion and consensus of the group. In addition to diagnoses, the density of ptau immunoreactive NFTs, neurites, diffuse amyloid-β plaques, and neuritic amyloid-β plaques; vascular amyloid-β; pTDP-43; and α-synuclein immunoreactive Lewy bodies were measured semiquantitatively (0-3, with 3 being most severe) across multiple brain regions.
Descriptive statistics were generated using SPSS software version 20 (IBM Inc).
Results
Among the 202 deceased brain donors (median age at death, 66 years [interquartile range [IQR], 47-76 years]), CTE was neuropathologically diagnosed in 177 (87%; median age at death, 67 years [IQR, 52-77 years]; mean years of football participation, 15.1 [SD, 5.2]; 140 [79%] self-identified as white and 35 [19%] self-identified as black), including 0 of 2 pre–high school, 3 of 14 high school (21%), 48 of 53 college (91%), 9 of 14 semiprofessional (64%), 7 of 8 Canadian Football League (88%), and 110 of 111 NFL (99%) players.
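As a quick check on the arithmetic, the percentages above follow directly from the reported counts. The sketch below simply recomputes them (counts copied from the text); for example, the overall proportion is 177/202, or 87.6 percent, reported in the text as 87 percent.

```python
# (CTE-diagnosed, brains examined) per highest level of play, from the text.
counts = {
    "pre-high school": (0, 2),
    "high school": (3, 14),
    "college": (48, 53),
    "semiprofessional": (9, 14),
    "Canadian Football League": (7, 8),
    "NFL": (110, 111),
}

for level, (cte, total) in counts.items():
    print(f"{level}: {cte}/{total} = {cte / total:.1%}")

cte_total = sum(c for c, _ in counts.values())
n_total = sum(t for _, t in counts.values())
print(f"overall: {cte_total}/{n_total} = {cte_total / n_total:.1%}")  # 177/202 = 87.6%
```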
The median age at death for participants with mild CTE pathology (stages I and II) was 44 years (IQR, 29-64 years) and for participants with severe CTE pathology (stages III and IV) was 71 years (IQR, 64-79 years). The most common cause of death for participants with mild CTE pathology was suicide (12 [27%]) and for those with severe CTE pathology was neurodegenerative (ie, dementia-related and parkinsonian-related causes of death) (62 [47%]). The severity of CTE pathology was distributed across the highest level of play, with all former high school players having mild pathology (3 [100%]) and the majority of former college (27 [56%]), semiprofessional (5 [56%]), Canadian Football League (6 [86%]), and NFL (95 [86%]) players having severe pathology. The mean duration of play for participants with mild CTE pathology was 13 years (SD, 4.2 years) and for participants with severe CTE pathology was 15.8 years (SD, 5.3 years) (Table 1).
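For readers unfamiliar with the summary statistics used throughout this Results section (median and interquartile range), here is a minimal sketch of how they are computed. The ages in it are fabricated placeholders for illustration only, not study data.

```python
import numpy as np

# Fabricated example ages, NOT data from the study.
ages_at_death = np.array([29, 44, 52, 64, 66, 67, 71, 76, 77, 79])

median = np.median(ages_at_death)
q1, q3 = np.percentile(ages_at_death, [25, 75])  # IQR = 25th to 75th percentile
print(f"median age at death, {median:.0f} years (IQR, {q1:.0f}-{q3:.0f} years)")
```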
In all cases, perivascular clusters of ptau immunoreactive NFTs diagnostic for CTE (ie, CTE lesions)8 were found in the cerebral cortex (Figure 1 and Figure 2). In cases with mild CTE pathology (stages I and II), isolated perivascular CTE lesions were found at the sulcal depths of the cerebral cortex, most commonly in the superior and dorsolateral frontal cortices, but also in the lateral temporal, inferior parietal, insula, and septal cortices (Figure 1). Neurofibrillary tangles were sparse in other cortical regions, and there was no diffuse neurofibrillary degeneration of the medial temporal lobe structures (Figure 1, open arrowheads). Neurofibrillary tangles were also found in the locus coeruleus, substantia nigra, and substantia innominata (Figure 3) in mild CTE. In cases with severe CTE pathology, perivascular CTE lesions were large and confluent (Figure 2). Neurofibrillary tangles were widely distributed in the superficial laminae of cortical regions and there was severe neurofibrillary degeneration of the medial temporal lobe structures, including the hippocampus, amygdala, and entorhinal cortex (Figure 2, black arrowheads, and Figure 3). Neurofibrillary tangles were also frequent in the thalamus, nucleus basalis of Meynert, substantia innominata, substantia nigra, and locus coeruleus in severe CTE (Figure 3).
Deposition of amyloid-β was present in a subset of participants at all stages of CTE pathology, predominantly as diffuse amyloid-β plaques, but neuritic amyloid-β plaques and amyloid angiopathy were also present. In stage IV CTE, amyloid-β deposition occurred in 52 cases (91%). Deposition of TDP-43 and α-synuclein were found in all stages of CTE pathology; TDP-43 deposition occurred in 47 (83%) and α-synuclein deposition occurred in 23 (40%) stage IV CTE cases (Table 2).
Among the 25 football players without CTE, 9 showed no pathological abnormalities and 7 showed nonspecific changes; eg, hemosiderin-laden macrophages (n = 7) and axonal injury (n = 5). Other diagnoses included vascular pathology (n = 4), unspecified tauopathy not meeting criteria for CTE (n = 3), AD (n = 2), argyrophilic grain disease (n = 1), and Lewy body disease (n = 1).
Data on informants were collected beginning in 2014. The median number of participating informants was 2 (IQR, 1-3) per participant. Among all of the interviews, 71 (64%) included a spouse/partner, 56 (51%) included an adult child, 27 (24%) included a sibling, 16 (14%) included a parent, 13 (12%) included a non–first-degree relative, 8 (7.2%) included a neighbor or friend, and 4 included other informants. Among the informants who knew the participant the longest, the mean relationship length was 45.8 years (SD, 1.5 years).
Among the 111 CTE cases with standardized informant reports on clinical symptoms, a reported progressive clinical course was common in participants with both mild and severe CTE pathology, occurring in 23 (85%) mild cases and 84 (100%) severe cases (Table 3). Behavioral or mood symptoms were common in participants with both mild and severe CTE pathology, with symptoms occurring in 26 (96%) mild cases and 75 (89%) severe cases. Impulsivity, depressive symptoms, apathy, and anxiety occurred in 23 (89%), 18 (67%), 13 (50%), and 14 (52%) mild cases and 65 (80%), 46 (56%), 43 (52%), and 41 (50%) severe cases, respectively. Additionally, hopelessness, explosivity, being verbally violent, being physically violent, and suicidality (including ideation, attempts, or completions) occurred in 18 (69%), 18 (67%), 17 (63%), 14 (52%), and 15 (56%) mild cases, respectively. Substance use disorders were also common in participants with mild CTE, occurring in 18 (67%) mild cases. Symptoms of posttraumatic stress disorder were uncommon in both groups, occurring in 3 (11%) mild cases and 9 (11%) severe cases.
Cognitive symptoms were common in participants with both mild and severe CTE pathology, with symptoms occurring in 23 (85%) mild cases and 80 (95%) severe cases. Memory, executive function, and attention symptoms occurred in 19 (73%), 19 (73%), and 18 (69%) mild cases and 76 (92%), 67 (81%), and 67 (81%) severe cases, respectively. Additionally, language and visuospatial symptoms occurred in 54 (66%) and 44 (54%) severe cases, respectively. A premortem diagnosis of AD and a postmortem (but blinded to pathology) consensus diagnosis of dementia were common in severe cases, occurring in 21 (25%) and 71 (85%), respectively. There were no asymptomatic (ie, no mood/behavior or cognitive symptoms) CTE cases. Motor symptoms were common in severe cases, occurring in 63 (75%). Gait instability and slowness of movement occurred in 55 (66%) and 42 (50%) severe cases, respectively. Symptom frequencies remained similar when only pure CTE cases (ie, those with no neuropathological evidence of comorbid neurodegenerative disease) were considered (eTable in the Supplement).
Among the 111 CTE cases with standardized informant reports on clinical symptoms, 47 (42.3%; median age at death, 76 years [IQR, 63-81 years]) initially presented with cognitive symptoms, 48 (43.2%; median age at death, 66 years [IQR, 54-73 years]) initially presented with behavior or mood symptoms, and 16 (14.4%; median age at death, 65.5 years [IQR, 39-78]) initially presented with both cognitive symptoms and behavior or mood symptoms. Forty (85%) of those initially presenting with only cognitive symptoms were reported to have behavior or mood symptoms at the time of death and 43 (90%) of those initially presenting with only behavior or mood symptoms were reported to have cognitive symptoms at the time of death. Dementia was present at the time of death in 36 (77%) of those initially presenting with cognitive symptoms, 33 (69%) of those initially presenting with behavior or mood symptoms, and 11 (69%) of those initially presenting with both cognitive and behavior or mood symptoms.
The most common primary cause of death was neurodegenerative for all 3 groups (cognitive, 26 [55%]; behavior or mood, 16 [33%]; both cognitive and behavior or mood, 6 [38%]). Substance use disorders, suicidality, and family history of psychiatric illness were common among those who initially presented with behavior or mood symptoms, occurring in 32 (67%), 22 (47%), and 23 (49%) cases, respectively.
Discussion
In a convenience sample of 202 deceased former players of American football who were part of a brain donation program, a high proportion were diagnosed neuropathologically with CTE. The severity of CTE pathology was distributed across the highest level of play, with all former high school players having mild pathology and the majority of former college, semiprofessional, and professional players having severe pathology. Behavior, mood, and cognitive symptoms were common among those with mild and severe CTE pathology and signs of dementia were common among those with severe CTE pathology.
Nearly all of the former NFL players in this study had CTE pathology, and this pathology was frequently severe. These findings suggest that CTE may be related to prior participation in football and that a high level of play may be related to substantial disease burden. Several other football-related factors may influence CTE risk and disease severity, including but not limited to age at first exposure to football, duration of play, player position, cumulative hits, and linear and rotational acceleration of hits. Recent work in living former football players has shown that age at first exposure may be related to impaired cognitive performance29 and altered corpus callosum white matter30 and that cumulative hits may be related to impairment on self-report and objective measures of cognition, mood, and behavior,31 although it is unclear if any of these outcomes are related to CTE pathology. Furthermore, it is unclear if symptomatic hits (concussions) are more important than asymptomatic hits resulting in subconcussive injury. As with other neurodegenerative diseases, age may be related to risk and pathological severity in CTE. It will be important for future studies to resolve how different measures of exposure to football and age influence the outcome.
In cases with severe CTE pathology, accumulations of amyloid-β, α-synuclein, and TDP-43 were common. These findings are consistent with previous studies that show deposition of multiple neurodegenerative proteins after exposure to TBI32 and with work showing that neuritic amyloid-β plaques are associated with increased CTE neuropathological stage.33 Diagnoses of comorbid neurodegenerative diseases, including AD, Lewy body disease, motor neuron disease, and frontotemporal lobar degeneration, were also common in cases with severe CTE pathology. Overall, 19% of participants with CTE had comorbid Lewy body disease, which aligns with a recent observation by Crane et al34 regarding the increased prevalence of Lewy body pathology after single TBI. Chronic traumatic encephalopathy was not assessed in the analysis by Crane et al; to investigate the possibility of CTE after single TBI would require more extensive sampling of the depths of the cortical sulci with ptau immunostaining, as silver stains typically do not detect CTE pathology.
Behavioral, mood, and cognitive symptoms were common among participants with either mild or severe CTE pathology. In participants with severe CTE pathology, there was marked ptau pathology in brain regions that have been associated with symptoms frequently reported: impulsivity, depressive symptoms, apathy, anxiety, and explosivity (prefrontal cortex, amygdala, locus coeruleus); episodic memory symptoms (hippocampus and entorhinal and perirhinal cortices); and attention and executive function symptoms (prefrontal cortex). Participants with mild CTE pathology often had these symptoms despite having relatively circumscribed cortical pathology and absence of ptau pathology in the hippocampus, entorhinal cortex, or amygdala. This may suggest that other pathologies not captured by the pathological data set, such as neuroinflammation, axonal injury, or astrocytosis, or pathologies in neuroanatomical regions not evaluated contribute to these clinical symptoms. Microglial neuroinflammation appears to precede tau accumulation in CTE,35 suggesting it may play a role in early symptoms.
Informants reported that 43% of participants had behavior or mood symptoms as their initial presentation. Many of these participants had a substance use disorder, demonstrated suicidality, or had a family history of psychiatric illness. Behavior or mood symptoms may be the initial presentation for a subset of individuals with CTE, or alternatively, CTE ptau pathology may lower the threshold for psychiatric manifestations in susceptible individuals. These clinical observations confirm and expand on previous reports of 2 primary clinical presentations of CTE.9
There is substantial evidence that CTE is a progressive, neurodegenerative disease. In this study, 107 participants (96%) had a progressive clinical course based on informant report. In addition, pathological severity of CTE was correlated with age at death (Table 3). However, a postmortem study evaluates brain pathology at only 1 time point and is by definition cross-sectional. In addition, the participants were not observed longitudinally during life. Although associations with age in cross-sectional samples can result from age-related progression within individuals, they can also arise from birth cohort effects, differential survival, or age-related differences in how individuals were selected into the study. Population-based prospective studies are needed to address the issue of progression of CTE pathology and age at symptom onset.
The strengths of this study are that this is the largest CTE case series ever described to our knowledge, more than doubling the size of the 2013 report,6 and that all participants were exposed to a relatively similar type of repetitive head trauma while playing the same sport. In addition, the comprehensive neuropathological evaluation and retrospective clinical data collection were independently performed while blinded to the findings of the other investigators.
This study had several limitations. First, a major limitation is ascertainment bias associated with participation in this brain donation program. Although the criteria for participation were based on exposure to repetitive head trauma rather than on clinical signs of brain trauma, public awareness of a possible link between repetitive head trauma and CTE may have motivated players and their families with symptoms and signs of brain injury to participate in this research. Therefore, caution must be used in interpreting the high frequency of CTE in this sample, and estimates of prevalence cannot be concluded or implied from this sample. Second, the VA-BU-CLF brain bank is not representative of the overall population of former players of American football; most players of American football have played only on youth or high school teams, but the majority of the brain bank donors in this study played at the college or professional level. Additionally, selection into brain banks is associated with dementia status, depression status, marital status, age, sex, race, and education.36 Third, this study lacked a comparison group that is representative of all individuals exposed to American football at the college or professional level, precluding estimation of the risk of participation in football and neuropathological outcomes.
Conclusions
In a convenience sample of deceased football players who donated their brains for research, a high proportion had neuropathological evidence of CTE, suggesting that CTE may be related to prior participation in football.
Article Information
Corresponding Author: Ann C. McKee, MD, Neuropathology Service, VA Boston Healthcare System, CTE Center, Boston University Alzheimer’s Disease Center, Boston University School of Medicine, 150 S Huntington Ave, Boston, MA 02118 (amckee@bu.edu).
Accepted for Publication: June 20, 2017.
Author Contributions: Drs Mez and McKee had full access to all of the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis. Drs Mez and Daneshvar and Mr Kiernan are co–first authors.
Concept and design: Mez, Daneshvar, Abdolmohammadi, Murphy, Montenigro, Kowall, Cantu, Stern, McKee.
Acquisition, analysis, or interpretation of data: Mez, Daneshvar, Kiernan, Abdolmohammadi, Alvarez, Huber, Alosco, Solomon, Nowinski, McHale, Cormier, Kubilus, Martin, Murphy, Baugh, Montenigro, Chaisson, Tripodis, Weuve, McClean, Goldstein, Katz, Stern, Stein, McKee.
Drafting of the manuscript: Mez, Daneshvar, Abdolmohammadi, Alosco, Martin, Murphy, Montenigro, McKee.
Critical revision of the manuscript for important intellectual content: Mez, Daneshvar, Kiernan, Abdolmohammadi, Alvarez, Huber, Alosco, Solomon, Nowinski, McHale, Cormier, Kubilus, Baugh, Chaisson, Tripodis, Kowall, Weuve, McClean, Cantu, Goldstein, Katz, Stern, Stein, McKee.
Statistical analysis: Mez, Daneshvar, Abdolmohammadi, Huber, Montenigro, Tripodis, Weuve.
Obtained funding: Mez, Nowinski, McKee.
Administrative, technical, or material support: Daneshvar, Kiernan, Abdolmohammadi, Alvarez, Huber, Alosco, McHale, Cormier, Kubilus, Murphy, Baugh, Montenigro, Chaisson, Kowall, McClean, Stein, McKee.
Supervision: Mez, Daneshvar, Abdolmohammadi, Solomon, Cantu, Stern, McKee.
Conflict of Interest Disclosures: All authors have completed and submitted the ICMJE Form for Disclosure of Potential Conflicts of Interest. Dr Nowinski reported that he receives travel reimbursements for various unpaid advisory roles from the NFL Players’ Association, Major League Lacrosse, World Wrestling Entertainment (WWE), National Collegiate Athletic Association (NCAA), and the Ivy League; receives royalties from the publication of his book Head Games: The Global Concussion Crisis, published by Head Games The Film; served as a consultant for MC10 Inc as recently as 2013; serves as chief executive officer of the Concussion Legacy Foundation; and receives speaking honoraria and travel reimbursements for educational lectures. Ms Baugh reported that she receives research funding through the NCAA and the Harvard Football Players Health Study, which is funded by the NFL Players’ Association. Dr Cantu reported that he receives compensation from the NFL as senior advisor to its Head, Neck and Spine Committee, from the National Operating Committee on Standards for Athletic Equipment as chair of its Scientific Advisory Committee and from the Concussion Legacy Foundation as cofounder and medical director for some talks given and receives royalties from Houghton Mifflin Harcourt and compensation from expert legal opinion. Dr Stern reported that he has received research funding from the NFL, the NFL Players’ Association, and Avid Radiopharmaceuticals Inc; is a member of the Mackey-White Committee of the NFL Players’ Association; is a paid consultant to Amarantus BioScience Holdings Inc, Avanir Pharmaceuticals Inc, and Biogen; and receives royalties for published neuropsychological tests from Psychological Assessment Resources Inc and compensation from expert legal opinion. Dr McKee reported that she has received funding from the NFL and WWE and is a member of the Mackey-White Committee of the NFL Players’ Association.
Funding/Support: This study received support from NINDS (grants U01 NS086659, R01 NS078337, R56 NS078337, U01 NS093334, and F32 NS096803), the National Institute on Aging (grants K23 AG046377, P30AG13846 and supplement 0572063345-5, R01 AG1649), the US Department of Defense (grant W81XWH-13-2-0064), the US Department of Veterans Affairs (I01 CX001038), the Veterans Affairs Biorepository (CSP 501), the Veterans Affairs Rehabilitation Research and Development Traumatic Brain Injury Center of Excellence (grant B6796-C), the Department of Defense Peer Reviewed Alzheimer’s Research Program (grant 13267017), the National Operating Committee on Standards for Athletic Equipment, the Alzheimer’s Association (grants NIRG-15-362697 and NIRG-305779), the Concussion Legacy Foundation, the Andlinger Family Foundation, the WWE, and the NFL.
Role of the Funder/Sponsor: The funders of the study had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; or decision to submit for publication.
Additional Contributions: We acknowledge the use of the resources and facilities at VA Boston Healthcare System and the Edith Nourse Rogers Memorial Veterans Hospital (Bedford, Massachusetts). We also acknowledge the help of all members of the CTE Center at Boston University School of Medicine, the Concussion Legacy Foundation, and the individuals and families whose participation and contributions made this work possible. | Former quarterback Boomer Esiason says he “most likely” has CTE, the degenerative brain disease that a recent study found in the brains of nearly all the deceased football players it examined. And he thinks he’s not the only one who’s been affected by years of concussions on the field. On his radio show Boomer and Carton, a discussion of the study released two weeks ago prompted Esiason to open up about his own risk as a former NFL player, reports USA Today. “If I died tomorrow and my brain was taken and researched and it was found that I had CTE, which, most likely I have,” he said. “All football players probably have it, the way I read it and the way I see it.” Conducted jointly by the VA Boston Healthcare System and Boston University School of Medicine, the study found that 110 of the 111 brains donated by NFL players contained evidence of CTE, or chronic traumatic encephalopathy. (Per the New York Daily News, the disease can currently only be diagnosed in autopsies.) The NFL was hit by a $1 billion lawsuit in 2011 filed by former players and wives of deceased players who say the league didn’t warn of the long-term dangers concussions can have. “The more we learn about our brains, the better it is for the guys who are playing today,” Esiason said. “They have much better benefits and retirement benefits than we do.”
The Authority has designed an integrated transportation network for Los Angeles County called the Metro System. To develop part of that system, it has signed three grant agreements with FTA to help fund the final design and construction phases of a heavy rail system called the Red Line. As shown in figure 1, the Red Line links with other parts of the Metro System—the Blue and Green lines, which are light rail systems. The Green Line and part of the Blue Line are operational; both have relied solely on local funds to pay the construction costs. The Authority will use state and local funds for the Pasadena Blue Line, which is under construction. In contrast, the Red Line project is being designed and constructed using federal, state, and local funds. As of February 1996, about $3.3 billion had been appropriated for the project: $1.8 billion from the federal government; $393 million from the state government; and $1.1 billion from local governments. [Figure 1: map of the Metro System, effective December 1995; map not to scale.] The Authority is responsible for the design, construction, and operation of the Red Line project. Day-to-day design activities are managed by an engineering management consultant, while construction activities are managed by several construction management consultants, all under contract to the Authority. FTA approves and oversees expenditures of federal funds for the project and has hired a contractor, Hill International, Inc., to help ensure that the Authority maintains a reasonable process for successfully designing and constructing the project and to monitor the Authority’s development and implementation of the project. As of February 1996, the Authority estimated the project’s total cost at $5.9 billion, an 8 percent ($427 million) increase over the $5.5 billion estimated in the grant agreements. As shown in table 1, this increase was the result of construction problems, new construction requirements, and enhancements of the project. For example, on segment one, unanticipated groundwater and soil contamination resulted in costs for cleanup as well as for purchasing property on the right-of-way for a new station alignment that avoided the contaminated area. In addition, on segment two, the Authority’s Board of Directors approved $65 million to add a new station entrance and make some modifications to stations for commercial development. Furthermore, during the construction of segment two, a small part of Hollywood Boulevard collapsed into the subway tunnel being dug under the roadway, creating a 70-by-70-foot sinkhole. As a result of this and prior problems, the Authority fired the contractor. Contract costs resulting from the firing and from the rebidding of the remaining work will add about $67 million to the project’s costs. The Authority believes its cost containment program has helped to keep cost increases to a minimum. The project’s cost could increase beyond the $5.9 billion estimate. For example, on segment three, the design of the Mid-City extension was suspended following the discovery of high concentrations of hydrogen sulfide gas on the planned tunnel alignment. The Authority is considering two alternatives: (1) constructing shallow underground stations and a tunnel, estimated to cost an additional $190 million, or (2) constructing a subway with aboveground stations, estimated to cost an additional $130 million.
A third option—constructing a deep tunnel with a different alignment—is being studied because of public opposition to the proposed aboveground stations and the estimated costs of the first two alternatives. Authority officials estimate that it could take up to 8 years to complete the Mid-City extension after the Authority’s board chooses an alternative. The Authority has made management decisions that may increase costs in the short term but that could provide better quality design work and forestall costly water damage to the tunnel. For instance, the final design of the East Side extension is behind schedule because the Authority is requiring the design contractor to, among other things, implement new quality control and cost containment procedures and perform additional geotechnical studies of fault areas before proceeding. In addition, the Authority has implemented some mitigation measures for the North Hollywood extension that are delaying construction, including performing additional grouting to stabilize the soil and prevent water from flowing into the tunnel. Pending lawsuits could also increase costs. For example, tunneling under Runyon Canyon Park is scheduled to begin at the end of May 1996. However, a lawsuit filed by two environmental groups seeks to prevent digging and tunneling below the canyon until federal, state, and local agencies develop a supplement to the 1989 environmental impact statement. If the tunneling is delayed, the project’s schedule would be extended, thereby increasing costs. Other lawsuits could also increase costs because they include financial claims against the Authority. These lawsuits were filed by retail establishment owners affected by surface settlement along Hollywood Boulevard and by the construction contractor that the Authority fired for inadequate construction techniques. Depending on the outcome of the lawsuits and the ability of the Authority’s existing insurance to cover any awards against the Authority, the risk remains that the project’s cost will increase. The Authority estimates that it has secured sufficient federal, state, and local funding to finance $5.9 billion, its current estimate of the project’s total cost. However, about $380 million in financing commitments may not be realized. Furthermore, as noted earlier, the cost could increase beyond the current estimate. Therefore, to cover current and future funding shortfalls, the Authority may have to make difficult decisions, such as reducing the funding or scope of other rail capital projects; deferring or cancelling planned transit projects; or extending the schedule for completing the Red Line, which could further increase the project’s cost. The Authority plans to fund $3.1 billion of the project’s $5.9 billion total cost with federal funds and the remainder from state and local funding sources. Most of the federal funds—$2.8 billion—are from FTA’s new starts discretionary capital program. An additional $300 million has been provided from other federal programs, including the Surface Transportation and Congestion Mitigation and Air Quality programs—highway programs that provide states with the flexibility to use these funds for transit projects. California has committed about $539 million of the project’s funding. The majority of these state funds, about $500 million, are being provided from state gas tax revenues, which are allocated to both highway and transit projects.
The remainder of the state’s share of the cost of the project will come from revenues generated from general obligation bonds for rail capital projects. Local funding for the project—about $2.3 billion—comes from three sources: Los Angeles County, the city of Los Angeles, and assessments levied on properties adjacent to the planned stations. Los Angeles County dedicates revenues from a 1-cent sales tax to the Authority for existing transit systems and new transit projects in the Los Angeles area; the Authority has allocated about $1.6 billion of these revenues to the Red Line. Some funds from the county’s dedicated sales tax are returned to the surrounding cities. The city of Los Angeles uses a portion of these funds to finance the 7 percent of the project’s costs that it has committed. The Authority estimates that the remainder of the local funding for the project will be derived from assessments levied on the retail properties adjacent to planned Red Line stations on all three segments; the Authority has designated, or will designate, these areas as “benefit assessment districts” because they may derive benefits from the project. About $380 million committed by federal, state, and local governments toward the current cost estimate of $5.9 billion may not be realized. On the federal level, there is currently a $94 million shortfall. Under the grant agreements for the Red Line, the federal government committed, subject to annual appropriations, $2.8 billion for the expected life of the project. The agreements break this total down into yearly amounts that are also contingent upon congressional action to appropriate funds. In fiscal years 1995 and 1996, the Congress did not provide the annual commitments identified in the grant agreements, resulting in the funding shortfall. While the grant agreements allow the federal government to provide additional funds at a later date to cover any annual shortfalls, and the Authority’s long-range plan assumes that the shortfalls will be made up, federal budget constraints could make it difficult to make up existing or additional shortfalls in the future. Authority officials indicated that they could absorb an additional small shortfall in fiscal year 1997 but may not be able to complete the Red Line as scheduled if there are future shortfalls in the federal funding. In 1995, the state legislature diverted $50 million in state sales tax revenues that had been committed to the Authority for its bus operations. Since the legislature specified that the shortfall could not be allowed to affect the bus program, the Authority provided bus operations with $50 million in county sales tax revenues that had been slated for segment three. Authority officials told us that they must offset this loss through operating efficiencies over the next 4 years and may delay segment three by 1 year. Some of the Authority’s local revenue commitments may also not be realized. The Authority is currently working with the city to reach agreement on its commitment to contribute $200 million for segment three. The Authority’s long-range plan indicates that if the city’s contribution to the project does not materialize, funds slated for current and planned rail construction projects, such as the Pasadena Blue Line and further extensions to the Red Line, would be needed to make up the shortfall. Diverting these funds could delay the affected projects by up to 3 years.
Furthermore, the Authority’s long-range plan also states that $36 million of the expected $75 million in estimated revenues from assessments levied on retail properties adjacent to the planned stations for segments two and three may not be realized because retail property owners oppose the assessment. Apart from the revenues from the county’s dedicated sales tax, the Authority’s funding sources for cost increases beyond the $5.9 billion estimate are somewhat limited. Federal funds will likely not be forthcoming to finance further cost increases for the Red Line project. The grant agreements essentially limit the federal government’s exposure to increased costs for the project by capping the federal share from the new starts discretionary grant program at $2.8 billion. However, an extraordinary cost provision in the agreements allows the Authority to seek additional federal funds under certain circumstances, such as higher-than-estimated inflation. In 1995, the Authority requested an additional $30 million in federal funds under this provision for segment one. While FTA has not formally responded to the Authority’s request, FTA officials told us that because of the amount of competition for new starts discretionary grant funds, FTA is unlikely to grant this or future requests for funds above the level in the grant agreements. In fact, FTA has approved only one of several requests for extraordinary costs from grantees in the new starts program—for the St. Louis Metrolink—in the last 5 years. Without increased federal funds, the Authority will have to turn to state and local funding sources. However, the state will provide funds only in the case of extraordinary costs. On the other hand, the city of Los Angeles will pay 50 percent of the cost increase for segment one—up to $100 million—and has committed to pay up to $90 million for segment two. The city has made no commitment to fund cost increases for segment three. The remaining local funding source is the county’s dedicated sales tax. FTA and Hill International officials believe that one way the Authority can absorb increases above the current cost estimate is by using revenues that the Authority currently allocates to other rail capital projects. However, Authority officials told us that the amount of flexibility the Authority has in a given year is limited, in part because about 70 percent of discretionary sales tax revenues are allocated to the bus program and the Authority does not plan to use these funds for the Red Line project. Therefore, any decision to use sales tax revenues could adversely affect other rail capital projects. For example, when the recent recession reduced planned revenues, the Authority allocated these losses to the Pasadena Blue Line project. This delayed the project, which was not yet under construction, for 3 years. This decision meant that the Red Line would not lose revenues and could maintain its construction schedule. To determine how much flexibility it has to address a cost increase and/or revenue loss, the Authority assesses the magnitude of the increase and/or loss, the Red Line’s completion schedule, the available bonding capacity based on sales tax revenues, other potential sources of funding, and the impact on other rail capital projects. For example, the Authority recently determined that it had enough bonding capacity to provide $40 million toward the cost increase for segment two and still maintain the Red Line’s construction schedule. 
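As a reader’s cross-check (our arithmetic from the figures above, assuming the $380 million is the sum of the four potential shortfalls just described; the report does not total these in one place), the funding plan and the at-risk commitments both reconcile:

\[ \$3.1\text{B (federal)} + \$0.539\text{B (state)} + \$2.3\text{B (local)} \approx \$5.9\text{B planned funding} \]

\[ \$94\text{M (federal)} + \$50\text{M (state)} + \$200\text{M (city)} + \$36\text{M (assessments)} = \$380\text{M at risk} \]

Both sums assume that the remaining commitments and the Authority’s bonding capacity hold.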
However, Authority officials acknowledge that if the bonding capacity is not sufficient and no other funding sources are available, the Red Line’s completion schedule would have to be extended and the project’s cost could increase. According to Authority officials, the Red Line is their number-one rail priority and the decision on the new alignment for Mid-City—not expected for about a year—is the single most costly increase currently expected for the project. They stated that the project would have to be assessed at that time to determine whether revenues are available to fund construction or whether that extension will have to be delayed. Depending on how long the Mid-City extension is delayed, funding slated for other projects, such as the San Fernando extension, scheduled to begin in 2003, could be used for Mid-City. FTA’s monitoring of financing capacity for the project, particularly once the cost of the Mid-City extension is determined, will be critical to help ensure that funding is available to proceed with design and construction. In November 1994, the Authority and FTA agreed to a plan to improve the overall management of construction of the Red Line project. However, this plan did not come about until FTA took action to stop tunneling under Hollywood Boulevard for the Red Line and temporarily suspended federal funding for the project to compel the Authority to address long-standing problems. Among these long-standing problems was the lack of a mechanism for elevating safety and quality assurance concerns to the appropriate level within the Authority’s and the construction management contractor’s organization. For example, during 1993 and 1994 several inspection reports alerted the resident engineer about weaknesses in the installation of the initial tunnel lining under Hollywood Boulevard. However, the issue was not elevated to the Authority’s Director of Quality until excessive surface settlement occurred on Hollywood Boulevard in the summer of 1994. The tunnel lining support was cited as a possible cause. Because of concerns about the management attention given to quality assurance, FTA recommended that this function be placed sufficiently high in the Authority’s and the construction management contractor’s organization to help ensure independence and adequate attention to deficiency reports by quality control inspectors. Because corrective actions were not taken on this and other issues, FTA took action to stop tunneling under Hollywood Boulevard for the Red Line and suspended federal funding—from October 5 to November 10, 1994—for the project. As a condition for resuming federal funding, the Authority and FTA agreed to a plan in November 1994 that called for transferring quality assurance, quality control, and safety from the construction management contractor to the Authority and increasing staffing for quality assurance. These actions are now being implemented. For example, the Authority increased the number of quality assurance positions from 4.5 staff years in 1994 to 6 staff years in 1995, and it plans further increases. Also, in September 1995 FTA increased the number of permanent Hill International staff from 5 to 7; provided 3 temporary staff, who have been extended at least through May 1996; and increased the frequency of interactions between Hill International, FTA, and the Authority. With the additional staff, according to Hill International, four staff members rather than one are present on the construction sites daily.
Our past work has shown that FTA has rarely exercised the enforcement tool of withholding funds to compel grant recipients to fix long-standing problems. With its action on the Red Line project, FTA has seen the success of withholding funds to compel change. Given the cost and potential risks of underground tunneling and a history of resistance to certain quality control recommendations made in the past, timely enforcement actions could help to ensure that the Authority addresses key recommendations in the future. We provided copies of a draft of this report to FTA and Los Angeles County Metropolitan Transportation Authority officials for their review and comment. We met with FTA officials, including the Director, Office of Oversight, and the Program Manager for the Project Management Oversight Program in Headquarters and with the Director of the Office of Program Management in FTA’s Region IX. We also met with Authority officials, including the Deputy Executive Officer for Program Management, the Director for Strategic Funding Analysis, and the Director for Grants Management. FTA and the Authority generally agreed with the facts as presented. However, both suggested that the report’s presentation of FTA’s oversight of the project’s quality assurance and quality control practices heavily emphasized past problems rather than recent positive changes. We have revised that section of the report to clearly describe the actions FTA and the Authority have taken to improve construction management of the Red Line project. FTA and the Authority also commented that our discussion of the project’s future growth and potential financing issues is speculative. We agree that future projections are speculative, but the report describes clear examples of potential reasons for cost increases, such as the decision to realign the Mid-City extension and design delays for the East Side extension, as well as the Authority’s potential solutions to financing these increases. The Authority was also concerned that our discussion of cost growth, particularly in table 1, could be misconstrued because the cost growth for segment three is an estimate. To address their comments, we changed the title of the table to reflect that the figures are estimates and added a footnote stating that cost mitigation measures have reduced the estimated cost growth for the East Side extension from $29 million to $8 million. Both FTA and the Authority offered technical comments to clarify information in the report, and we have incorporated these comments, as appropriate. To prepare this report, we reviewed the Authority’s February 1996 Project Manager’s Status and Construction Reports for each segment of the Red Line. We reviewed supporting documentation and discussed costs, financing, and oversight issues with officials at FTA’s headquarters in Washington, D.C.; FTA’s Regional Office in San Francisco; Hill International, Inc. in Los Angeles; and the Los Angeles County Metropolitan Transportation Authority. We also reviewed the Authority’s 20-year transportation plan and February 1996 financial update and discussed them with officials at FTA, Hill International, and the Authority. We performed our work from October 1995 through April 1996 in accordance with generally accepted government auditing standards. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter.
At that time, we will send copies to the Secretary of Transportation, the Administrator of the Federal Transit Administration, the Chief Executive Officer of the Authority, and cognizant congressional committees. We will also make copies available to others upon request. Please call me at (202) 512-2834 if you or your staff have any questions. Major contributors to this report are listed in appendix II: Gary Hammond, Roderick Moore, and James Moses. | Pursuant to a congressional request, GAO reviewed the Los Angeles County Metropolitan Transportation Authority's Red Line subway project, focusing on the: (1) project's estimated cost; (2) Authority's financing plans; and (3) Federal Transit Administration's (FTA) oversight of the project's quality control and assurance practices. GAO found that: (1) as of February 1996, the project's estimated total cost was $5.9 billion; (2) project costs have increased due to construction problems, new construction requirements, and project enhancements; (3) additional design problems and pending lawsuits may further increase project costs; (4) the Authority plans to use $3.1 billion in federal funds and $2.8 billion in state and local funds to finance the project, but it may not realize about $380 million of the total; (5) the Authority may have to reduce the funding or scope of other rail projects, defer or cancel planned projects, or extend the project's construction schedule to cover current or future funding shortfalls, but extending the project's construction schedule could also increase costs; (6) in response to FTA actions, the Authority is reorganizing its quality control and assurance programs and increasing program staff; and (7) FTA has increased the number of on-site oversight personnel to improve project monitoring.
Western Hemisphere countries have gone beyond their multilateral trade commitments during the past decade and pursued economic integration through numerous free trade and customs union agreements. The largest of these are Mercosur, signed in 1991, and the North American Free Trade Agreement (NAFTA), which entered into force in 1994. Other regional groups such as the Central American Common Market, the Andean Community, and the Caribbean Community have either been initiated or expanded. (See app. I for more information on the 34 countries of the Free Trade Area of the Americas.) Also, countries in the region have concluded numerous bilateral free trade and investment agreements with others in the region and worldwide. In addition, Chile and the European Union have recently started trade negotiations, while similar European Union and Mercosur negotiations are already under way. In December 1994, the heads of state of the 34 democratic countries in the Western Hemisphere agreed at the first Summit of the Americas in Miami, Florida, to conclude negotiations on a Free Trade Area of the Americas (FTAA) no later than 2005. The FTAA would cover a combined population of about 800 million people, more than $11 trillion in production, and $3.4 trillion in world trade. It would involve a diverse set of countries, from some of the wealthiest (the United States and Canada) to some of the poorest (Haiti) and from some of the largest (Brazil) to some of the smallest in the world (Saint Kitts and Nevis). Proponents of the FTAA contend that a successful negotiation could produce important economic benefits for the United States. The FTAA region is already important economically for the United States, purchasing about 36 percent of U.S. exports of goods and services in 1999 and receiving over 23 percent of U.S. foreign direct investment. Business groups point out that if relatively high tariffs and other market access barriers are removed, U.S. trade with the region could expand further. U.S. exports to many FTAA countries face overall average tariffs above 10 percent, whereas all 33 other countries participating in FTAA negotiations already have preferential access to the U.S. market on certain products through unilateral programs or NAFTA. In addition, some U.S. industry representatives assert that they have lost sales and market share to competitors that have preferential access into other Western Hemisphere markets through bilateral free trade agreements that exclude the United States. For example, the U.S. Trade Representative testified before the House Committee on Ways and Means in March 2001 that because of the Canada-Chile trade agreement, Canadian products will enter Chile duty free, while U.S. products face an 8 percent duty. The FTAA would help remedy this disadvantage by providing U.S. exporters with access equivalent to that provided to U.S. competitors. Supporters also assert that the FTAA would benefit the United States by stimulating increased trade and investment and enabling more efficient production by allowing businesses to produce and purchase throughout an integrated hemisphere. Beyond these economic benefits, the FTAA is widely regarded as a centerpiece of efforts to forge closer and more productive ties to Western Hemisphere nations, increase political stability, and strengthen democracy in the region. While an FTAA may provide benefits for the United States, it may also adversely impact certain import-competing sectors. Some U.S. 
business and labor groups argue that import restrictions are necessary to help them compete against imports produced with lower labor costs or less restrictive environmental regulations, or imports that receive government assistance. Also, some labor and environmental groups argue that potential FTAA provisions may reduce the ability of countries to set and enforce high standards for health, safety, and the environment. For example, some opponents are concerned that the FTAA would contain NAFTA-like investment provisions, which they argue give corporations a greater ability to challenge government regulations than is provided for under domestic law. Finally, as is the case with other international trade agreements, the FTAA has drawn the attention of organizations and individuals apprehensive about the FTAA’s effects on greater global integration and the resulting impact on society and the environment. Between December 1994 and March 1998, FTAA countries laid the groundwork for an FTAA. Efforts over the past 18 months have produced a first draft of text on the major negotiating topics, which will constitute the basis from which negotiations will proceed in those areas. The FTAA negotiations have also resulted in the adoption and partial implementation of several business facilitation measures and improved coordination between FTAA countries on trade matters. In the first years of the FTAA process, FTAA negotiators agreed on the overall structure, scope, and objectives of the negotiations. FTAA participants formally initiated the negotiations at the San José Ministerial and Santiago Summit of 1998, where they agreed on how the negotiations would proceed. Specifically, they agreed in 1998 at San José that the FTAA would be a single undertaking, meaning that the agreement would be completed and implemented as one whole unit instead of in parts. Ministers also agreed that the FTAA could coexist with other subregional agreements, like Mercosur and NAFTA, to the extent that the rights and obligations under those agreements go beyond or are not covered by the FTAA. An eventual FTAA agreement would contain three basic components: (1) chapters on general issues and the overall architecture of the FTAA and its institutions, (2) schedules for reducing tariff and nontariff barriers, and (3) chapters on specific topics. The specific topics currently under negotiation include (1) market access for goods, (2) investment, (3) services, (4) government procurement, (5) dispute settlement, (6) subsidies/antidumping/countervailing duties, (7) agriculture, (8) intellectual property rights, and (9) competition policy. As illustrated in figure 1, FTAA participants formed negotiating groups on each of these topics; agreed on a general mandate for each group; formed special committees on smaller economies, the participation of civil society, and electronic commerce; and determined that the negotiations would be led by a vice-ministerial-level Trade Negotiations Committee. Chairmanship of the negotiations changes every 18 months, with Argentina serving as the current chair, to be succeeded by Ecuador for the next round of negotiations following the April meetings. Brazil and the United States are set to co-chair the final round from November 2002 to December 2004. Ministers set out the workplans for the negotiating process and select new chairs for the negotiating groups in the same 18-month increments.
Since the 1998 launch of the negotiations, the nine FTAA negotiating groups have met the ministerial goals set for them of producing first drafts of their respective chapters, which contain the agreement’s detailed rules. As illustrated in figure 2, negotiators were directed by ministers in November 1999 to submit first drafts of their chapters to the Trade Negotiations Committee by December 2000, using annotated outlines developed in the previous phase as frames of reference. According to FTAA participants and other observers, these were ambitious goals, and working-level activity since 1998 has been fairly intense in order to meet them. They stated that merely providing the first drafts of the chapters marks important progress, as the drafts are necessary groundwork for future negotiations. Under FTAA negotiating procedures, individual countries may still propose new text to be included in the draft chapters; the removal of brackets and text can only be done by consensus. According to U.S. and foreign negotiators, however, the draft text is heavily bracketed, indicating that agreement on specific language has not been reached. The draft text generally represents a consolidation of all proposals submitted by FTAA countries so far. FTAA participants state that the draft conveys wide differences between the countries over substance and philosophical approaches to key issues. The Trade Negotiations Committee is currently in the process of assembling a report that will be provided to trade ministers at the upcoming Buenos Aires Ministerial on April 7. In addition to making progress on producing the first drafts of the chapters, the negotiations have yielded several other accomplishments. Ministers agreed to adopt eight customs-related business facilitation measures (for example, expediting express shipments) and 10 additional transparency (openness) measures (for example, posting tariff and trade flows to the FTAA website) at the Toronto Ministerial in 1999. U.S. officials report that the FTAA countries immediately began to implement all 10 transparency measures and are in various stages of carrying out the customs measures. Outside of the concrete accomplishments, many observers feel the negotiations have greatly improved coordination and provided a broader understanding of trade and its impacts among FTAA countries, in part through technical assistance in the form of reports, databases, seminars, and financial assistance provided by the Inter-American Development Bank, the Organization of American States, and the United Nations Economic Commission for Latin America and the Caribbean. A number of challenges must be overcome in order to successfully complete the FTAA. For example, to build on the technical foundation of the first years of negotiations, much work remains to be done in three areas: setting the agreement’s detailed rules, deciding on the market access concessions, and devising the institutional structure to implement the completed agreement. However, negotiators have not yet begun to bargain on the agreement’s detailed rules or market access concessions, and vice-ministers have not begun to formulate the agreement’s institutional structure. Negotiators will conduct their work in an environment filled with challenges, due to the complex and controversial character of some of the issues, and the diverse nature and fluid political and economic condition of the participants.
Many observers believe these challenges will be resolved only if the governments demonstrate their commitment to the agreement’s completion. In order to conclude the FTAA, the negotiating groups will first need to begin negotiating on the removal of the brackets that signify disagreement in the text on the agreement’s detailed rules. However, this task will be difficult, because the text deals with controversial and complex issues. For example, agricultural support measures and antidumping provisions are widely understood to be controversial; observers feel that some of the more difficult issues will not be resolved until the deadline for completing the negotiations. Other negotiating groups’ tasks are complex by virtue of the extent of the subject matter to be covered. For example, the market access negotiating group is responsible not only for the elimination of tariffs but also for devising rules of origin, customs procedures, safeguards, and technical barriers to trade. Other negotiating groups’ tasks are complex because they break new ground for many of the FTAA countries. For example, competition policy has not been the subject of a multilateral agreement on which to build, and only two of the FTAA countries are signatories to the multilateral Agreement on Government Procurement. Before countries can begin to negotiate on market access concessions, they must agree on the basic ground rules of the negotiations. Negotiators refer to these as the “modalities.” Once the FTAA participants agree on the modalities, market access liberalization negotiations can begin. Decisions on these procedural matters are especially important for five of the nine negotiating groups: market access, agriculture, government procurement, investment, and services. In addition, some negotiating groups need guidance on whether their groups can share procedural processes. For example, the market access and agriculture groups could have a common approach to tariff reduction starting points or the pace of tariff elimination. Much work remains to be done in order to establish an institutional structure for the implementation of the agreement. This involves such key issues as the role and location of a permanent secretariat and the institutional mechanism by which the participants will oversee implementation of the agreement, including dispute settlement provisions. FTAA experts expect it can only be completed near the end of the negotiation process because the structure is largely dependent on the results of the negotiations. The ministers also need to address administrative issues related to the negotiation process. The final negotiation period will be chaired jointly by the United States and Brazil. However, both U.S. and Brazilian government officials told us that they have not yet determined how a joint chair relationship will function. The very fact that 34 widely differing countries are participating in an endeavor to create a hemispheric free trade zone in itself complicates the process. Since the participants range from some of the world’s largest and most economically powerful to the smallest and most economically disadvantaged, their objectives and incentives for the negotiations naturally differ. For example, the United States seeks broad improvements in trade rules and access, in addition to the lowering of regional tariffs; Brazil is primarily interested in gaining access to certain sectors of the U.S. 
market in which it faces relatively high barriers; the smaller economy countries are interested in protecting their economies from becoming overwhelmed by the larger ones while securing special treatment in an eventual FTAA; and Mexico has less economic incentive to pursue an FTAA because it already has preferential access to most hemispheric markets through a comprehensive network of free trade arrangements. Finally, several FTAA experts told us that the 2005 deadline has seemed far away to many participants, thus sapping needed momentum from the negotiating process. The FTAA negotiating process is challenging because it requires consensus. The interests of individual countries or negotiating blocs cannot be ignored even if they are not accepted in their entirety. For example, the United States pressed for the inclusion of labor rights and environment provisions in the FTAA. This proposal was met with steadfast opposition by some FTAA countries, but the United States was ultimately accommodated with the creation of the Committee of Government Representatives on the Participation of Civil Society. The Committee, which is to provide a vehicle for public input on these issues, remains a point of contention for both the United States and some of its FTAA partners. For example, the United States proposed that the Committee release a report containing recommendations based on the first round of public input but was initially blocked from doing so by another FTAA country. Eventually, a compromise was reached, and the Committee issued a summary report of the public input. Another challenge is the varying resource capacity of the FTAA participants. Many of the countries, including most of those with smaller economies, negotiate in blocs, which helps them pool resources in the negotiations. However, government officials from some FTAA-participating countries told us that they are concerned about the demand placed on their limited budgets and staff. For example, the market access negotiating group, which has a very broad portfolio of issues, was not able to be broken up into more manageable components because of resource capacity limitations. In addition, potential competing trade negotiations could also challenge the FTAA process. For example, several foreign government officials explained that the start of a new round of negotiations at the World Trade Organization (WTO) would require them to choose between the WTO and the FTAA for their most qualified negotiators and experts. The domestic political and economic climate of the participants influences not only their internal policies but also the reaction of the other participants. The recent U.S. election is a good example. FTAA experts told us that uncertainty in the fall of 2000 over how the election would affect the direction of U.S. trade policy impacted the progress of the negotiations. In addition, the United States had not developed its negotiating position for several important issues. Some FTAA experts told us that they believed the United States did not have a mandate to make meaningful concessions on market access, which are, in their view, necessary to complete an agreement. In addition, some experts believe that progress in the FTAA in certain areas such as agriculture is reliant on progress in the WTO. Meanwhile, economic hardship and political uncertainty have made some participants more reluctant to pursue an FTAA.
FTAA experts noted that in the future, participating countries could face other distractions that would direct their energies away from the FTAA. This includes increased opposition from groups that have not yet fully mobilized against the FTAA. A number of participants told us that the FTAA could be successfully concluded if the key Western Hemisphere leaders demonstrate that they have the political will to conclude the agreement. However, some observers have concerns about whether this climate currently exists in the two main FTAA countries: the United States and Brazil. In particular, FTAA experts and participants have been closely following the debate within the United States on the overall direction of U.S. trade policy and its implications for the FTAA. Some FTAA participants believe that the United States has been distracted from pursuing trade liberalization because it lacks a domestic consensus on the benefits of trade and the way in which to handle the overlap between trade and labor rights and the environment. Several told us that they believed the absence of trade promotion authority has hampered the process to the extent that other countries have held back making concessions on free trade agreement rules and procedures. Others stated that the primary cost of the President’s lack of trade promotion authority was in giving others an excuse to slow progress. Many observers we consulted believe that trade promotion authority is essential for the next phase of negotiations, particularly completion of the market access concessions. These experts said that the foreign partners will not make significant concessions unless they have credible assurance that the deal will not come undone when submitted to Congress for approval. Concerns also exist about Brazil’s commitment to the FTAA process. Even though Brazil has actively participated in the negotiations, observers have noted that Brazil has appeared reticent to embrace an FTAA, and Brazilian officials admit that Brazil has held back during the negotiations. They explained that this reticence is because they believe the United States is not ready to negotiate on issues of greatest interest to Brazil such as high U.S. tariffs on key Brazilian exports and changes to the U.S.’s antidumping regime. In addition, Brazil’s Foreign Minister recently stated that the FTAA is less of a priority for Brazil than the expansion of Mercosur in South America. The April 2001 meetings of ministers in Buenos Aires and leaders in Quebec City represent a critical juncture in the process. Successful meetings in April could lend fresh momentum and clear direction to the FTAA at an important point in the negotiations. At a minimum, FTAA negotiators need guidance for the next 18 months to proceed. However, while the time allotted to settle numerous outstanding decisions is tight, there has been considerable high-level political activity recently that might improve the chances for a favorable outcome. Both April meetings, but particularly the April Summit of hemispheric leaders, provide an opportunity to inject momentum into the negotiating process at a critical point in the FTAA’s development. Past summits have been used to make major advancements in the FTAA process. For example, the first summit, held in Miami in 1994, resulted in the leaders’ commitment to achieve the vision of an FTAA by 2005. 
The April Summit will engage President Bush and other newly elected heads of state in the FTAA process and provide an opportunity for all 34 leaders to renew their countries’ political commitment to the FTAA. Doing so at this time is particularly important, because the phase of negotiations where countries set out initial positions is ending. The next phase is expected to narrow the many substantive differences that remain, which will require political direction and support. The April meetings will provide an indication of the U.S.’s and other countries’ willingness to make the effort and tough choices required for the bargaining that lies ahead. The April meetings also represent an opportunity to generate interest in and support of the FTAA within the U.S. Congress, the U.S. business community, and the U.S. public. This support will be crucial if the United States is to provide the forceful leadership many FTAA participants believe is necessary for concluding a deal. It is also required for ultimate approval of an FTAA in Congress and in the U.S. “court of public opinion.” Until recently, congressional interest in the FTAA has been limited, and business support has been muted, according to both business and government officials. The April meetings could highlight the importance attached by hemispheric leaders to an FTAA and provide reasons for optimism about its potential viability. The political boost FTAA supporters hope to achieve in April depends, in part, on the meetings’ success in addressing key questions about how negotiations will proceed. These decisions will set the pace, goals, and structure for the next phase of negotiations, since the ministers typically set out the agenda for the next phase of the process at the ministerial meeting. As shown in figure 3, specific direction needs to be provided for the remainder of the negotiations. At a practical level, the negotiators are seeking specific direction on the following:
1. The additional work to be done in refining the rules and disciplines contained in the draft texts, such as removing the brackets that currently signify disagreement.
2. The date for deciding how negotiations on specific market access commitments will proceed.
3. The general and institutional provisions of an FTAA.
4. The chairs of the various groups and committees for the next 18 months, and whether to create new committees or groups.
However, these practical decisions may be affected by broader issues. For example, Chile has floated the idea of moving up the target date for completion of the negotiations to December 31, 2003, with a final agreement entering into effect on January 1, 2005. This idea of accelerating negotiations is still being debated within and among FTAA governments and may be actively discussed at the April meetings. Some FTAA participants, notably Brazil, have publicly stated that a 2003 deadline is unrealistic. Others believe that a 2003 deadline is both doable and desirable. Decisions made at the April meetings could affect public input into and support for the next phase of FTAA negotiations. For example, trade ministers are expected to consider adopting additional business facilitation measures. In addition, whether and how to respond to the input from civil society groups must be decided. U.S. groups that submitted formal input to the FTAA Committee of Government Representatives of Civil Society told us they are disappointed because there is little evidence that their input is being given serious consideration in FTAA negotiations.
Some U.S. government officials we interviewed concurred with this assessment. Others said that U.S. negotiators are considering the input, as are some foreign negotiators. The United States is seeking a more in-depth report on civil society views this year and an expansion of public outreach efforts in future years. In addition, Canada, more than 50 Members of Congress, and various U.S. nongovernmental groups are calling for public release of the bracketed text. Publicly available information on the FTAA negotiations is limited, a fact that has caused suspicion and concern among the nongovernmental groups. These groups see the release of the text as an important confidence-building measure in its own right and as concrete evidence of ministers’ commitment to transparency in decision-making. However, this is likely to prove controversial among FTAA governments in April, given the ongoing and confidential nature of FTAA deliberations. The issue of transparency is also controversial domestically. U.S. negotiators note that releasing the text could hamper their flexibility in exploring creative options to obtain their objectives. Even though the U.S. government released public summaries of U.S. negotiating positions in the FTAA in late January, it faces a lawsuit by two environmental groups seeking access to the full text of U.S. proposals. A large number of issues remain to be resolved between now and the conclusion of the April meetings. When vice-ministers met in January 2001 to prepare for the April meetings, their discussions focused on solving controversies associated with the bracketed text. They spent less time discussing other decisions required in April or resolving issues such as whether more business facilitation measures are practical. In addition, the vice-ministers could not schedule an anticipated follow-up planning meeting. As a result, FTAA countries will be forced to tackle their ambitious agenda for April in a very short time frame. Only 4 days of official meetings have been scheduled, and these immediately precede the Ministerial. Expected protests by opponents of the FTAA may complicate the situation further. The United States has faced unique constraints in preparing for the Buenos Aires Ministerial. The new U.S. administration has yet to decide its position on key issues, such as whether to support a 2003 deadline for completing FTAA negotiations and whether to support public release of the bracketed text. In addition, Robert Zoellick, the chief U.S. trade negotiator, was sworn in as U.S. Trade Representative on February 7, just 2 months before the Buenos Aires Ministerial. While significant work remains to be completed for the April meetings, there has been considerable high-level political activity that might improve the chance for a favorable outcome. The new U.S. administration has initiated a number of high-level contacts between President Bush and key hemispheric leaders in advance of the Quebec Summit of the Americas. Already, President Bush has met Mexican President Vicente Fox, Canadian Prime Minister Chrétien, Colombian President Pastrana, and Salvadoran President Flores. Meetings with Brazilian President Cardoso, Chilean President Lagos, and Argentine President de la Rúa have been announced. Among other things, the meetings are intended to establish personal rapport and to reassure these leaders of President Bush’s intention to make the region a priority and to conclude the FTAA.
The President’s Trade Policy Agenda released in early March underlines these ideas, as well as the President’s seriousness in securing trade promotion authority from Congress to implement an FTAA. These statements, and others like them, may help the administration establish political support for the decisions required to start the next phase of FTAA negotiations on a solid footing. We obtained oral comments on a draft of this report from the U.S. Trade Representative’s Director for the Free Trade Area of the Americas. USTR generally agreed with the information in the report and provided technical comments that we incorporated as appropriate. To meet our objectives of (1) discussing what progress has been made in the free trade negotiations to date, (2) identifying the challenges that must be overcome to complete a free trade agreement, and (3) discussing the importance of the April meetings of trade ministers and national leaders of the participating countries, we reviewed FTAA and executive branch documents, related literature, and economic literature, and held discussions with lead U.S. government negotiators for each of the FTAA negotiating groups. We also had discussions with foreign government officials representing the negotiating blocs and with officials of the Inter-American Development Bank, the Organization of American States, and the United Nations Economic Commission for Latin America and the Caribbean, who each provide technical assistance to the negotiations. In addition, we met with experts on the FTAA and international trade negotiations, and business and civil society groups that have expressed interest in the FTAA process. This report is also based on our past and ongoing work on Western Hemisphere trade liberalization. We conducted our work from September 2000 through March 2001 in accordance with generally accepted government auditing standards. As you requested, unless you publicly announce its contents earlier, we plan no further distribution of this report until 15 days after its issue date. At that time, we will send copies to appropriate congressional committees and to the Honorable Robert Zoellick, U.S. Trade Representative. Copies will be made available to others upon request. If you or your staff have any questions about this report, please contact me at (202) 512-4128. Other GAO contacts and staff acknowledgments are listed in appendix II. The 34 FTAA countries include some of the U.S.’s largest trading partners and some of its smallest. Many of them are members of regional trade groups or free trade agreements. Figure 4 shows the countries of the FTAA region and some of the regional trade groups. Table 1 shows the U.S. trade and investment relationship with the 33 other FTAA countries, organized by regional trade groups. In addition to the persons named above, Tim Wedding, Jody Woods, Ernie Jackson, and Rona Mendelsohn made key contributions to this report. | The negotiations to establish a Free Trade Area of the Americas (FTAA), which would eliminate tariffs and create common trade and investment rules within the 34 democratic nations of the Western Hemisphere, are among the most significant ongoing multilateral trade negotiations for the United States. Two meetings held in April 2001 offer opportunities to inject momentum and set an ambitious pace for the next, more difficult phase of the negotiations.
Because of the significance of the FTAA initiative, this report (1) discusses the progress that has been made in the free trade negotiations so far, (2) identifies the challenges that must be overcome to complete a free trade agreement, and (3) discusses the importance of the April meetings of trade ministers and national leaders of participating countries. GAO found that the FTAA negotiations have met the goals and deadlines set by trade ministers. Significant challenges remain, including market access concessions and doubts that key Western Hemisphere leaders will have the political will to embrace the agreement. The April meetings of trade ministers will serve as a transition from the initial proposal phase to the substantive negotiations phase. |
More Dark Sky Conservation Initiatives
FOR IMMEDIATE RELEASE – September 3, 2013
Contacts: Chad Moore, 970-267-7212
Nate Ament, 435-719-2349
Starry Starry Night: A cooperative effort to celebrate, promote and preserve the star-filled night skies of the Southwest’s Colorado Plateau
MOAB, Utah – In a collaborative effort to “celebrate starry skies” across the Colorado Plateau of the American Southwest, a voluntary cooperative is organizing to promote the preservation, enjoyment and tourism potential of stargazing and astronomy in the vast region.
The Colorado Plateau Dark Skies Cooperative is focused on the topographic heart of the high desert, forest and canyon country where the Four Corners states meet. The 130,000-square-mile Colorado Plateau contains substantial parts of Utah, Colorado, Arizona and New Mexico. Its combination of high elevation, excellent air quality, low population density and frequent cloud-free weather affords world-class viewing and enjoyment of naturally dark, star-filled skies.
In daylight, the region’s striking scenery has long attracted millions of visitors annually to national and state parks, national forests, tribal lands, and other public lands. Now, their after-dark appeal is a rapidly growing phenomenon, too. Where the “drive-through” nature of some daytime tourist visits can be fleeting, stargazing fosters overnight stays that can pump more dollars into local economies. In much of the developed world, the experience of a dark sky in one’s own back yard is disappearing or gone. On the Colorado Plateau, the exceptional unfettered view of the Milky Way, planets, meteors and galaxies has become a major reason for many to visit from across the U.S. and around the world.
In support of this cooperative initiative, the National Park Service (NPS) has hired a full-time Colorado Plateau Dark Sky Cooperative Coordinator, Nate Ament, with an office in Moab, UT. Nate joined the cooperative with a diverse background in environmental education, resource management, and restoration coordination across the western U.S. He has spent most of his life on the plateau, exploring its wonders and working toward their preservation. Nate will work with the NPS’s own parks and with other land management agencies, interested communities, groups, businesses, and individuals to support local projects and promote civic engagement with the dark skies message.
“We commend the members of the Colorado Plateau Dark Sky Cooperative,” said W. Scott Kardel, managing director of the International Dark-Sky Association. “Their work will help preserve valuable resources on the ground and in the sky while keeping the stars brightly shining over the plateau for generations to come.”
Dark Sky Cooperative members invite community discussion about what form initial local efforts might take – public meetings, lighting demonstration projects, night-skies festivals, dark-sky monitoring and the like. (A list of online resources is below.)
Although labeled for the Colorado Plateau, the region’s dark skies have no formal boundary. The initiative intends to support and encourage all who voluntarily seek to protect, enhance and appreciate the plateau’s night-sky resource as a recreational, economic and educational treasure. Other dark sky benefits include cultural heritage, improved habitat for nocturnal wildlife, energy conservation and greenhouse gas reduction, preservation of rural character, promotion of astronomy and the inspiration of youth with an interest in science.
Colorado Plateau communities such as Flagstaff, AZ and Springdale, UT already have adopted dark sky ordinances that foster the use of lighting that does not harm the night viewing environment. Some communities, businesses, individuals and government agencies also are retrofitting light fixtures to reverse past practices that may have unnecessarily dimmed clear night views.
On federal and state public lands alone, the Colorado Plateau’s dark sky resource is enormous and rich. The plateau contains at least 27 national parks and monuments, five national forests, many Bureau of Land Management (BLM) areas, and several state parks of the Four Corners states. In 2007, Natural Bridges National Monument in southeastern Utah was named the world’s first-ever “Dark Sky Park” by the International Dark-Sky Association. In many of the national parks, stargazing programs are the most popular ranger-led activity, day or night. A number of them have annual night-sky festivals, as do some plateau communities. On September 6-7, 2013, Wayne County, UT will hold its fourth annual Heritage Starfest.
Nor is this a new activity. People have been drawn to view the Colorado Plateau night skies for millennia, from prehistoric ancestors of Pueblo Indian peoples to astronomers, vacationers and dreamers today. Bryce Canyon National Park in southern Utah, which averages 305 cloudless nights a year, has hosted stargazing programs continuously since 1969. In 2012, the park reported approximately 52,000 night-sky related visits and $2 million in associated benefits to local economies.
“The public knows that the dark skies of the Colorado Plateau are both a celestial treasure and a celestial refuge,” said Chad Moore, Night Skies Team leader for the Park Service, one of the partners working to organize the cooperative. “We are happy to partner in this effort so that residents and visitors alike will see this ‘dark harbor’ as something worth protecting now and for the future.”
There are numerous online resources for further information, including:
The International Dark-Sky Association website (http://www.darksky.org/)
Local government dark-skies activities in Colorado Plateau communities:
-- Flagstaff, AZ (http://www.flagstaffdarkskies.org/)
-- Springdale, UT (http://www.springdaletown.com/uploads/fb/pdf/generalplan_pdf/07-Environmental-Resources2010-clean.pdf )
The National Park Service Night Sky website (http://www.nature.nps.gov/night/ )
For additional information about the Colorado Plateau Dark Sky Cooperative, news media can contact: Nate Ament, Colorado Plateau Dark Sky Coordinator, (435) 719-2349.
and
Chad Moore, National Park Service Night Skies Team Leader, (970) 267-7212.
-- end -- |||||
When she's trying to explain the idea of light pollution, Laura Williams often shows people a picture taken from the South Rim during a blackout at Grand Canyon National Park.
It's mostly black, with some stars in the sky and a few car headlights, but then, there's this orange glow creeping from the south.
It's not coming from Sedona or Flagstaff, but from the Valley.
The Greater Phoenix area casts a nighttime arc of light over most of Arizona, extending more than 200 miles in each direction.
"People have a hard time believing it," said Williams, Grand Canyon National Park's night-skies inventory coordinator. "They don't realize how bright Phoenix is or how far light travels."
Williams is creating a system to identify unnecessary fixtures and too-bright bulbs so they can be replaced.
The idea is that increased darkness will mean Canyon visitors can better see the celestial bodies that make up our universe, and have an easier time considering their places in it.
But light from Phoenix car lots, Glendale billboards and Mesa strip malls floods the skies to the north, well past Flagstaff's Lowell Observatory, and well past Kitt Peak Observatory on the Tohono O'odham Reservation southwest of Tucson.
The glow washes out the skies from California's Joshua Tree National Park to the western edge of the Chihuahuan Desert in New Mexico.
For those living in cities, it can be hard to fathom why the word galaxy comes from the Greek for "milk."
(Video: Arizona Republic columnist Ed Montini and reporter Megan Finnerty discuss how Phoenix's sky-glow can be seen all the way at the Grand Canyon.)
It can be hard to appreciate that even above the white-glowing intersection at Central Avenue and Camelback Road in Phoenix — the Valley's midpoint — the Milky Way is above, shining brightly enough to cast shadows on Earth.
It can be hard to imagine that in truly dark spaces, people would run out of wishes before they'd run out of shooting stars.
Light pollution doesn't just bleach the night sky. It squanders electricity, does little to curb crime, disrupts life for animals including sea turtles, bats and migratory birds, and has been linked to everything from insomnia to breast cancer in humans.
"The whole issue isn't about not having light. It's how do we use light more responsibly and thoughtfully," said Paul Bogard, author of 2013's "The End of Night."
"The night is beautiful, amazing and filled with wonder," Bogard said. "What's the value of ... standing under the galaxy and wondering who you are and what your life is about? Losing it has an enormous cost to our souls and spirits."
Scientists estimate that in about 10 years, America will have only three dark patches of land where people will be able to clearly see the Milky Way and where they'll be able to do high-quality astronomy and nocturnal wilderness research.
Those areas are southeastern Oregon and western Idaho; northeastern Nevada and western Utah; and northern Arizona and southeastern Utah — the better part of the Colorado Plateau.
The light-sprawls of the greater Las Vegas and Phoenix areas imperil dark skies in both the Colorado Plateau and northeastern Nevada and western Utah. The Oregon-Idaho patch is not near large cities.
"Phoenix is in a unique position because it's such a large metro area so close to so many dark places," said Nathan Ament, coordinator of the Colorado Plateau Dark Sky Cooperative for the National Park Service.
"Your light affects people's experience in national parks and nocturnal wildlife environments," Ament said. "You can't just say, 'It's my backyard and I'll do what I want.' It's a shared resource."
The fact that two big cities can make a darkness difference is an anomaly. In the planet's brightest places (Europe, Japan, South Korea and the U.S. east of the Mississippi), light pollution looks like a mostly unbroken glow. It wouldn't matter if Paris forfeited its title as the City of Light if Milan, Madrid and Oslo didn't tone it down, too.
This map shows light pollution using false coloring to illustrate the intensity of light that spreads from cities and towns. The white and red areas are the brightest. The gray and blue areas are the darkest. This image illustrates an especially sensitive measure of light pollution and is not meant to imply that all colored areas are very bright. Rather, all colored areas are brighter than they would be without light pollution.
Williams, Ament and others within the National Park Service are part of a 15-year-old movement, born in Western national parks, to inspire individuals to protect the skyscape.
During ranger talks, team members recommend putting landscape and architectural up-lighting on timers, putting security lights on motion sensors and making sure lights shine downward and only where needed. They talk about saving energy and money, and how easy it is to find the right bulbs and fixtures at, say, Home Depot.
Ament said rangers prefer to educate visitors about smart lighting, rather than lobbying municipalities to rewrite lighting codes.
"In the West, it seems to work a lot better for people to make decisions for themselves than if city or state or federal government tells them to," he said.
Astronomy at stake
Beyond conservation, there's the economic argument.
Arizona is home to three of the five largest telescopes in the continental U.S. And the bulk of America's telescopes are concentrated in the West. Many of them sit in Southern California and on or at the edges of the Colorado Plateau: around Flagstaff, Tucson and western New Mexico.
Astronomy, space and planetary-science fields bring Arizona $252.8 million annually. They attract about 200,800 visitors and employ about 3,300 directly and indirectly, according to a 2008 study by the University of Arizona, the most recent available.
Dark skies-friendly lighting checklist:
1. Is the light necessary?
2. Is it on only when needed, or should it be on a timer or a sensor?
3. Is it fully shielded, meaning, does the light point only down, not out and up?
4. Does it give off the minimum amount of light necessary or could you do dental surgery on the steps?
5. Is it the right color? Amber bulbs create the least skyglow, which is why Tucson’s streetlights are that color.
6. Is the bulb energy efficient?
The future of this industry depends on darkness.
That's why Tucson, among other Arizona cities, implemented dark-skies-friendly lighting codes decades ago. Tucson hasn't gotten brighter in 30 years even though the population has increased 59 percent since 1980, said Katy Garmany, an associate scientist at the National Optical Astronomy Observatory outside Tucson, which just completed a study of Tucson's skyglow.
But scientists at the National Observatory on Kitt Peak estimate that if the Valley continues to brighten, they've got about 10 years left, said Garmany.
Then astronomers will have to travel to Hawaii or Chile to do certain research, such as trying to spot planets outside our solar system.
"It keeps getting brighter and brighter," Garmany said. "It's just really hard to do the cutting-edge stuff, and you have to go ... where it's darker. (Scientists) have ways of eliminating extra scattered light in the sky, but there's only so much they can do."
Creating dark skies
John C. Barentine is the dark-skies places program manager for the Tucson-based International Dark-Sky Association. He leads the team that designates places as having the kind of low but adequate lighting sufficient to preserve the nightscape.
Flagstaff was the first International Dark Skies Community in 2001. Barentine's team has designated only seven other communities and 25 parks and reserves globally. Sedona was added to the list earlier this year.
Barentine grew up in Phoenix and remembers the first time he saw the Milky Way.
"To say it was shocking would be an understatement," he said. "I was 10 or 11 years old. We were up in Flagstaff to play in the snow. I remember going outside … and I was floored. I was absolutely floored."
Now, he spends his days helping communities create lighting codes so kids don't have to ride in the car for 90 minutes to be filled with awe at the night sky.
A saguaro is illuminated by headlights from passing cars on Interstate 17 against the northern horizon on a recent moonless night south of Black Canyon City.
(Photo: Rob Schumacher/The Republic)
He goes to town halls and city planning meetings to talk about how putting lights on timers and sensors, adding shields to the tops of lights and choosing amber bulbs or lower wattages can all reduce skyglow without impacting ground visibility.
Barentine sighs when he talks about Phoenix, but said the city's lighting codes "aren't bad."
The city has site codes, which were last updated 11 years ago, and street-lighting codes, which were last updated two years ago. The site codes don't apply to lights installed before 1985, although there aren't many still in use. But if a building is new, the lights must conform to updated codes, which call for the kinds of lights Barentine recommends.
But issues remain. Asphalt is more reflective than dirt or grass, and lighting codes don't address ground glare. Horizontal lighting, a main cause of skyglow, is common here: Think of strip-mall signs lit from within. Common white and blue lights, even LEDs, glare more than other colors.
And enforcement is an issue.
"We don't have the right equipment, time or staffing to do that," said Tim Boling, the deputy director of neighborhood services for Phoenix.
Last year, Boling's staff closed 70,000 complaint cases, and he estimates only three or four were related to lighting violations.
In Flagstaff, which adopted dark-skies codes in 1989, all outdoor lighting is low and amber-colored, even at the hospital and jail. But people can see easily because nothing is significantly brighter than anything else. And safety isn't an issue. Research across several disciplines has shown that more light doesn't necessarily make buildings or streets safer.
Here's why: The human eye adjusts to the brightest thing in the landscape. This is why in starlight, if a woman waits 15 to 45 minutes, she can see her way along a path, or find dropped change on the ground.
But introduce a set of headlights, an iPhone screen or a luminous watch face, and all of her dark adaptation gets blown out. It's why in dark rooms, humans are blinded by camera flashes.
It's why at El Tovar Lodge on the South Rim, visitors on the light-filled front steps can't see elk 20 feet away.
It's why a gas-station canopy makes everything around it seem dim, causing neighboring businesses to add lights proportionally.
In some cases, diminishing nighttime lighting can improve visibility. Reflectors, like on the edges of highway lanes, or limited guide lights, like on runways, work better than adding floodlights because the key to visibility is contrast, not overall brightness.
To preserve the work done at Lowell, parts of Flagstaff limit lighting to 50,000 lumens per acre. (A lumen is a measurement of how much light a bulb puts out. A 40-watt incandescent bulb puts out 450 lumens; a 100-watt one puts out 1,600 lumens.)
In contrast, in parts of Maricopa County, a single sign can give off 40,000 lumens. Barentine's association recommends a sign not exceed 3,000 lumens. In Phoenix, there are no lumen limits at all. In the Valley, the brightest things are gas stations, billboards and car lots, all using wattage to attract attention, Barentine said.
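To put those lumen figures in perspective, here is the simple bulb-equivalent arithmetic they imply; this is a back-of-the-envelope check using only the numbers quoted in the article (450 lumens per 40-watt bulb, 1,600 lumens per 100-watt bulb), not part of the original reporting:

% Bulb equivalents implied by the lumen figures quoted above.
\[
\frac{50{,}000\ \text{lm/acre (Flagstaff cap)}}{1{,}600\ \text{lm per 100 W bulb}} \approx 31\ \text{bulbs per acre},
\qquad
\frac{50{,}000\ \text{lm/acre}}{450\ \text{lm per 40 W bulb}} \approx 111\ \text{bulbs per acre}.
\]
% A single bright Maricopa County sign against Flagstaff's per-acre budget:
\[
\frac{40{,}000\ \text{lm (one sign)}}{50{,}000\ \text{lm/acre}} = 80\%\ \text{of the entire Flagstaff allowance for an acre}.
\]

By this reckoning, one large Valley sign alone consumes most of what Flagstaff permits for a whole acre, which is the contrast the surrounding paragraphs are drawing.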
The golden glow from the lights of the Phoenix metro area can be seen from the Mogollon Rim on Labor Day.
(Photo: Pat Shannahan/The Republic)
Thirty-two municipalities make up the Valley, and since Phoenix is constrained from substantial growth by the surrounding municipalities, it will take all of them working together to limit skyglow, said Alan Stephenson, the director of Phoenix's planning and development department.
It's not on people's short lists, though.
"When we talk to downtown residents, they're more concerned about street lights being broken, or safety, when it comes to light," said Stephenson.
People aren't against dark-skies-friendly lighting codes, though. They just don't think about light pollution, said Christian B. Luginbuhl, an astronomer at the United States Naval Observatory in Flagstaff.
"They're focused on their day-to-day jobs and whether the traffic is bad or whether they can buy the things they need for their families," said Luginbuhl. "They don't notice that the stars are gone. And if they do, they think, 'Well, that's what it's like to live in a city.'
"But it doesn't have to be that way, and they can demand more."
Special interests derail efforts
Once, there was a movement to demand more.
In 2009, the Maricopa Association of Governments formed the Dark Skies Stakeholders Group to study ways to protect the state's astronomy industry from Maricopa County's glow.
(Video: While you were enjoying your Labor Day weekend, the world kept turning. Watch the Milky Way move across the sky in a two-hour-long time lapse on the Mogollon Rim. Pat Shannahan/The Republic)
Typically, the association coordinates the policies that make it easy to live in cities that border each other, setting standards and best practices for things like air quality and solid-waste management.
The group was composed of 96 people: scientists, city engineers, transportation-safety experts, town planners and representatives from banking, retail and signage organizations.
They worked from January 2009 to April 2011, examining light-use studies from around the world, interviewing experts about best practices and weighing concerns from stakeholders and the public in meetings.
They drafted an 87-page public document that called for limiting lighting throughout Maricopa County to 150,000 lumens per acre, or three times the limit in most of Flagstaff.
When the plan was presented for approval to an internal MAG board, members who had not previously attended meetings raised concerns about safety, liability and diminished commerce.
Those members included the International Council of Shopping Centers, the Arizona Food Marketing Alliance, Arizona Retailers Association, Arizona Sign Association and International Sign Association, and the Arizona Bankers Association.
According to the meeting's minutes, only one association representative presented a case study. None of the others produced supporting data.
"They withheld their participation until the end, and then they tried to create a train wreck," said group member Luginbuhl. "They said it would cost people money and hinder commerce, and it derailed the entire thing."
Luginbuhl had seen this before. In the late 1980s, he'd been instrumental in designing Flagstaff's lighting codes to protect the views from Lowell Observatory. He watched the same groups bring up the same issues.
But at a crucial public meeting in 1989, the deputy county attorney spoke up, saying liability was not going to be a problem. The planning and zoning commission sided with the scientists who said visibility would not be a problem, commerce would continue, and people would stay safe. Flagstaff passed stricter lighting codes.
Since then, neither Flagstaff nor any of the other International Dark Skies Communities has turned into a commerce-free, crime-filled Gotham. The retail centers, banks and gas stations have remained easy for people to find and patronize, even at night.
The Maricopa Association of Governments has not addressed the issue since.
Celebrating dark skies
"By day, the Canyon makes you feel small," said park ranger Marker Marshall. "By night, the sky does too."
It was late June, a week without a moon, and Marshall was hosting a ranger talk called "Starry Starry Night: A Tour of the Universe As Seen Over the Grand Canyon" as part of the 24th Annual Grand Canyon Star Party.
The auditorium was at capacity, with 233 people ready to learn how to use a sky map and discern a planet from a star. Kids clutched workbooks, eager to earn the Junior Ranger Night Explorer patch, deep blue and featuring Ursa Major.
Marshall wore tiny gold earrings shaped like Saturn. She said things like "Deneb is 54,000 times as luminous as our sun, putting out more energy in 10 minutes than our sun does all year. The light that we can see now left Deneb in 403 A.D., around the time of the fall of the Roman Empire."
Marshall grew up in New York City and remembers the night sky as orange. But one night when she was 7 and visiting her grandparents in rural Canada, they watched the Apollo 11 moon landing on TV.
"And it was amazing. But then we went outside and went out to look at the moon, and it was amazing all over again," Marshall said.
Marshall earned a biology degree at Smith College and soon got a ranger job at Organ Pipe Cactus National Monument in southern Arizona, where she memorized a new constellation each night.
There, at the start of her dark-skies programs, she'd just turn off the lights and the sudden appearance of so many stars would make people clap and exclaim.
Astronomer Tyler Nordgren has created posters like this one to promote night sky viewing.
Later, she spent three years taking 96 astronomy classes as part of the Great Courses lifelong-learning program. The Whirlpool Galaxy is her favorite. "It's so pretty."
Just outside the auditorium, professional and amateur astronomers had set up 51 telescopes, each trained on a different celestial body.
More than 100 astronomy volunteers trek to the Canyon each year for the star party, mostly from Tucson. They bring telescopes and years of formal and informal expertise.
Mostly retirees, they coach people on how to peer through the lenses at Mars or Vega, and how to track their eyes along a green laser up to the collection of stars that looked to Ptolemy like a swan or a queen on her throne.
Just after 9 p.m., more than 1,000 people in hoodies and jeans wandered among the telescopes. The lot filled with shouts of "Wow!" and "Mom, come see this!" And, more breathless, "I had no idea."
Erich and Karen Shofstall drove from Livermore, Calif., with their two daughters to look at the stars. Kate, 10, said the drive took "forever."
"We can explore and learn and see some cool stars and some of the planets," said Erich. "It's really educational. I think the kids like it."
"We've seen Mars and Jupiter and Saturn," Kate said.
"How 'bout those nebulas?" asked Erich.
"Yeah, we've seen those, too," Kate said, taking the universe in stride.
"We get so many people who realize they've seen their first planet, first shooting star," Marshall said. "It's so special.
"People need the sense of beauty and perspective and awe that we get from our exposure to the universe in a dark night sky. It's part of every culture, part of being human — to contemplate what's above us."
Across the parking lot, a retired plastic surgeon from Boston led a constellation tour, which is to say, he stood in one place and pointed up with a laser.
"All that stuff you see is not clouds," he said. "It's perfectly clear tonight. It's the Milky Way."
A collective gasp rose from his small audience.
Megan Finnerty is a reporter on the Page One team. She is the founder and host of the Arizona Storytellers Project. She loves the Grand Canyon. She has been with The Arizona Republic since 2002.
| Even in the vastness of the American West, the glow from cities has become so bright that places with truly dark skies at night are becoming an endangered species. In the continental US, experts predict that in a decade, there will be just three areas where the sky will be dark enough to see the Milky Way clearly, the Arizona Republic reports. One area covers part of eastern Oregon and western Idaho, another includes parts of Nevada and western Utah, a third takes in parts of northern Arizona and southern Utah—and the latter two are in danger from the bright lights of Las Vegas and Phoenix, which can be seen for more than 200 miles. Light pollution not only hinders astronomy, it can disrupt ecosystems and people's sleeping patterns, warns the International Dark-Sky Association. The Tucson-based organization has been trying for years to preserve the West's dark spots, encouraging cities to adopt dark-sky-friendly lighting codes. This month, the group launched a campaign to preserve and promote the dark skies over the Colorado Plateau, stressing its value as a "celestial treasure and a celestial refuge" that has attracted visitors for centuries. "People need the sense of beauty and perspective and awe that we get from our exposure to the universe in a dark night sky," Grand Canyon ranger Marker Marshall tells the Republic. "It's part of every culture, part of being human—to contemplate what's above us." (Click to read about how the full moon messes with your sleep.)
If you’re interested in getting in on the dumbest fad sweeping the country while simultaneously looking down on the plebs who shop at Walmart, you’re out of luck. After clown masks began disappearing from Target stores nationwide last week, the company confirmed on Sunday that the Halloween costumes are being pulled due to creepy clown incidents.
“Given the current environment, we have made the decision to remove a variety of clown masks from our assortment, both in stores and online,” Target spokesperson Joshua Thomas told WCCO Minneapolis.
The latest round of clown sightings began in the South and often involved claims that the figures were trying to lure children into the woods. The evidence to back up such reports was pretty thin, but the stories left many spooked, and soon local troublemakers across the country saw an opportunity.
While it seems the vast majority of creepy clown sightings are pranks, recently there have been some disturbing, violent incidents in which attackers were reportedly dressed as clowns.
A woman in Oklahoma said she was attacked on Saturday night when she stopped on the side of the road to help a woman who flagged her down. The victim, whose brother passed away several days ago, told police that when she stopped the woman approached the driver’s side door followed by two men wearing clown masks. According to Oklahoma’s News on 6, she said the men pulled her from the car, held her down while the woman wrote “clown posse” on her face, extinguished a cigarette on her face, and choked her.
Officials say that thanks to social media, clown incidents are spreading outside the U.S. as well. Earlier this month, a person wearing a clown mask stabbed a teen in the shoulder in Sweden, and on Saturday, two teen girls in the U.K. say that they were chased by a person dressed as a clown and brandishing a machete.
And there are other signs that the trend has gotten completely out of hand. Early on Sunday morning, a Santa Clarita, California, homeowner told police that he fired a warning shot into the air after he was threatened by a knife-wielding clown. Police wound up arresting the homeowner on suspicion of possessing weapons and narcotics, but that’s not the weird part. Per the L.A. Times:
Deputies did discover a man with a clown mask hiding in some bushes a few blocks away from where the warning shots were fired — a sighting “unusual for that time of morning,” [Sergeant Cortland] Myers said.
However, “the homeowner didn’t identify this clown as the correct clown,” Myers said. “His guy had a full clown costume and a mask, and the clown he saw was taller.”
If your bushes have yet to be infested with creepy clowns, consider yourself lucky. ||||| Target is pulling all of its clown masks in stores nationwide and on its online site “out of sensitivity for the issue at hand,” a spokesman said Sunday.
The “issue at hand” is the “crazy clown craze,” including threats of violence made on social media.
Joshua Thomas, a Target spokesman, said all clown masks have been pulled from stores and the company is in the process of pulling the masks from its online site. As of Sunday afternoon, there were still five clown masks available at target.com.
In early October, Hopkins police arrested a 15-year-old girl from Bloomington who had posted a Kroacky Klown threat on Facebook aimed at residents in Bloomington, Richfield, Minneapolis, Brooklyn Park, St. Paul, Rochester, Apple Valley, Plymouth and Hopkins.
“Should I come to Hopkins and kill?” the post said, according to police. “If you live in the following Minnesota cities, you are in danger.”
The girl used her younger sister’s cellphone to create the fictitious Facebook account, police said. She told authorities that her intent was to scare her boyfriend, but the situation got out of control and went viral.
Two days later, Bloomington police arrested a 13-year-old boy after a clown-related post “implying violence” against Valley View Middle School.
Last week, it was announced that a St. Francis volunteer soccer coach was fired after he wore a clown mask in a photo at the team’s final practice.
School administrators said they had to act for the safety of students, several of whom had previously expressed anxiety about the national “creepy clown craze,” where people dressed in clown costumes are spotted walking residential neighborhoods at night, sometimes carrying weapons.
Thomas said he didn’t know if there had been an upsurge in sales of clown masks either in stores or online because of the “crazy clown craze.”
In Roseville, Mich., near Detroit, two 18-year-old women were arrested Oct. 8 after they allegedly dressed as clowns and jumped out and began to chase and scream at two 14-year-old girls, terrorizing them. The police chief called the women “morons” and “idiots” in a news release. The women have been charged with disorderly conduct. ||||| Creepy, threatening clown sightings have been increasing over the last couple months, leading residents of affected states to wonder just what’s going on. Earlier on Friday, some schools in Reading, Ohio were closed after a woman reported being attacked by someone dressed as a clown who threatened the students at her school. But the complaints extend far beyond Ohio. At least 44 states have had strange clown sightings so far, and the number keeps on growing.
Some clown sightings are just hoaxes or jokes, and some may have been copycats from earlier news reports. For example, a report of a clown in Agawam, Massachusetts ended up being part of a promotion for the New England Scare Fest. And a report about a clown shot and killed in Indiana was apparently a hoax. (Read more about that in the Indiana section below.) But not all incidents are explained that easily, and a number of arrests have already been made. Some of the social media threats to different schools have caused parents a lot of concern. And some of these threats, even though they’re sent to different states, are coming from accounts using the same fake clown names.
The affected states so far are Alabama, Alaska, Arizona, Arkansas, California, Colorado, Connecticut, Florida, Georgia, Hawaii, Idaho, Illinois, Indiana, Kansas, Kentucky, Louisiana, Maine, Maryland, Massachusetts, Michigan, Minnesota, Mississippi, Missouri, Montana, Nebraska, Nevada, New Jersey, New Mexico, New York, North Carolina, Ohio, Oklahoma, Oregon, Pennsylvania, Rhode Island, South Carolina, Tennessee, Texas, Utah, Virginia, Washington, Washington, D.C., West Virginia, and Wisconsin.
Although the creepy clown sightings slowed down right before Halloween, a new sighting was reported in late December.
Scroll down to see details on each state and let us know in the comments about any sightings that aren’t listed here.
Here are the states affected and what happened. States are listed in alphabetical order.
Alabama Clown Sightings
To anyone wanting to dress like a clown and terrorize people in Alabama, a warning: that is an easy way to be shot. — Bob Sentell Jr (@Commadore_Bob) September 20, 2016
In Alabama in late September, two high school students were arrested for making a clown video that threatened violence. A 10-year-old was arrested the same week for making clown threats. Some of the people arrested had made Facebook pages with fake clown names, like Flomo Klown and Kaleb Klown, and they were using those pages to make threats. At least nine clown-related arrests have been made in Alabama so far, including seven facing felony charges, Nola.com reported.
Alaska Clown Sightings
Creepy clown sightings continue in Alaska. In Juneau, one resident said her kids’ friends were talking about seeing clowns, which really scared her children. And another Juneau resident said her daughter saw someone dressed as a clown near Floyd Dryden Middle School. In the same city, two clowns were reported to have chased someone’s car while carrying metal objects.
Here’s one video shared on Facebook of a sighting in Alaska. It’s unknown if this video is real or staged:
Arizona Clown Arrests & Sightings
Schools in Mesa, Arizona were on high alert after threatening messages from a clown on a social media account named “Ain’t Clowning Around.” The account threatened to go to high schools on Friday, September 30 and kidnap students or kill teachers, ABC 15 reported. This was almost identical to an Ain’t No Clowning Around social media page that threatened the same thing to Missouri schools, which you can read about in the Missouri section below.
To make the whole thing in Arizona even creepier, some students got threatening text messages that asked if they were ready to play and threatened to kill them, ABC 15 reported.
Police in Phoenix arrested three teenagers for making similar threats, AZCentral reported. Two teens were also arrested in Phoenix after robbing a Taco Bell and Domino’s while wearing clown masks, NBC4i reported.
Arkansas Clown Reports
Arkansas hasn’t been spared, either. Arkansas Online reported that Cross County received an unverified report that four people were dressed as clowns, traveling near West Merriman Avenue. Sheriff J.R. Smith said they had zero tolerance for anyone dressing up as a clown to scare other people. Other unverified sightings were in Pine Bluff near White Hall. According to Arkansas Online, a school resource officer in White Hall is being investigated for posing in a clown costume in an online photo. It’s unclear what the nature of the photo was.
A man was arrested for driving in Poinsett County in a clown mask and harassing a store clerk, NWA Online reported.
Meanwhile in White Hall, Arkansas, a police officer was suspended for two days without pay after dressing in a clown suit for a party at his parents’ home and posting the photo on social media, KATV reported. The photo was posted a couple days after clown sightings were reported to the department. He wasn’t linked to sightings in the White Hall area.
California Clown Sightings
There have been quite a few clown sightings in California, too. On October 4, clowns threatened the Sacramento, Vallejo, and Fairfield public schools. A clown was also accused of trying to kidnap a one-year-old girl, but no arrests were made, ABC 7 News reported. The mom said a clown approached them at Denny’s on Willow Pass Road near Water World Parkway in Concord and tugged on the girl’s arm. On October 5, residents in Los Angeles County reported seeing people wearing clown masks, possibly carrying kitchen knives. And in San Pedro, two clowns were reported by Taper Avenue Elementary employees.
On Friday, threats to Marysville, California schools grabbed parents’ attention. The posts were made on Instagram by an account called “mozzytheclown.”
Police later arrested a juvenile in connection with the posts and threats.
Then on October 10, a clown was spotted on a bus in San Francisco and it flipped off a photographer who took its photo.
On October 16 in Santa Clarita, a homeowner fired a gun in the air to scare away a clown who approached him with a knife while he was sitting on his porch. Later, a different person dressed as a clown and carrying burglary tools was arrested.
Then on October 23, three men were reported in Westfield San Francisco Centre, carrying clown masks and a gun. Mall security said the clowns ran away from them.
California has dealt with creepy clowns in the past, CNN reported. Two years ago, Bakersfield police dealt with 20 calls about clowns, including one carrying a weapon. Daily Mail even wrote an article titled: “Mystery clowns that are terrorizing California towns at night have started carrying GUNS.” But so far, the clowns haven’t returned.
Colorado Clown Threats
In Fort Collins, police reported on September 28 that a threatening Facebook message included a clown’s photo. It threatened local high school students. Meanwhile, more clown sightings are being reported all over Denver. One woman said a clown started following her after she parked around 9:30 p.m. “Every time I would stop and turn around, it would just stare at me,” she said. When she got inside her house, it stood outside the window waving at her.
Then on October 6, a man punched a clown in Colorado Springs when he ran into him on Pikes Peak Greenway Trail and the clown refused to identify himself. The man said the clown hit him in the head with a bottle of whiskey and ran away.
Connecticut Clown Sightings & Hunt
You don’t want to leave Connecticut off your list of creepy clown sightings. On Monday night, October 3, hundreds of students armed themselves with hockey sticks and golf clubs as they went on a “hunt” for clowns, Patch.com reported. This is similar to what happened the same night at Penn State (which you can read about in the Pennsylvania section below.) That night, the police got up to thirty 911 calls about clown sightings on campus. The majority, Patch.com reported, were from people who said they heard about clowns on campus, not that they had seen any themselves. But a few had very specific reports of their sightings, including at Storrs Cemetery, Husky Village housing complex, and the Towers housing complex. Police couldn’t find the clowns.
A rumor spread that the school was under lockdown, along with Sacred Heart and Quinnipiac. But police said these rumors were false, Patch.com reported.
New Britain School was placed on lockdown on October 13 thanks to a clown-related incident. A man and woman were arrested for breach of peace after they were found driving with clown masks and playing very loud music.
Florida Clown Reports
Clown reports have been numerous in Florida. Police weren’t sure if a Victoria Park, Florida sighting was a reported crime or just an attempt to scare someone, USA Today reported. A woman said two people dressed as clowns were staring her down, but she didn’t know why. They weren’t found. Meanwhile, a video surfaced of a clown hiding in the woods near a Marion County road, NBC Miami reported. You can watch the video here. A clown was also spotted near Georgetown Apartments in Gainesville.
In the Bay Area, a teenage girl from Largo High School said that someone wearing clown shoes and a yellow-and-red-polka-dot outfit chased her while she was waiting at a school bus stop, Tampa Bay Times reported.
Pasco County also dealt with a clown scare and sent notifications warning parents about the concern. In the call, school officials said they were aware of an Ain’t Clowning Around Twitter site that posted threats to their high schools. The officials said they had no reason to believe the threat was credible, but were still placing the campuses on alert.
If you search this article for “Ain’t Clowning Around,” you’ll see that social media accounts with the same name have been making the same threats in states around the country. Other schools in Florida have also been locked down due to clown scares. A 12-year-old boy, connected with at least some of the threats, was arrested.
This isn’t the first time Florida has dealt with scary clown sightings. In 2014, people began dressing as clowns and scaring people in different Florida towns, including Jacksonville. Surveillance videos caught them on porches and sidewalks, Click Orlando reported. Here’s one of those videos:
Pumpkin ripper
Jacksonville, Florida pic.twitter.com/uEaHvub6Ld — Clown Sightings (@ClownsSightings) October 3, 2016
Georgia Clown Reports
The video above is from a WSB report of two girls who said they took Snapchat videos of a clown outside their apartment in Fort Oglethorpe. They said it appeared to be waving a knife before it ran away.
In Georgia, LaGrange police said they got multiple reports of children saying clowns were trying to lure them into wooded areas. A middle school was put on a soft lockdown because of clown reports, the LaGrange Daily News reported. In mid-September, two people were arrested for making false clown claims. Four people were later arrested for making clown threats, 11 Alive reported. Threats were also made to schools in Atlanta, AJC.com reported, but the district said there was no credible evidence for the threats.
In Athens, Georgia, an 11-year-old girl was so scared of clown reports that she took a knife with her to school for protection. She was arrested for carrying the knife.
Not all the clown reports are authentic, though. In mid-September, Georgia police arrested two people for making false clown reports to 911. The two friends, in their mid-20s, called police and said they saw clowns in a white van.
Hawaii Clown Sightings
Don’t leave Hawaii out of your list. On October 6, a group of clowns posted on Instagram, threatening violence to Wai’anae Intermediate school. The threat was posted from an account called 808Clownz. No clowns were found, but parents were still very shaken up.
Idaho Clown Reports
Idaho residents have also reported some creepy clowns. In fact, so many are reporting clowns that the Nampa Police Department posted on Facebook that “the large number of Nampa residents who are out looking for clowns is making this more difficult to deal with.” Idaho Statesman reported that several residents called the police about seeing people dressed as clowns, including one carrying a machete. But every time the police went to that location, the clown suspects were gone. Among these reports were accounts of a brightly colored van with several clowns inside.
In Wendell, a teen said they received an anonymous text that a clown was coming to their school. The Shoshone County Sheriff’s Office issued a press release about several students getting harassed similarly on Facebook. They said this was identical to harassing that students across the country were receiving.
Illinois Clown Sightings
And yes, there have been some clown sightings in Illinois too. According to The Haverhill Gazette, a man dressed in a clown costume approached a resident’s daughter and friend at night while they were sitting on their porch in Effingham and shined a flashlight on them. She said they saw the clown again a few days later at a vacant home nearby. No one has been arrested.
Multiple students at Greenville Elementary also reported seeing three clowns. They said one carried a briefcase, one had a knife, and one had a gun. They reportedly fled in an SUV.
Indiana Clown Reports
Indiana has had some clown sightings too. Steve Stewart, chief of police in Muncie, Indiana, commented about numerous Facebook rumors concerning clowns in the area. One post said that police were recommending people stay inside and keep their doors locked. However, police said it was possible people were starting rumors to scare people. Stewart said: “The thing I want the public to know is we know who some of these people (behind the rumors) are and we will be watching them closely,” The StarPress reported, adding that some of the people starting the rumors might also be dressing up as clowns.
In late September, a bus driver called police after finding out that a clown was scaring kids at a Fort Wayne bus stop, WANE.com reported. Clown sightings have also been reported at a number of universities, including Purdue, Indiana University, and Indiana State University.
However, reports that someone in Fort Wayne, Indiana shot and killed a clown are not correct, The Journal Gazette reported. The Gazette said the story was attributed to WANE-TV and quoted a police officer, but WANE-TV had not written any such stories, WANE’s digital director said. “They’re posing as us, using our logo.” The police officer also did not confirm the report.
Another hoax was uncovered in LaPorte, Indiana, WSBT reported. Police said that Matthew Cox, 24, reported being chased by two clowns until he stopped and fought back, punching one of the attackers. Police Det. Capt. Thomas Thate later said that after interviewing and investigating, they believed Cox had made up the whole thing.
Then on October 13, a clown was reported near Brockton Apartment Complex, carrying a hoe.
Kansas Clown Sightings
Kansas has joined the ranks of states with clown sightings too. KSHB reported that a child said they saw a clown in the bushes near 85th and Parallel Parkway in Kansas City. There were other reports too, but police have not been able to confirm any of them. This prompted them to make a Facebook post that there were no “actual” sightings. “It’s probably just a hoax,” Tom Tomasic of the Kansas City Police said.
Kentucky Clown Arrests
Lurking clown arrested by Kentucky police *coulrophobes sleep easyhttps://t.co/WyhRCoHiG1 pic.twitter.com/5DMd6XR9XP — BBC News (World) (@BBCWorld) September 23, 2016
In Kentucky, police arrested a man who was lurking around, dressed as a clown. He was dressed in a clown costume, hiding in trees by an apartment complex, police said. According to Kentucky.com, the man arrested was a 20-year-old in Bell County who was walking in a wooded area near apartments at around 2 a.m. In Gallatin County, Kentucky, extra security was assigned to schools after two people posted threatening messages on Facebook that included creepy clown profile pictures.
Meanwhile, one resident in Plano posted a weird photo on Facebook of a clown that she said was in her daughter’s backyard. This photo has been wrongly attributed to Plano, Texas, but it was actually in Plano, Kentucky.
And this was reported in Drake:
In Winchester, Kentucky, a woman told police that she was walking on a trail the night of Friday, September 30, when she was assaulted by a man wearing a clown mask, WDRB reported. She fought him off and escaped.
According to Kentucky.com, there have been many other sightings in the state, including in Waco and Laurel County. Sightings have been reported in multiple cities in the state.
Louisiana Clown Reports
Clowns were also seen in south Louisiana, KPLC 7 News reported in late September. Two people entered Matherne’s Supermarket in Paulina while wearing clown masks. They left when the manager told them to go. Unlike some sightings, these didn’t seem aggressive.
Then on October 15, police said they arrested a 21-year-old for wearing a clown mask and waving a gun at drivers in Rosepine.
Maine Clown Sightings
Maine has some clown sightings too, WCSH reported. A person wearing a clown mask was reported hanging out near an apartment complex in Orono for hours. Orono police responded on Facebook:
According to WCYY, more sightings popped up after that, including in Kennebunk, Standish at Saint Joseph’s College, and Wells.
Maryland Clown Sightings
Residents in Salisbury called police because they saw three people dressed as clowns hiding and jumping out of bushes, trying to scare people. They had blue hair and clown masks, but police didn’t find them, USA Today reported. Police searched a neighborhood for the clowns and eventually issued a statement asking people to stop scaring each other.
Meanwhile, four elementary-aged students in Annapolis said they saw clowns on their way to school, but police later realized it was all a hoax.
In fact, so many creepy clowns were being reported in Maryland that some clowns backed out of a Hagerstown parade so they didn’t cause any stress.
Massachusetts Clown Reports
Clown sightings have spread to Massachusetts too. Among them is a sighting in New Bedford. One resident shared this photo. If you look closely, you can see a second clown in the photo next to the one clearly visible.
In the comments, some readers tried to lighten the photo and said it looks like one is carrying a machete. What do you think?
Merrimack College in Massachusetts went on lockdown on Monday because of a clown scare, Fox 25 Boston reported. Students were told to shelter in place after a reported sighting of a clown that might be armed:
Merrimack College advised to shelter in place. Suspicious person dressed as a clown may be armed. Await further instruction from college. — The Beacon (@MCBeacon) October 4, 2016
Police later gave the all clear and reported that the sighting was “unfounded.”
Michigan Clown Sightings
Sightings have spread to Michigan, leaving many residents on edge. 9&10 News reported that a clown was “prowling a neighborhood” in Big Rapids late at night on Tuesday. A caller said the clown was wearing a blue costume, had red-pink hair, and was just staring at people before running into a wooded area.
A commenter on this article said there were clowns spotted in Lapeer, Michigan, and this has been confirmed by ABC12.com. Last month, they reported, an 18-year-old called police to report three clowns near the woods by Walmart, carrying what might have been a hammer or baseball bat. Police searched using thermal imaging and K9s but couldn’t find the clowns.
Another sighting was reported in Clinton Township, ABC 7 reported. A clown was seen waving outside a car wash. Police said the clown wasn’t doing any criminal acts, just waving. Two days later, two clown attacks were reported in Sterling Heights. A seven-year-old said his arm was scratched by a clown with red hair and a red nose who was carrying a sharp object. Later that night, two women said three males wearing clown masks yelled profanity at them, hit a bat against a fence, and ran away.
Sightings have been getting even scarier than that in Michigan. Detroit Free Press reported that on Tuesday night, October 4, a person wearing a clown mask and carrying a handgun robbed three businesses: two in Ann Arbor and one in Brownstown Township. Likely the same person tried to rob two hotels in Livonia just a few days earlier and tried to shoot police officers who were chasing him or her.
Minnesota Clown Reports
Clown incidents were reported in Minnesota too. A 15-year-old girl created a Kroacky Klown Facebook profile and used it to threaten to kill people in Bloomington and other cities. The girl said she only made the profile to scare her boyfriend but it all got out of hand.
Mississippi Clown Sightings
Creepy clown lurks in Mississippi town https://t.co/wwtN5WHKRy — Deanne Tanksley (@deannetanksley) September 26, 2016
A man dressed in a multi-colored wig, a mask, and overalls was seen carrying a machete in Mississippi. He ran away from a police patrol vehicle and hasn’t been arrested, USA Today reported. A driver in Rankin County said a clown sighting caused a wreck on Highway 469. And Facebook photos show a clown lurking around McComb.
Missouri Clown Reports
Missouri has also had quite a few clown sightings. Some reports were fake, as a Rolla police officer complained about in this Facebook post above. He wrote: “Then I ended up tracking down the young aspiring cinematographers, who will remain nameless… They had also heard of the creepy clowns that have been sweeping through the nation doing their nefarious deeds. Well it turns out one of the girls has [sic] a little bit of a fear of clowns so they decided to play a joke on her. They had one of the husbands put on a black outfit and put on a creepy clown mask.”
Not all of the sightings have been debunked, but they also haven’t been proved. In Cole County, multiple people have called in with clown sightings, but law enforcement hasn’t been able to locate any of the clowns.
A Facebook post on Friday, September 30 in Howard County threatened that clowns were going to kidnap students and kill teachers in mid-Missouri schools. The schools went on lockdown as a result, including the California R-1 School District, ABC17 News reported. The Sheriff’s Department said there was no immediate threat:
A series of clown sightings were reported in Jefferson County, Ozarks First reported. An anonymous caller said he saw a clown with a knife near Pevely Pointe Apartments. They could not find anyone to verify the claim at the location. Social media claims said clowns were seen southwest of Hillsboro, but Jefferson County Sheriff’s Department said it didn’t get any calls about that. Granite City police told KMOV they had gotten about 10 calls about people dressed as clowns, but they also couldn’t verify the claims.
A commenter on Heavy.com said that two clowns were sighted near Kirksville, but the clowns said they were only out to prank a friend.
Montana Clown Sightings
On October 12, a 15-year-old boy received a citation for a threatening clown post he made on Facebook on October 5. The post, made under a Facebook profile named Zootown Klown, threatened to kill students at Missoula schools.
Nebraska Clown Sightings
Nebraska has had a number of clown sightings, but only a few arrests. Clowns have been spotted in Grand Island, with three separate reports of clowns threatening or chasing residents, Journal Star reported. One happened on a Friday night, just south of downtown; then on a Saturday someone reported clowns carrying knives in the same area. On Sunday, police cited four teens dressed as clowns, including one carrying a BB gun. A woman also reported a clown banging on her back door and window. A clown was also reported at night on the campus of Northeast High School, though police could not find the suspect.
Nevada Clown Reports
Sparks, Nevada, is also experiencing clown reports, KTVN reported. Alex Ruelas said he and his friends saw a clown in the Shadow Mountain parking lot, holding what looked like a machete. They recorded Snapchat video of the clown running toward them. Sparks police have been getting more reports since that incident.
A Facebook profile was also threatening El Dorado, Canyon Springs, Legacy, and Las Vegas high schools.
New Jersey Clown Sightings
Please be safe everyone. There has been a clown sighting in Bayonne. This especially goes to my friends in Jersey City. pic.twitter.com/Dvv7smSpnp — Dianna (@stephdiannaxo) October 1, 2016
Apparently there have been some reports of clowns in New Jersey, including clowns carrying weapons, NJ.com reported. A boy said he was chased into the woods by three clowns in Walters Park, but police couldn’t find the suspects. Three additional sightings were reported in Warren County. A child said he was chased down Shafer Avenue by a jester holding a sword. And a resident reported seeing a truck with several clowns driving down Mercer.
On Friday, September 30, Phillipsburg Middle School had a shelter-in-place due to a threat that a clown would attack the school. Police ruled the threat not credible, New Jersey 101.5 reported.
Vineland Public Schools received one of the “Ain’t Clowning Around” social media threats that many states are getting. The district notified parents about the social media message through a phone notification.
Clowns have also been reported in Bound Brook, Monroe, North Plainfield, and Spotswood.
Toms River police spokesman Ralph Stocco told 101.5 that early clown sightings may have stemmed from a movie promotion that eventually led to copycats.
New Mexico Clown Sightings
Not to be outdone, New Mexico also has had clown sightings in Roswell, Alamogordo, and Hobbs, KOB.com reported. It’s gotten so bad that some professional clowns in the state are worried the sightings will affect their jobs. In Roswell, for example, police said that residents have told them about random clowns wandering around, with some carrying baseball bats, KOB reported. Similar reports were made in Hobbs, and a school in Las Cruces was threatened.
In Albuquerque, three juveniles were arrested while they were wearing clown masks outside a children’s clothing store on October 9. They had a handgun with them, Associated Press reported.
New York Clown Sightings
Everyone please be careful driving tonight in Brentwood. I just witnessed 3 men dressed in clown suits on Commack Rd. Coming towards my car. — 👸🏻 (@MoodyMelz) September 29, 2016
Police on Long Island, New York, sent out warnings on Friday, September 30, about clown sightings from the previous week. One person said people dressed as clowns were jumping in front of cars. Another said that a person was dressed as a clown in North Babylon. No arrests have been made. North Babylon High School was put on lockout on Friday after an unknown person made clown-related threats on social media, ABC 7 NY reported.
Sightings have also been reported near Utica College. On Thursday night, September 29, callers told police that two clowns approached people in Pixley Park carrying what appeared to be baseball bats and knives. Law enforcement couldn’t find the clowns. Police received other calls about clowns in Utica, but these callers didn’t see the clowns carrying weapons.
Herkimer police have also gotten calls about clowns. One sighting was at a K-Mart on Washington Street, WKTV reported. A second sighting was on September 28, a few blocks away. The person dressed as a clown was reported to be wearing “creepy killer clown attire” but only made eye contact.
In Syracuse, a 10-year-old boy told police three men in clown costumes approached him; he hid from them between two houses until he had a chance to flee.
Syracuse Police say one child hid between two homes and called 911. Police say they have a detailed description of the three clowns pic.twitter.com/tfHjovb9Ff — Alex Dunbar (@AlexDunbarNews) September 29, 2016
And in Amsterdam, middle school students said a clown chased them down several streets. Four other similar reports were made to authorities.
A Twitter account called @LIClowns (Suffolk Clowns) began making threats in the New York area, but it was later suspended from Twitter.
On October 5, a 16-year-old reported that a clown carrying a knife on a Manhattan subway threatened him and tried to block him from entering the subway at 96th Street. On October 13, police arrested a 53-year-old man in connection with the incident. On October 9, a clown was reported trying to lure people into the woods in Sloatsburg. On October 10, a clown chased a girl as she was walking her dog in Harriman State Park.
North Carolina Clown Reports
Children in North Carolina reported in early September that a clown tried to lure them into the woods. He had red shoes, red hair, a white face, and a red nose. He offered them treats. Police have made no arrests, USA Today reported. Meanwhile, earlier in September Greensboro residents said a clown ran into the woods after it was chased by a man with a machete.
This is where clown sighting was reported. @cityofwspolice say he used treats to try & lure kids into woods @myfox8 pic.twitter.com/uN16osd5pK — Sarah Krueger (@WRALSarah) September 5, 2016
In early September, police started doing extra patrols in Winston-Salem after two children said a clown was trying to lure kids into the woods with treats. The clown was wearing white overalls, red shoes, red hair, and a red nose. The police couldn’t find the suspect, Fox News reported.
One person on Facebook shared a post from North Carolina.
Despite what the post claims, reports were of clowns trying to lure kids away, but there were no reports, as of the time of publication, of actual kidnappings in North Carolina.
In early September, a North Carolina man was arrested for making a false report to police that a clown had knocked on his window at night. He had claimed that he had chased the clown into the nearby woods.
Ohio Clown Sightings
Ohio is one state with recent sightings. On Thursday night, September 29, a woman reported that she was attacked by a man dressed as a clown, who then threatened students at the Reading Community School District. Mount Notre Dame High School was also closed because it shares a parking lot with Reading High School, Cincinnati.com reported. The woman was smoking a cigarette on her porch when a man dressed as a clown grabbed her around the neck and made threats against the school and her. The School District later reported that the man had not yet been arrested.
In another incident, a juvenile was arrested in Colerain Township who threatened the high school there. The police said in a Facebook post:
“This suspect used the current clown trend to further terrorize parents and students and has been charged with Making Terrorist Threats and Inducing Panic.”
Reading’s Homecoming Parade, football game, and homecoming dance were set to continue as scheduled.
Meanwhile, a clown hoax was uncovered in Norwalk, Ohio. A photo was posted to Facebook claiming clowns were seen in Norwalk and the police got calls from people who said they saw people in clown costumes harassing residents, jumping out of the woods and scaring people, and trying to break into homes, Fox 8 reported. One woman even live streamed on Facebook as she was hunting for clowns. Police found the man and woman who made the original photo that went viral on Facebook. The couple said they dressed as clowns themselves for the photo because they wanted to be famous.
There was another hoax uncovered in Reading. WCPO reported that an 18-year-old woman said she was attacked by a clown wielding a knife. However, after police started digging into her story, they uncovered quite a few inconsistencies. She later confessed that she made it up because she was running late for her job at McDonald’s.
And in Columbus, Ohio, teens reported being chased by a six-foot-tall man who was holding a knife while wearing a clown mask.
During the first weekend of October, more arrests were made connected to clowns. Montgomery police charged five students with inciting panic connected to clowns, CBS News reported. And in Fairborn, police arrested a 15-year-old boy for making terroristic threats when he created a clown persona on Facebook and used it to send death threats to students in Fairborn.
On October 4, a Speedway gas station in Dayton was robbed by three people, one of whom was wearing a clown mask. The other two were wearing a surgical mask and a Guy Fieri mask.
A video was shared on Facebook in Ohio, but it’s not authenticated, so we don’t know if it might just be something made up to share on social media.
In late December, another clown sighting popped up in Ohio. A 55-year-old man dressed in a clown suit was arrested for drunk driving. He said he was wearing the clown costume because he had been at a party. An official with the police department said about the DUI:
“It’s just proof that all backgrounds and socioeconomic backgrounds commit this offense – including clowns.”
Oklahoma Clown Reports
There have been multiple reports of a clown walking around campus. The O'Colly will update as this story continues. — O'Colly (@OColly) October 4, 2016
Unfortunately, you can add Oklahoma to the list of states with clown sightings. KTUL reported that Oklahoma State University students called campus police about seeing a clown walking around the dorms. Officers couldn’t find the clown, and students who went searching in groups also failed to find it. That was just the first of many reports to follow. Sightings have been reported in Moore, Oklahoma City, McAlester, Tulsa, Chickasha, and Miami.
In Moore, residents one night confronted two people wearing clown costumes who were loitering in a playground. Residents said one clown ran, but the other listened to the residents talking to him about the ramifications of what he was doing, came around to their viewpoint, and removed his costume, KFOR reported.
Oregon Clown Sightings
When the clown epidemic reaches Oregon and your BACKYARD IS LITERALLY 9 ACRES OF FOREST — Kenz 🍾 (@knzsnfrd) October 2, 2016
The scary clown reports made their way to Oregon too, according to KTVZ. An employee of Central Oregon Eyecare reported seeing someone dressed as a clown around 10:30 p.m. near Southwest Indian Avenue and 11th Street in Redmond. The employee said the person was wearing a clown mask and blue pants. She asked for a police escort to her car because she was scared, but police couldn’t find the suspect.
Another clown was reported in Portland, Oregon. A woman shopping near O’Bryant Square around noon on Friday, September 30, said a man wearing a silver clown mask and black clothes came up to her car and started banging on the driver’s side window. He tried to open all the doors and she sped away.
A 55-year-old was arrested for wearing a clown mask and taunting students at Floyd Light Middle School, KGW reported.
Pennsylvania Clown Stabbing, Threats, & Penn State Clown Hunt
PennLive reported that a high school student was stabbed and killed after getting in a fight with someone wearing a clown mask. But that’s not the only report. York College students reported seeing students on and off campus dressed as clowns in late September, including a car full of people dressed as clowns and possibly carrying weapons, USA Today reported. They weren’t found by police. Pottsville, PA police are investigating reports of clowns yelling at children.
Children in Northampton County said three people dressed as clowns chased them one Monday afternoon, but school staff didn’t see the clowns. Other sightings were reported in Easton and Huntingdon County.
Philadelphia is now affected too, with a social media account posting a threat specifically toward Philadelphia school students, 6ABC.com reported. The threats read: “We coming to all north east charter high schools…we will be after your let out on Wednesday” and “the gang is in philly … ya’ll ain’t safe. it’s 12 of us yaheard [sic].” The Philadelphia police department and the school district released a joint statement on Sunday saying the Office of Homeland Security was also alerted and they are taking the threats seriously.
Clowns were also reportedly sighted in Philadelphia on Sunday at Boyle Park. The clowns, residents said, were chasing kids but ran away from the parents. If you have any information about the Philadelphia threat, call 215-686-TIPS.
Clown sightings have also infiltrated Penn State. Apparently a group of students, possibly 1,000 or more, decided to hunt down clowns on Monday night after sightings began popping up on campus. Even the football team joined in. (Read more about the Penn State Clown Hunt in Heavy’s story here.)
CLOWNS have caused riots and clown hunting at penn state this is the end pic.twitter.com/5rmRVTBLgT — Emma Karpinski (@Emmakarpinskiii) October 4, 2016
Here’s another video of the clown hunt:
Some people run away from clowns, Penn State runs towards them pic.twitter.com/Y7JNgBeJZo — Bryce Vukovich (@B_Vukovich) October 4, 2016
But not all sightings in the state have been authentic. A video purportedly showing a clown on the side of the road in Mercer, Pennsylvania ended up being a viral photo that’s been attributed to multiple locations, including West Virginia. (See the West Virginia entry for more information.)
Rhode Island Clown Reports
Clown sightings are also popping up in Rhode Island, Herald News reported. First, the clowns got dangerously close to Rhode Island, which made people nervous. Here’s just one video the news source shared, which shows a clown dancing around in a parking lot in Massachusetts. They didn’t stay in Massachusetts long, however:
Spotted the killer clowns in Fall River pic.twitter.com/ioxOD6yK33 — caitlin (@Caitlinjean02) October 2, 2016
As they moved closer, Rhode Island residents expressed worry:
@sfontxo @cborges96 what!!! I didn't know it was happening so close 😭 — jessica 🎄 (@jpachxo) October 3, 2016
This week, a social media threat was made against two Pawtucket, Rhode Island schools, Providence Journal reported. But it turns out clowns had already been reported in Rhode Island last week, when they reportedly chased someone out of Slater Park three different times. Although police haven’t seen the clowns, they’re taking residents’ concerns seriously. The West Warwick Police Department addressed the concerns on Facebook.
Interestingly, a clown known as “Wrinkles” moved from Rhode Island to Florida last year and started creating quite a stir, The Washington Post reported. The 65-year-old creepy clown had shown up at public gatherings and outside people’s homes for years. He can be hired for a few hundred dollars, and he’s even been hired to scare children into reforming: “Do you want Wrinkles to come back?” He didn’t give his real name out to news media, but confirmed he was 65 and from Rhode Island. Here’s what Wrinkles looks like:
Halloween is over but "Wrinkles the Clown" may haunt your dreams. @OlessaStepanova has EyePoppers at 6:24am. #wcvb pic.twitter.com/yp7556g2JM — Jenny Barron (@JennyWCVB) November 2, 2015
South Carolina Clown Sightings
This is the state where the problem began in mid-August, the New York Times reported. Residents reported a white man in clown makeup and red hair staring a woman down at a laundromat. Children reported clowns trying to lure them into the woods.
Police also said a number of clowns were reported in Greenville County hiding near apartment complexes and knocking on doors. One apartment complex even sent a letter out to residents warning them. The sightings haven’t been confirmed by police or with video or photos.
One Greenville apartment complex warned residents to stick to the 10 p.m. curfew and not let their children out alone. No arrests have yet been made, and authorities have said the sightings haven’t been officially confirmed. (Learn more in our story here.)
Tennessee Clown Robbery
On September 28, two men — one dressed in a clown mask — robbed a Memphis bank after carrying explosives inside, USA Today reported. In a separate incident, a social media threat led to two Nashville schools going on lockdown, but police later said the clown threat wasn’t credible.
Texas Clown Sightings
Clown sightings are being reported in Texas, too, and they’re increasing in number. In Corpus Christi, someone posted a clown photo on social media and said they would visit 10 campuses. A seventh-grader faces disciplinary action after encouraging the anonymous clown to visit his school, KRISTV reported. It’s not yet known who was behind the original post.
Meanwhile, clown sightings are growing in the Dallas-Fort Worth area. DFW Scanner issued an update on Facebook about clown sightings. Fort Worth police reported that a person saw a clown running on a sidewalk in a residential area, and more than a dozen other sightings have been reported. DFW Scanner wrote that there were so many reports out there, it was tough to separate fact from rumor.
Residents reported multiple clown sightings in Southlake on Monday. And in Austin, Texas, social media threats have been made against several schools including Reagan and Travis Early College High Schools and Martin Middle School. There have also been unverified sightings of clowns, KXAN reported. Hays CISD said it investigated a clown threat that it determined was a hoax. Manor ISD investigated claims about a clown at the high school, but couldn’t verify the claims. And Del Valle High School heard a rumor about clowns, but said school would continue as usual.
Then the clown craze continued with reports of a clown assault at Texas State University on October 3. A woman reported that she was attacked by a clown around 7 p.m. outside Bobcat Village Apartments. You can read more about what happened in Heavy’s story here.
Amarillo has also been a location of sightings. KSWO reported residents telling them about clown sightings at bus stops and Medi Park, but those ended up being from a local haunted house, Sixth Street Massacre, trying to promote its events. However, there are new reports in Borger and Canyon, connected to Twitter accounts, and it’s not known if those are also connected to the haunted house, KSWO reported.
The Red Oak, Texas, police department took a different approach to the whole thing by writing a long post on Facebook about clown sightings. The post explored the history of clown sightings and connections to Stephen King. You can read the post here.
Utah Clown Sightings
Two Ogden, Utah schools had lockouts due to clown sightings, but police officers say the rumors were not substantiated. Gramercy Elementary and Mount Fort Junior High went on lockout after there were reports that someone was on campus dressed as a clown. Police officers weren’t able to confirm the sighting as legitimate, Fox 13 reported. The reports came after a Facebook user threatened the schools with an account that had a profile photo of a clown.
In Orem, Utah, police said they had not received any reports of clowns in the city, after concerned parents called them numerous times.
In Provo, Utah, a woman told police that a man dressed as a clown ran across her yard around midnight.
Vermont Clown Arrest
In Williston, Vermont, a 15-year-old was arrested after banging on classroom windows while wearing a clown mask. Officers said that was the only verified report of clown violence or scares.
Virginia Clown Sightings
Fort Defiance had multiple clown sightings, USA Today reported, but no one was caught. Multiple callers said people were dressed as clowns. For anyone over 16, wearing anything that conceals your identity is against Virginia law and can be a Class 6 felony.
WTVR reported seven places in Chesterfield County where creepy clowns were spotted. In one incident, a man tried to open a driver’s car door on Old Warson Drive, with another person dressed as a clown standing behind him. On Marina Drive, a person reported seeing someone wearing a clown suit while holding a knife. And clowns were seen running on Omo Road and on Alberta Road. And a 13-year-old boy was cited by police for creating a social media account for a clown that referenced a school shooting.
In Petersburg, a woman reported seeing two clowns in a car, driving erratically. At a stop light, one got out and started approaching her car, NBC 12 reported.
A mother in Henrico County, Virginia took video of someone dressed as a clown who was riding in a car and stopped next to them at an intersection. Do you think the mom and daughter were overreacting?
This wasn’t the only sighting in Henrico though. NBC 12 reported that three clowns were seen driving around Knight Drive, entering back yards and knocking on windows. Police couldn’t find the suspects.
Clowns were also spotted in Washington County, Virginia, WJHL reported. The county’s first sighting was Monday night, October 3, on Conner Lane near High Point Elementary around 8 p.m. A brother and sister told police they saw a clown dressed in black with a red nose and purple hair. Police couldn’t find the clown.
In some cases, however, the perpetrator isn’t actually the clown. The Associated Press reported that in Hampton, Virginia, a 13-year-old girl was arrested after asking a clown on social media to kill her teacher.
Washington Clown Reports
Some reports are coming in from Washington state too. The News Tribune reported sightings in Pierce County. Three people said they saw men dressed in scary clown costumes hiding under bridges or in wooded areas. There were no reports of crime or violence. A sheriff’s spokesman said they were probably just “jumping on the bandwagon of what’s been going on around the country.”
Another photo cropped up on Facebook from Tukwila, Washington.
Students in Puyallup’s Rogers High School received texts on October 3 from a clown who threatened to kill students at the school. One person told police a clown was seen near campus carrying a knife. The school was put on lockdown on October 4 and two clown masks were found near the school. Two other schools — Emerald Ridge High School and Glacier View Junior High — were also closed on October 4.
Washington, D.C. Clown Report
A 14-year-old girl was arrested in early October for making clown threats to McKinley Middle School.
West Virginia Clown Photo or Hoax?
Some sightings have cropped up in West Virginia, but it’s unclear how accurate or authentic they are. A photo was shared on Facebook showing a clown standing on the side of the road in the Montcalm/Duhring region.
Fox 29 reported on the photo here. This particular photo, however, has gone viral, so it’s tough to pinpoint just where it originated, making it more likely to be a hoax. It was shared on YouTube with the claim that the clown was seen in Mercer, Pennsylvania.
Wisconsin Clown Sightings
One sighting in Wisconsin ended up being a hoax. A man who set up a Facebook account to report sightings of Gags the Green Bay Clown later came clean and said it was part of an independent horror film coming out soon. Unfortunately, not all Wisconsin sightings have been uncovered as hoaxes yet. One mom kept her son out of school because she had heard so many rumors of people wearing clown masks roaming around the Universal Academy for the College Bound, the Milwaukee-Wisconsin Journal Sentinel reported. Other schools have also been the source of rumors. Meanwhile, a seventh-grade girl in West Bend faces possible charges for sending messages to six students in which she pretended to be a clown. In Beloit, police need help finding who owns the Facebook page “Twist the Clown.”
An incident in Wisconsin from last year, November 2015, adds an interesting perspective to the whole thing.
Waukesha residents are buzzing about a "clown" spotted around town at night in recent days. #WISN12 at 10. pic.twitter.com/dpGdb1IqS0 — Nick Bohr (@NickBohr) November 24, 2015
A Waukesha, Wisconsin, mom said her son often dressed up as a clown, even into late November. She said he did it for laughs and smiles and didn’t intend to upset anyone. But many people were disturbed to see a random clown wandering around. Some of the sightings, his mom insisted, could not have been her son, but may have come from someone else dressing in a similar clown costume.
Clown sightings aren’t limited to the United States. In 2013, a 22-year-old filmmaker was arrested in Northampton after he dressed up as a clown and continually scared people in the area. Someone even started dressing as a vigilante to catch the clown before he was identified:
This guy in Far Cotton calls himself #TheClownCatcher omg what has my town turned into? #northamptonclown *FacePalm* pic.twitter.com/gjUTPmRInH — MISS Y AM I HERE (@JuneJailer) September 15, 2013
In 2014, Daily Mail reported that people in America and Europe were dressing up as clowns and scaring strangers. Police in France even issued a warning against mobs of clowns or vigilante mobs going after clowns.
Do you know of other states with sightings not reported in this article? Let us know in the comments below.
Read more about these creepy clowns in Spanish at AhoraMismo.com: | Target will stop selling clown masks both online and at brick-and-mortar locations due to the "crazy clown" craze that has spread across the US. Fueled by social media and sensational news stories of people in clown masks terrorizing residential neighborhoods, a slew of copycats have sprung up across the nation. The Minneapolis-based retailer has decided to do its small part to stop the craze by refusing to sell the popular Halloween masks, the Star Tribune reports. A Target spokesperson kept it neutral, telling CBS Minnesota simply, "Given the current environment, we have made the decision to remove a variety of clown masks from our assortment, both in stores and online." There have been several clown-related incidents in the Minneapolis-St. Paul community, where Target is headquartered, as well as at least one instance in nearby Michigan of two teenage girls using the masks to terrorize younger kids. That's far from all—Heavy is keeping a list of "threatening clown sightings" reported in the US, and so far it includes 40 states with reported incidents. The craze has even spread beyond the US, according to New York magazine. A teen in Sweden was stabbed by someone in a clown mask, and two teens in the UK say they were chased by a person wearing a clown mask and wielding a machete. |
The U.S. Congress is contemplating a $700 billion government assistance package to arrest the financial crisis in the United States. President Bush argued that failure to enact legislation quickly could result in a wholesale failure of the U.S. financial sector. As discussion of the Administration's plan unfolded, however, questions in Congress arose over issues of magnitude and management of the "bailout," the need for oversight, and the possibility that less costly and perhaps more effective alternatives might be available. In this light, Chile's response to its 1981-84 systemic banking crisis has been held up as one example. Relative to the size of its economy, the cost was comparable to that facing the U.S. Government today. In 1985, Central Bank losses to rescue financially distressed financial institutions were estimated to be 7.8% of GDP (equivalent to approximately $1 trillion in the United States today). The policy options Chile chose had both similarities to and differences from those contemplated in the United States today. Their relevance is debatable, but they do highlight an approach that succeeded in eventually stabilizing and returning the Chilean banking sector to health, while keeping the credit markets functioning throughout the crisis. The seeds of the Chilean financial crisis were much different from those in the United States. Nonetheless, in both cases, the financial sector became the primary problem, with policy makers concerned over the prospect of a system-wide collapse. Chile's problems originated from large macroeconomic imbalances, deepening balance of payments problems, dubious domestic policies, and the 1981-82 global recession that ultimately led to financial sector distress. Although most of these are not elements of the U.S. crisis, there are a number of similar threads woven throughout both cases. Broadly speaking, both countries had adopted a strong laissez-faire orientation to their economies and had gone through a period of financial sector deregulation in the years immediately prior to the crisis. A group of scholars characterized Chile's orientation toward the financial sector as the "radical liberalization of the domestic financial markets" and "the belief in the 'automatic adjustment' mechanism, by which the market was expected to produce a quick adjustment to new recessionary conditions without interference by the authorities." In both cases, given the backdrop of financial sector deregulation, a number of similar economic events occurred that ultimately led to a financial crisis. First, real interest rates were very low, giving rise to a large expansion of short-term domestic credit. With credit expansion came the rise in debt service, all resting on a shaky assumption that short-term rates would not change. In both cases, but for different reasons, rates did rise, causing households and firms to fall behind in payments and, in many cases, to default on the loans. The provision for loan losses was inadequate, causing financial institutions to restrict credit. Soon, many found themselves in financial trouble or insolvent, resulting in the financial crisis. Chile's response may prove useful as policy makers evaluate options. Following the coup against socialist President Salvador Allende in 1973, General Augusto Pinochet immediately re-privatized the banking system. Banking regulation and supervision were liberalized. Macroeconomic conditions and loose credit gave way to the economic "euphoria of 1980-81." 
The exuberance included substantial increases in asset prices (reminiscent of a bubble) and strong wealth effects that led to vastly increased borrowing. The banking system readily encouraged such borrowing, using foreign capital that, because of exchange rate controls and other reasons, provided a negative real interest rate. From 1979 to 1981, the stock of bank credit to businesses and households nearly doubled to 45% of GDP. This trend came to a sudden halt with the 1981-82 global recession. The financial sector found itself suddenly in a highly compromised position. Weak bank regulations had allowed the financial sector to take on tremendous amounts of debt without adequate capitalization. Debt was not evaluated by risk characteristics. Most debt was commercial loans, but banks also carried some portion of consumer and mortgage debt. As firms and households became increasingly financially stressed, and as asset prices plummeted, the solvency of national banks became questionable. Two issues would later be identified: the ability of borrowers to make debt payments and, more importantly, the reluctance of borrowers to do so given the broadly held assumption that the government would intervene. By November 1981, the first national banks and financial institutions that were subsidiaries of conglomerates failed and had to be taken over by regulatory authorities. Most debt was short term, and banks were in no position to restructure because they had no access to long-term funds. Instead, they rolled over short-term loans, capitalized the interest due, and raised interest rates. This plan was described by one economist as an unsustainable "Ponzi" scheme, and indeed it was a critical factor in bringing down many banks as their balance sheets rapidly deteriorated. From 1980 to 1983, past-due loans rose from 1.1% to 8.4% of total loans outstanding. The sense of crisis further deepened because many of the financial institutions were subsidiaries of conglomerates that also had control over large pension funds, which were heavily invested in bank time deposits and bank mortgage bonds. In the end, although the roots of the banking crisis were different from those in the United States, the Chilean government faced the possibility of a complete failure of the financial sector as credit markets contracted. The Central Bank of Chile took control of the crisis by enacting three major policies intended to maintain liquidity in the financial system, assist borrowers, and strengthen lender balance sheets. These were: 1) debt restructuring for commercial and household borrowers; 2) purchases of nonperforming loans from financial institutions; and 3) the expeditious sale, merger, or liquidation of distressed institutions. From the outset of the rescue plan, the Chilean Central Bank considered providing relief to both debtors and lenders. There were two rationales. First, as a matter of equity, there was a sense that households as well as firms should be helped. Second, to maintain a functioning credit market, both borrowers and lenders needed to be involved. The Central Bank decided to restructure commercial, consumer, and mortgage loans. The goal was to extend the loan maturities at a "reasonable" interest rate. The debtor was not forgiven the loan; rather, banks were given the means to extend the maturities of the loans to keep the debtor repaying and the credit system functioning. Restrictions were in place. 
Eligible firms had to produce either a good or a service, eliminating investment banks that held stock in such firms. Only viable businesses were eligible, forcing the bankruptcy procedures into play where unavoidable. To keep the program going, the loan conditions of each subsequent iteration of the program became easier: longer maturities, lower interest rates, and limited grace periods. The program allowed the Central Bank to lend firms up to 30% of their outstanding debt to the banking system, with the financing arrangement working in one of two ways. At first, the Central Bank issued money and lent it to debtors, which used it to pay back the bank loans. Later, the Central Bank issued money to buy long-term bonds from the banks, which used the proceeds to restructure the commercial loans. Variations of this process were applied to consumer and mortgage debtors. In cases where loans were made directly from the Central Bank to the debtor, repayment usually was expected to begin 48 months after the loan was made. The fiscal cost was significant, approximating 1% of GDP in 1984 and 1985. The second program, the purchase of nonperforming loans, was more controversial and had to be adjusted over time to be effective. The key idea was to postpone recognition of loan losses, not forgive them. It relied on identifying nonperforming loans and giving banks time to provision against them, without risking insolvency. The process has been variously characterized as the Central Bank taking on bad debt through loans, purchases, or swaps. All three concepts play some part in this complex, largely accounting-driven arrangement. Initially, this program was described as a sale, although there was no exchange of assets. The Central Bank technically offered to "buy" nonperforming loans with non-interest-bearing, 10-year promissory notes. Banks were required to use future income to provision against these loans and "buy" them back with the repurchase of the promissory notes. In fact, they were prohibited from making dividend payments until they repaid the Central Bank in full. The banks, though, actually kept the loans and administered them, but did not have to account for them on their balance sheets. This arrangement was intended to encourage banks to stop rolling over nonperforming loans, recognize the truly bad ones, and eventually retire them from their portfolios. The banks benefited by remaining solvent and gaining time to rebuild their loan loss reserves so as to address nonperforming loans. The credit market was served by banks being able to continue operating with increased funds from released loan-loss reserves. This program did not work as hoped at first and had to be adjusted. The Central Bank allowed more time for banks to sell nonperforming loans and also permitted a greater portion of their loan portfolios to qualify. It also began to purchase these loans with an interest-bearing promissory note. The banks, however, actually repaid the interest-bearing note at a rate 2 percentage points below that paid by the Central Bank to the banks. This added differential was sufficient incentive for the banks to sell all their bad loans to the Central Bank, beginning a process of identifying good loans and allowing for the eventual retirement of bad loans from the balance sheets (and the banking system). The cost to the Central Bank increased, but by 1985, the portfolio of nonperforming loans at the Central Bank began to decline and was eventually eliminated. 
A major goal of government actions was to ensure that bank owners and creditors were not absolved of responsibility to help resolve the crisis, including using their own resources to absorb some of the costs. The government worked closely with all financial institutions to impose new risk-adjusted loan classifications, capital requirements, and provisioning for loan losses, which would be used to repurchase loans sold to the Central Bank. The banks, through the Central Bank purchase of substandard loans, were given time to return to profitability as the primary way to recapitalize, and became part of the systemic solution by continuing to function as part of the credit market. A number of banks had liabilities that exceeded assets, were undercapitalized, and were unprofitable. Their fate was determined based on new standards, and they were either allowed to be acquired by other institutions, including foreign banks, or liquidated. The "too big to fail" rule was apparently a consideration in helping keep some institutions solvent. A total of 14 financial institutions were liquidated, 12 during the 1981-83 period. In most cases, bank creditors were made whole by the government on their deposits with liquidated banks. For three financial institutions that were closed in 1983, depositors had to accept a 30% loss on their assets. The overriding goal of a strategy to correct a systemic crisis in the financial sector is to ensure the continued functioning of credit markets. Chile succeeded in accomplishing this goal and restoring a crisis-ridden banking system to health within four years. The single most important lesson of the Chilean experience was that the Central Bank was able to restore faith in the credit markets by maintaining liquidity and bank capital structures through the extension of household and consumer loan maturities, the temporary purchase of substandard loans from the banks, and the prompt sale and liquidation of insolvent institutions. Substandard loans remained off bank balance sheets until the viable institutions could provision for their loss from future profits. Other losses were covered by the government. In addition, a number of other insights emerged from the Chilean crisis: The market could not resolve a system-wide failure, particularly in the case where there was a high expectation of a government bailout. The expectation of a bailout became self-fulfilling and increased the cost. Appropriate prudential supervision and regulation were critical for restoring health and confidence to the financial system. Observers lamented the a priori lack of attention to proper regulation. Private institutions that survived shared in the cost and responsibility to resolve the crisis to the apparent long-term benefit of the financial sector. The fiscal cost of the three policies discussed above was high. Liquidating insolvent institutions had the highest cost, followed by the purchase of nonperforming loans and rescheduling of domestic debts. The strategy, however, is widely recognized as having allowed the financial system and economy to return to a path of stability and long-term growth. | Chile experienced a banking crisis from 1981-84 that in relative terms had a cost comparable in size to that perhaps facing the United States today. The Chilean Central Bank acted quickly and decisively in three ways to restore faith in the credit markets. 
It restructured firm and household loans, purchased nonperforming loans temporarily, and facilitated the sale or liquidation of insolvent financial institutions. These three measures increased liquidity in the credit markets and restored the balance sheets of the viable financial institutions. The Central Bank required banks to repurchase the nonperforming loans when provision for their loss could be made and prohibited distribution of profits until they had all been retired. Although the private sector remained engaged throughout the resolution of this crisis, the fiscal costs were, nonetheless, very high. |
GPRA is intended to shift the focus of government decisionmaking, management, and accountability from activities and processes to the results and outcomes achieved by federal programs. New and valuable information on the plans, goals, and strategies of federal agencies has been provided since federal agencies began implementing GPRA. Under GPRA, annual performance plans are to clearly inform the Congress and the public of (1) the annual performance goals for agencies’ major programs and activities, (2) the measures that will be used to gauge performance, (3) the strategies and resources required to achieve the performance goals, and (4) the procedures that will be used to verify and validate performance information. These annual plans, issued soon after transmittal of the President’s budget, provide a direct linkage between an agency’s longer-term goals and mission and day-to-day activities. Annual performance reports are to subsequently report on the degree to which performance goals were met. The issuance of the agencies’ performance reports, due by March 31, represents a new and potentially more substantive phase in the implementation of GPRA—the opportunity to assess federal agencies’ actual performance for the prior fiscal year and to consider what steps are needed to improve performance and reduce costs in the future. The mission of the Department of Defense is to support and defend the Constitution of the United States; provide for the common defense of the nation, its citizens, and its allies; and protect and advance U.S. interests around the world. Defense operations involve over $1 trillion in assets, budget authority of about $310 billion annually, and about 3 million military and civilian employees. Directing these operations represents one of the largest management challenges within the federal government. This section discusses our analysis of DOD’s progress in achieving outcomes and the strategies that DOD has in place, particularly human capital and information technology, for accomplishing these outcomes. In discussing these outcomes, we have also provided information drawn from our prior work on the extent to which DOD provided assurance that the performance information it is reporting is credible. In general, the extent to which DOD has made progress in achieving the six outcomes is unclear. In our opinion, one of the reasons for the lack of clarity is that most of the selected program outcomes DOD is striving to achieve are complex and interrelated and may require a number of years to accomplish. This condition is similar to what we reported last year on our analysis of DOD’s fiscal year 1999 performance report and fiscal year 2001 performance plan. Further, with the new administration, DOD is undergoing a major review of its military strategy and business operations, which may result in changes to the way DOD reports performance information. The extent to which the Department has made progress toward the outcome of maintaining U.S. technological superiority in key war-fighting capabilities is difficult to assess. DOD’s performance goal for this outcome is to transform U.S. military forces for the future. As we reported last year, some of the performance goal’s underlying measures—such as procurement spending and defense technology objectives—do not provide a direct link toward meeting the goal, thus making it difficult to assess progress. 
DOD’s performance report does not reflect concerns raised within the Department about the adequacy of its strategy and institutional processes for transforming forces. We noted in a prior report that a transformation strategy is presented in the former Secretary of Defense’s 2001 Annual Report to the President and the Congress. However, the strategy does not clearly identify priorities or include an implementation plan and outcome-related metrics that can be used to effectively guide the transformation of U.S. forces and assess progress. This topic is currently being reviewed by the new administration. As we reported, a 1999 Defense Science Board study had recognized the need and called for such an explicit strategy, or master plan; a roadmap; and outcome-related metrics to assess progress. Also, a joint military service working group identified a need for a comprehensive strategy as an issue that the 2001 Quadrennial Defense Review must address. Further, the Defense Science Board, Joint Staff and unified command officials, joint military service working group, and others raised concerns about the ability of DOD’s current institutional processes to turn the results of transformation initiatives into fielded capabilities in a timely manner. These processes—which include DOD’s planning, programming, and budgeting system and weapons acquisition system—focus on near- or mid-term requirements and do not foster the timely introduction of new technologies to operational forces. For each of the supporting performance measures, DOD’s report describes data collection and verification measures. However, our work in this area has not addressed the reliability of DOD’s data. Thus, we are unable to comment on the extent to which the reported performance information is accurate. DOD’s performance measures do not adequately indicate its progress toward achieving the outcome of ensuring that U.S. military forces are adequate in number, well qualified, and highly motivated. Therefore, we cannot judge the level of progress DOD has made in this area. DOD’s performance goal for this outcome is to recruit, retain, and develop personnel to maintain a highly skilled and motivated force capable of meeting tomorrow’s challenges. DOD’s performance measures still do not fully measure how well DOD has progressed in developing military personnel or the extent to which U.S. military forces are highly motivated. Although DOD’s report identifies specific goals for recruiting and retention, the Department does not include human capital goals and measures aimed specifically at tracking the motivation or development of its personnel. The level of progress toward meeting specific targets in the areas of enlisted recruiting and retention is mixed. The Air Force failed to meet its targets for first- or second-term retention, and the Navy did not meet its target for first-term retention. While most reserve components met or came in under their targets for enlisted attrition, the Army Reserve did not stay within its attrition target. On the positive side, the services met or exceeded their targets for enlisted recruiting and recruit quality. However, DOD’s report showed that the target for active enlisted recruiting was revised downward, enabling DOD to meet a goal it might otherwise have been unable to achieve. If such adjustments become commonplace, the same kind of force shaping problems that resulted from the intentional restriction of new accessions during the 1990s drawdown could result. 
Still other targets, such as for enlisted retention, are set at such aggregate levels that they could mask variations in retention by occupational area and skill levels, which would limit achieving the outcome of ensuring that U.S. military forces are adequate in number, well qualified, and highly motivated. As such, the enlisted retention goal provides only a partial measure of the military’s ability to retain adequate numbers of personnel. DOD’s performance report realistically identified the likelihood of continued challenges in recruiting for the military services and in retention for the Navy and the Air Force. But it did not devote significant attention to identifying specific reasons why DOD missed certain targets. Likewise, with the exception of the enlisted recruiting area, the report did not identify specific planned actions that DOD or the services will take to assist them in meeting future performance targets. For enlisted recruiting, however, the services identified several actions to help them cope with this challenge. For example, the Army and the Navy have increased funding for recruiting and plan to offer enlistment bonuses of up to $20,000. They also plan to continue allowing recruits to choose a combination of college fund and enlistment bonuses. The Army plans to experiment with innovative ways to expand the market for new recruits through programs like College First and GED Plus. And, the Air Force has instituted a college loan repayment program, increased enlistment bonuses to $12,000, and added more recruiters. With regard to retention, the Department’s performance report discusses generally the difficulties of the current retention environment and the fiscal year 2000 enlisted retention challenges. However, the report contains little clear articulation of specific actions or strategies being taken to improve future retention. For example, the report noted that the Navy has established a Center for Career Development that is chartered to focus on retention, providing the fleet the necessary tools to retain Navy personnel. However, the performance report does not elaborate on what those tools are or how they are being enhanced. Similarly, the Air Force indicated that it held two retention summits in fiscal year 2000 and that initiatives resulting from those summits will facilitate achievement of fiscal year 2001 retention targets. However, the report does not cite specific initiatives that would be taken or when they would be put into place. DOD expects that fiscal year 2001 will continue to present retention challenges for the services’ reserve components. The report, however, did not identify any specific actions or initiatives that would be taken to help address the challenge. Finally, for each of its performance measures, DOD’s report describes the data flow used to produce DOD’s assessment. The procedures used to collect, verify, and validate the data cited in the report provide reasonable assurance that the information is accurate and reliable. The level of progress that DOD has made toward the outcome of maintaining combat readiness at desired levels is unclear. DOD’s performance goals for this outcome are to maintain trained and ready forces and have strategic mobility. Although DOD has met some performance measure targets for both goals, other targets are incomplete, have been lowered, or have not been met, thus making an accurate assessment of progress difficult. 
For example, DOD reported meeting its force-level targets for the performance goal of maintaining trained and ready forces. However, the targets do not provide a complete picture of the forces needed to respond to a full spectrum of crises, to include fighting and winning two major theater wars nearly simultaneously. DOD’s metric includes only combat forces for each service, and not the necessary support forces. In the Army’s case, this means that DOD’s metric captures only 239,000 of the 725,000 forces the Army projects it would deploy to two wars. The targets also do not capture other important attributes beyond the size of the force, such as the extent to which DOD has made the best possible use of its available resources. For example, DOD’s plan does not set results-oriented goals for integrating the capabilities of the active, National Guard, and Reserve forces—even though each of these components is essential for mission effectiveness. As another example, DOD still has not been able to achieve its tank-mile training target of 800 miles of training per tank, conducted at various home stations, in Kuwait, and in Bosnia. Although DOD came closer to meeting the target in fiscal year 2000 than it did in fiscal year 1999—101 (17 percent) more tank miles—it still fell short by nearly 100 training miles per tank. DOD reported that it failed to meet the targets because units were not available for training, units used training simulators instead of actual training, and resources were diverted from field exercises to other high priority needs such as upgrades and maintenance of key training ranges. While our recent work shows this to be true, we reported that the movement of training funds for other purposes had not resulted in the delay or cancellation of planned training events in recent years. Further, data are not as reliable as they could be. DOD and the Army define the 800 tank-mile measure differently. DOD’s definition includes tank-training miles conducted in Kuwait and Bosnia, while the Army’s home station training measure excludes those miles. Using the Army’s home station training measure, the Army conducted 655 miles of training in fiscal year 2000, which is 145 miles or 18 percent short of its budgeted home station training goal. Figure 1 compares budgeted and actual Army home station tank training miles from fiscal year 1997 to fiscal year 2000. For strategic mobility, DOD reported that it met targets for two of three underlying measures: airlift capacity, and land- and sea-based prepositioning. However, in the area of airlift capacity, DOD revised the performance targets downward from those that had been set in prior performance plans and last year’s performance report. DOD reported that it revised the new targets to reflect updates to the planning factors for C-5 aircraft wartime performance. While it is appropriate for DOD to revise targets, as necessary, we reported that the new targets are significantly less than goals established in a 1995 Mobility Requirements Study Bottom-Up Review Update and even lower than a newly established total airlift capacity requirement of 54.5 million-ton miles per day established in DOD’s Mobility Requirements Study 2005, issued in January 2001. DOD’s performance report contains targets of a total airlift capacity of 45.4 million-ton miles per day for military aircraft and the Civil Reserve Air Fleet, with 24.9 million-ton miles per day coming from military aircraft. 
By comparison, DOD’s airlift capacity requirements are about 50 million-ton miles per day for total airlift capacity, with nearly 30 million-ton miles per day coming from the military. DOD’s performance report does not explain how these new targets were set or how they differed from prior years’ targets. It is also unclear whether or how DOD intends to meet the higher requirement of 54.5 million-ton miles per day. Because DOD reported that it had met its force-level targets, it plans no significant changes or strategies in force structure for fiscal year 2001. However, we believe that force-level targets could be more complete and meaningful if they included associated support forces with existing combat unit force levels. For example, the lack of any target setting for Army support forces masks the Army’s historic problem in fully resourcing its support force requirements, as well as more recent steps the Army has taken to reduce its shortfall level. With respect to tank training strategies, in response to our recent recommendation, DOD agreed to develop consistent tank training performance targets and reports to provide the Congress with a clearer understanding of tank training. Also, DOD has initiated a strategy to more clearly portray the number of tank training miles driven, and the Department is moving toward becoming more consistent with the Army’s 800-tank mile measure. However, as stated above, DOD continues to include tank-training miles conducted in Kuwait in its definition of the measure, while the Army excludes those miles. DOD reports that the problems encountered in meeting fiscal year 2000 tank training objectives are not, for the most part, expected to recur in fiscal year 2001. However, the problems DOD describes are not unique to fiscal year 2000. Army units are now in their sixth year of deployments to the Balkans, which, as DOD stated, affects its training availability. Further, in at least 6 of the past 8 fiscal years (1993 through 2000), DOD has moved funds from division training for other purposes. For the most recent of those years—the 4-year period from fiscal years 1997 through 2000—DOD moved a total of almost $1 billion of the funds the Congress had provided for training. DOD reports that an Army management initiative implemented in fiscal year 2001 will limit the reallocation of funds. However, at the time of our work, it was too early in the fiscal year to assess the initiative’s success. Further, DOD has identified strategies for strategic airlift improvement, such as including a C-17 aircraft procurement program to provide additional airlift capacity and upgrading of C-5 aircraft components. We recently reported that the C-5 upgrades, however, were fiscal year 2000 proposals that are waiting to be funded in the 2001-2012 timeframe. Thus, in the near term, this strategy would not likely result in significant increases in capacity. For each of its performance measures, DOD’s fiscal year 2000 performance report discussed the source and review process for the performance information. With one exception involving DOD’s En Route System of 13 overseas airfields, DOD’s data appear to be reasonably accurate. The En Route System is a critical part of DOD’s ability to quickly move the large amounts of personnel and equipment needed to win two nearly simultaneous major theater wars, as required by the National Military Strategy. 
However, DOD's performance report excludes data on En Route System limitations from the measures it uses to assess performance in strategic mobility, resulting in an incomplete picture of its capabilities. Rapid mobilization of U.S. forces for major theater wars requires a global system of integrated airlift and sealift resources, as well as equipment already stored overseas. The airlift resources include contracted civilian and military cargo aircraft and the 13 En Route System airfields in Europe and the Pacific where these aircraft can land and be serviced on their way to, or while in, the expected war zones in the Middle East and Korea. We learned during a recent review of the En Route System that DOD includes measures of its performance in meeting goals for aircraft, sealift, and prepositioned equipment capacities in its measures of strategic mobility capability. However, it does not include data on shortfalls in En Route System capacity, which are a major limiting factor on airlift capacity and overall performance in strategic mobility. Officials from the Office of the Secretary of Defense told us that they do not include data on En Route System shortfalls because airfield capacity has not been considered a primary criterion for measuring performance in strategic mobility. However, DOD has reported that the chief limiting factor on deployment operations is not usually the number of available aircraft but the capability of en route or destination infrastructure to handle the ground operations needed by the aircraft. In a recently issued report, we recommended that DOD begin to include information on En Route System limitations and their effects on strategic mobility in its performance reports.

DOD's progress toward achieving the outcome of ensuring that infrastructure and operating procedures are more efficient and cost-effective remains unclear. The performance goals for this outcome are to streamline infrastructure through business practice reform and to improve the acquisition process. DOD reported that it met many of its performance targets, such as disposing of property, reducing logistics response time, and streamlining the acquisition workforce. However, as we reported last year, the targets did not always hold up to scrutiny, and some targets that DOD reported as met had been lowered or were not met. For example, while DOD has reported meeting its targets for public-private competitions, we have found that delays in initiating and completing planned studies could reduce the savings expected in the near term. Additionally, changes have been made in overall study goals, creating some uncertainty about future program direction. For example, the Department recently reduced its plan to study 203,000 positions under Office of Management and Budget (OMB) Circular A-76 to about 160,000 positions while supplementing it with a plan to study 120,000 positions under a broader approach known as strategic sourcing. Similarly, DOD reported that it had met its 99-month target cycle time for average major defense acquisition programs. However, compared to fiscal year 1999 results, the average cycle time actually increased by 2 months. We have reported numerous examples of questionable defense program schedules, such as the delays in the Army's Comanche helicopter program.
In this regard, our work has shown that DOD could benefit from the application of commercial best practices to ensure that (1) key technologies are mature before they are included in weapon system development programs, (2) limits are set for program development cycle times, and (3) decisions are made using a knowledge-based approach. As another example, DOD reported that it did not meet its cost growth measure. On average, reported costs in major defense acquisition programs rose by 2.9 percent during fiscal year 2000, compared to the goal of 1.0 percent. DOD explains the causes of the excess cost growth but not its strategies for solving the problem. We have reported pervasive problems regarding, among other things, unrealistic cost, schedule, and performance estimates; unreliable data on actual costs; and questionable program affordability. Also, we have recommended that DOD leadership improve the acquisition of weapon systems by using more realistic assumptions in developing system cost, schedule, and performance requirements and by approving only those programs that can be fully executed within reasonable expectations of future funding.

DOD's fiscal year 2000 performance report sufficiently explains why a number of performance measures were not met but does not provide clear plans, actions, and time frames for achieving them. For example, DOD reported that no systemic problems would hinder it from meeting working capital fund and defense transportation documentation targets in the future. However, DOD believes it may have difficulty meeting supply inventory goals due to continuing concerns about the impact of inventory reductions on readiness. In the report, DOD acknowledges that it may have problems meeting some targets because it must balance its infrastructure reduction initiatives with efforts to enhance quality of life, improve recruiting and retention, and transform the military to meet the challenges of the 21st century.

For each of its performance measures, DOD's report discusses the source and review process for the performance information. The data appear to be credible, with some exceptions. For example, we previously reported that unreliable cost and budget information related to DOD's measure for the percentage of the budget spent on infrastructure negatively affects the Department's ability to effectively measure performance and reduce costs. We also reported that significant problems exist with the timeliness and accuracy of the underlying data for the measure related to inventory visibility and accessibility.

We could not assess DOD's progress in achieving performance goals or measures for the counternarcotics outcome because DOD's fiscal year 2000 performance report did not include performance goals or measures for it. DOD does, however, assist U.S. and foreign law enforcement agencies in their efforts to reduce the availability and use of illegal drugs. It has lead responsibility for aerial and maritime detection and monitoring of illegal drug shipments to the United States. It also provides assistance and training to foreign governments to combat drug-trafficking activities. DOD's 2000 performance report recognized counternarcotics as a crosscutting function and outlined DOD's responsibilities in this area. In a December 1999 report on DOD's drug control program, we recommended that DOD develop performance measures to determine the effectiveness of its counterdrug activities and make better use of limited resources.
In response to our recommendation, DOD developed a set of "performance results" that are compiled on a quarterly basis. These performance results are intended to (1) provide a useful picture of the results of individual projects, (2) facilitate the identification of projects that are not demonstrating adequate results, (3) allow an overall assessment of the results of DOD's counterdrug program, and (4) describe those DOD accomplishments that directly support the performance goals delineated in the National Drug Control Strategy's Performance Measures of Effectiveness Plan. DOD is currently refining the performance results in an effort to improve its ability to measure the success or failure of its counterdrug activities.

We had no basis to assess DOD's progress in achieving the outcome of making fewer erroneous payments to contractors because DOD had no performance goals directly related to the outcome. However, this issue represents a significant problem for DOD. Under its broader goal of improving the efficiency of its acquisition processes, DOD has developed performance measures that address related contracting issues. Specifically, the 2000 performance report contains goals and measures for increasing the use of paperless transactions. However, these measures do not directly address the outcome of fewer erroneous payments. While they do reflect quantifiable measures of the levels of usage for these contracting processes, they may not directly address whether the number of erroneous payments has been reduced. On a related issue, we have reported over the last several years that DOD annually overpaid its contractors by hundreds of millions of dollars, constituting a significant problem. In February of this year, we reported that DOD contractors repaid $901 million in overpayments in fiscal year 2000 to a major DOD contract payment center. This represents a substantial amount of cash in the hands of contractors beyond what is intended to finance and pay for the goods and services DOD bought. By comparison, contractors returned $351 million in overpayments in fiscal year 1999 to this DOD payment center. Contractor data indicate that 77 percent of that amount resulted from contract administration actions (see fig. 2). However, DOD does not review available data on why this major category of overpayments occurs. Such a review is necessary if excess payments are to be reduced. Therefore, in our February 2001 report, we recommended that DOD routinely analyze data on the reasons for excess payments, investigate problem areas, and implement necessary actions to reduce excess payments. In responding to our recommendation, DOD stated that it would conduct an initial review of excess payment data and determine whether routine receipt and analysis of these data would be meaningful.

In comparing DOD's fiscal year 2000 performance report with its prior year report, we noted that DOD has made several improvements. For example, it added more discussion of the importance of human resources in achieving its performance objectives; summarized how its performance metrics respond to each of the eight major management challenges it faces; and included a more in-depth explanation of each crosscutting activity it is involved in, rather than just a listing of the responsible agencies. The eight major management challenges facing the Department are:

- Developing strategic plans that lead to desired mission outcomes.
- Hiring, supporting, and retaining military and civilian personnel with the skills to meet mission needs.
- Establishing financial management operations that provide reliable information and foster accountability.
- Effectively managing information technology investments.
- Reforming acquisition processes while meeting military needs.
- Improving processes and controls to reduce contract risk.
- Creating an efficient and responsive support infrastructure.
- Providing logistics support that is economical and responsive.

In terms of data verification, presentation, and content, DOD's fiscal year 2000 performance report has an effective format that is understandable to a nondefense reader. DOD also clarified some of its terminology. For example, it changed the term "performance goal" to "performance target" to remove confusion about what the annual performance goals are. The fiscal year 2000 report, however, did not address several weaknesses that we identified in the fiscal year 1999 report. For example, DOD reported nine measures and indicators for making infrastructure and operating procedures more efficient and cost-effective. We believe that these measures are insufficient to assess whether DOD is actually making progress toward streamlining its infrastructure. Some measures, such as the number of positions subject to OMB Circular A-76 or strategic sourcing reviews, generally reflect status information rather than the impact that programs are having on the efficiency and cost-effectiveness of operations. Since DOD has not changed or supplemented these measures, we continue to believe that DOD will have problems determining how effective its infrastructure reduction efforts have been. Also, we have testified that DOD has undergone a significant downsizing of its civilian workforce. In part due to the staffing reductions already made, imbalances appear to be developing in the age distribution of DOD's civilian staff. The average age of this staff has been increasing, while the proportion of younger staff, who are the pipeline of future agency talent and leadership, has been dropping.

As another example, DOD's performance report has no outcome-oriented measures for working capital fund activities. The idea behind working capital funds is for activities to break even over time. Thus, if an activity has a positive net operating result one year, it will budget for a negative net operating result the next year. The measure DOD currently uses to assess its working capital fund operations is net operating results. This particular measure, however, is of little value for determining the outputs achieved for the goods and services provided through the working capital fund activities. We believe that additional measures are needed to help determine operational effectiveness, particularly because these activities report about $75 billion in annual revenues associated with their operations. For example, a good measure of the effectiveness of the supply management activity group could be the percentage of aircraft that are not mission capable due to supply problems.

GAO has identified two governmentwide high-risk areas: strategic human capital management and information security. Regarding strategic human capital management, we found that DOD's performance report did not explain DOD's progress in resolving its human capital challenges. However, the report included a description of the importance of human resources, such as the importance of total force integration, quality of life, and personnel issues.
With respect to information security, we found that DOD's performance report did not explain its progress in resolving its information security challenges. However, it states that specific goals, objectives, and strategies for improving DOD's management of information can be found in the Information Management Strategic Plan (http://www.c3i.osd.mil) discussed in appendix J of DOD's 2001 Annual Report to the President and the Congress.

In addition, GAO has identified eight major management challenges facing DOD. Some of these challenges are crosscutting issues. For example, improving DOD's financial management operations so that it can produce useful, reliable, and timely cost information is essential if DOD is to effectively measure its progress toward achieving outcomes and goals across virtually the entire spectrum of DOD's business operations. Although DOD's performance report discussed the agency's progress in resolving many of its challenges, it did not discuss the agency's progress in resolving the following challenge: "Developing strategic plans that lead to desired mission outcomes." As we reported in March 2001, sound strategic planning is needed to guide improvements to the Department's operations. Without it, decisionmakers and stakeholders may not have the information they need to ensure that DOD has strategies that are well thought out to resolve ongoing problems, achieve its goals and objectives, and become more results oriented. While DOD has improved its strategic planning process, its current strategic plan is not tied to desired mission outcomes. As noted in several of the other key challenges, sound plans linked to DOD's overall strategic goals are critical to achieving needed reforms. Inefficiencies in the planning process have led to difficulties in assessing performance in areas such as combat readiness, support infrastructure reduction, force structure needs, and matching resources to program spending plans. Appendix I provides detailed information on how well DOD addressed these challenges and high-risk areas as identified by both GAO and the DOD Inspector General.

Shortfalls in DOD's current strategies and measures for several outcomes have led to difficulties in assessing performance in areas such as combat readiness, support infrastructure reduction, force structure needs, and the matching of resources to program spending plans. DOD's fiscal year 2002 performance plan, which has yet to be issued, provides DOD with an opportunity to address these shortfalls. DOD is also in the process of updating its strategic plan through its Quadrennial Defense Review, which sets forth its mission, vision, and strategic goals. The review provides DOD with another opportunity to include qualitative and quantitative information that could contribute to a clearer picture of DOD's performance. On the basis of last year's analysis of DOD's fiscal year 1999 performance report and fiscal year 2001 performance plan, we recommended that the Department include more qualitative and quantitative goals and measures in its annual performance plan and report to gauge progress toward achieving mission outcomes. DOD has not yet fully implemented this recommendation. We continue to believe that the Secretary of Defense should adopt this recommendation as the Department updates its strategic plan through the Quadrennial Defense Review and prepares its next annual performance plan.
By doing so, DOD can ensure that it has strategies that are tied to desired mission outcomes and are well thought out for resolving ongoing problems, achieving its goals and objectives, and becoming more cost and results oriented.

As agreed, our evaluation was generally based on the requirements of GPRA; the Reports Consolidation Act of 2000; guidance to agencies from OMB for developing performance plans and reports (OMB Circular A-11, Part 2); previous reports and evaluations by us and others; our knowledge of DOD's operations and programs; our identification of best practices concerning performance planning and reporting; and our observations on DOD's other GPRA-related efforts. We also discussed our review with agency officials in DOD's Office of Program Analysis and Evaluation and with the DOD Office of Inspector General. The agency outcomes that were used as the basis for our review were identified by the Ranking Minority Member, Senate Governmental Affairs Committee, as important mission areas for the agency and do not reflect the outcomes for all of DOD's programs or activities. Both GAO, in our January 2001 performance and accountability series and high-risk update, and DOD's Inspector General, in December 2000, identified the major management challenges confronting DOD, including the governmentwide high-risk areas of strategic human capital management and information security. We did not independently verify the information contained in the performance report, although we did draw from other GAO work in assessing the validity, reliability, and timeliness of DOD's performance data. We conducted our review from April 2001 through June 2001 in accordance with generally accepted government auditing standards.

In a letter dated June 14, 2001, the DOD Director for Program Analysis and Evaluation provided written comments on a draft of this report. DOD indicated that its annual GPRA report provides the Congress and the public with an executive-level summary of key performance results over the past budget year. DOD stated that, together, the metrics presented in its report demonstrate how DOD's existing management practices enable it to recruit, train, equip, and field the most effective military force in the world. DOD said that we overlooked this fact in our draft report. DOD also pointed out that future GPRA submissions will refine its performance metrics to reflect the priorities of the new defense strategy, but that it sees little value in adding the large number of new measures that auditors and others have proposed over the past 18 months. DOD reiterated that GPRA is not the sole venue for reporting performance results—it submits more than 900 reports annually to the Congress alone, many of which address issues highlighted in our draft report. DOD stressed that a key goal of the GPRA legislation is to increase public confidence in government and that, although it does not want to mask deficiencies in how DOD manages performance, it does not want to emphasize shortfalls at the expense of true achievements. DOD stated that it would be helpful if we could provide a clearer definition of the standards of sufficiency that will be applied in evaluating future submissions.

Notwithstanding DOD's statement that the metrics presented in its performance report can enable it to have an effective military force, we continue to believe, for the reasons cited in our report, that DOD's progress in achieving the selected outcomes is still unclear.
As we recently recognized in our report on the major performance and accountability challenges facing DOD, our nation begins the new millennium as the world's sole superpower with military forces second to none, as evidenced by experiences in the Persian Gulf, Bosnia, and Kosovo. We also stated that the same level of excellence is not evident in many of the business processes that are critical to achieving DOD's mission in a reasonably economical, efficient, and effective manner. A major part of DOD's performance report focuses on outcomes related to these processes, the results of which are critical to DOD's ability to maintain its military capability. As we reported in last year's assessment, we agree that the answer is not simply to measure more things in more detail. However, in many instances, for the outcomes identified by the Committee, DOD's report does not discuss strategies for achieving unmet goals and does not fully assess its performance. We believe that the best test of reasonableness or sufficiency for evaluating DOD's future progress resides in the requirements of GPRA itself, which requires, among other things, that agencies explain, where a performance goal has not been met, why it was not met. The requirement to submit a fiscal year 2002 performance plan, which DOD has yet to issue, also provides DOD with the opportunity to address these shortfalls. In that regard, we have issued guidance that outlines approaches agencies should use in developing performance plans. These actions would place DOD in a position of continuously striving for improvement. Appendix II contains DOD's comments.

As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies to appropriate congressional committees; the Secretary of Defense; and the Director, Office of Management and Budget. Copies will also be made available to others on request. If you or your staff have any questions, please call me at (202) 512-4300. Key contributors to this report were Charles I. Patton, Jr.; Kenneth R. Knouse, Jr.; Elizabeth G. Mead; Cary B. Russell; and Brian G. Hackett.

The following table identifies the major management challenges confronting the Department of Defense (DOD), which include the governmentwide high-risk areas of strategic human capital management and information security. The first column of the table lists the management challenges that we and/or DOD's Inspector General (IG) have identified. The second column discusses the progress DOD made in resolving its challenges, as discussed in its fiscal year 2000 performance report, along with our assessment. We found that DOD's performance report discussed the agency's progress in resolving many of its challenges but that it did not discuss the agency's progress in resolving the following challenges: Strategic Planning, Other Security Concerns, and Health Care. | This report reviews the Department of Defense's (DOD) fiscal year 2000 performance report required by the Government Performance and Results Act of 1993 and assesses the Department's progress in achieving selected outcomes that were identified as important mission areas for DOD.
GAO found that shortfalls in DOD's current strategies and measures for several outcomes have led to difficulties in assessing performance in areas such as combat readiness, support infrastructure reduction, force structure needs, and the matching of resources to program spending plans. DOD's fiscal year 2002 performance plan, which has yet to be issued, provides DOD with the opportunity to address these shortfalls. On the basis of last year's analysis of DOD's fiscal year 1999 performance report and fiscal year 2001 performance plan, GAO recommended that the Department include more qualitative and quantitative goals and measures in its annual performance plan and report to gauge progress toward achieving mission outcomes. DOD has not as yet fully implemented this recommendation. GAO continues to believe that the Secretary of Defense should adopt this recommendation as it updates its strategic plan and prepares its next annual performance plan. By doing so, DOD can ensure that its strategies are tied to desired mission outcomes and are well thought-out for resolving ongoing problems, achieving its goals and objectives, and becoming more cost and results oriented. |
RS21772 -- AGOA III: Amendment to the African Growth and Opportunity Act

Updated January 19, 2005

After two decades of economic stagnation and decline, some African countries began to show signs of renewed economic growth in the early 1990s. This growth was generally due to better global economic conditions and improved economic management. However, growth in Africa was also threatened by new factors, such as HIV/AIDS and high foreign debt levels. The African Growth and Opportunity Act (AGOA) (P.L. 106-200, Title I) was enacted to encourage trade as a way to further economic growth in Sub-Saharan Africa and to help integrate the region into the world economy. AGOA provided trade preferences and other benefits to countries that were making progress in economic, legal, and human rights reforms. Currently, 37 of the 48 Sub-Saharan African countries are eligible for benefits under AGOA.

AGOA expands duty-free and quota-free access to the United States as provided under the U.S. Generalized System of Preferences (GSP). (1) GSP grants preferential access into the United States for approximately 4,600 products. AGOA extends preferential access to about 2,000 additional products by removing certain product eligibility restrictions of GSP and extends the expiration date of the preferences for beneficiary African countries from 2006 to 2015. Other than articles expressly stipulated, only articles that are determined by the United States as not import-sensitive (in the context of imports from AGOA beneficiaries) are eligible for duty-free access under AGOA.

Beyond trade preferences, AGOA directs the President to provide technical assistance and trade capacity support to AGOA beneficiary countries. Various U.S. government agencies carry out trade-related technical assistance in Sub-Saharan Africa. The U.S. Agency for International Development funds three regional trade hubs, located in Ghana, Kenya, and Botswana, that provide trade technical assistance. Such assistance includes support for improving African governments' trade policy and business development strategies; capacity to participate in trade agreement negotiations; compliance with WTO policies and with U.S. phytosanitary regulations; and strategies for further benefiting from AGOA.

AGOA also provides for duty- and quota-free entry into the United States of certain apparel articles, a benefit not extended to other GSP countries. This has stimulated job growth and investment in certain countries, such as Lesotho and Kenya, and has the potential to similarly boost the economies of other countries, such as Namibia and Ghana. In order to qualify for this provision of AGOA, however, beneficiary countries must develop a U.S.-approved visa system to prevent illegal transshipments. Of the 37 AGOA-eligible countries, 24 are qualified for duty-free apparel trade (wearing-apparel qualified). These countries may also benefit from Lesser Developed Country (LDC) status. Countries that have LDC status for the purposes of AGOA and are wearing-apparel qualified may obtain fabric and yarn for apparel production from outside the AGOA region. As long as the apparel is assembled within the LDC country, it may be exported duty-free to the United States. Some LDC AGOA beneficiaries have used this provision to jump-start their apparel industries. This provision was due to expire on September 30, 2004. The AGOA Acceleration Act extends the LDC provision to September 30, 2007, with a reduction in the cap on the allowable percentage of total U.S. apparel imports beginning in October 2006.
Countries that are not designated as LDCs but are wearing-apparel qualified must use only fabric and yarn from AGOA-eligible countries or from the United States. The only wearing-apparel-qualified non-LDC country is South Africa, although Mauritius qualifies for LDC status under AGOA for only one year, ending September 30, 2005, per the Miscellaneous Trade and Technical Corrections Act of 2003 (P.L. 108-429).

AGOA was first amended in the Trade Act of 2002 (P.L. 107-210), which doubled a pre-existing cap set on allowable duty-free apparel imports. The cap was doubled only for apparel imports that meet non-LDC rules of origin; apparel imports produced with foreign fabric were still subject to the original cap. The amendment also clarified certain apparel rules of origin, granted LDC status to Namibia and Botswana for the purposes of AGOA, and provided that U.S. workers displaced by production shifts due to AGOA could be eligible for trade adjustment assistance.

U.S. duty-free imports under AGOA (excluding GSP) increased dramatically in 2003 -- by about 58%, from $8.36 billion in 2002 to $13.19 billion in 2003 -- after a more modest increase of about 10% in 2002. (2) However, 70% of these imports consisted of energy-related products from Nigeria. Excluding Nigeria, U.S. imports under AGOA increased 30% in 2003, to $3.84 billion, up from $2.95 billion in 2002 (a short arithmetic sketch of these figures appears below). The increase in AGOA imports since the law's enactment is impressive, but it must be viewed in the broader context of Africa's declining share of U.S. trade over many years. AGOA has done little to slow or reverse this trend -- the growth in AGOA trade can be explained by a greater number of already-traded goods receiving duty-free treatment under AGOA. One industry has grown substantially under AGOA: the textile and apparel industry. Much of the growth in textile and apparel imports has come from the newly emerging apparel industries in Lesotho, Kenya, and Swaziland.

Apart from the apparent success of the emergent apparel industries in some African countries, the potential benefits from AGOA have been slowly realized. There has been little export diversification, with the exception of a few countries whose governments have actively promoted diversification. Agricultural products are a promising area for African export growth, but African producers have faced difficulties in meeting U.S. regulatory and market standards. Many countries have been slow to utilize AGOA at all. Others, such as Mali, Rwanda, and Senegal, have implemented AGOA-related projects but have made insignificant gains thus far. In addition to lack of market access, there are substantial obstacles to increased export growth in Africa. Key impediments include insufficient domestic markets, lack of investment capital, and poor transportation and power infrastructures. Other significant challenges include low levels of health and education, protectionist trade policies in Africa, and the high cost of doing business in Africa due to corruption and inefficient government regulation. Furthermore, the apparel industry in Africa now faces a challenge from the dismantling of the Multifibre Arrangement quota regime, which ended as of January 1, 2005. As a result, Africa must now compete more directly with Asian apparel producers for the U.S. market. AGOA beneficiaries retain their duty-free advantage, but they have lost their more significant quota-free advantage. Apparel producers have reportedly already left Lesotho, with a loss of 7,000 jobs. (3) This makes export diversification in Africa all the more vital.
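As a quick consistency check (a back-of-the-envelope sketch using only the dollar figures cited in this report), Nigeria's share of the 2003 import total and the reported growth rates can be recovered as follows, with amounts in billions of dollars:

\[
\begin{aligned}
\text{Imports attributable to Nigeria, 2003} &= 13.19 - 3.84 = 9.35, \qquad \frac{9.35}{13.19} \approx 71\%,\\
\text{Total AGOA import growth, 2002 to 2003} &= \frac{13.19 - 8.36}{8.36} \approx 58\%,\\
\text{Growth excluding Nigeria} &= \frac{3.84 - 2.95}{2.95} \approx 30\%.
\end{aligned}
\]

The roughly 71% share is consistent with the report's statement that about 70% of 2003 AGOA imports consisted of energy-related products from Nigeria.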
AGOA III extends the preference program to 2015 from its previous 2008 deadline. AGOA III supporters claimed that many AGOA beneficiaries had only recently begun to realize gains as a result of AGOA, and that extending AGOA benefits would improve the stability of the investment climate in Africa. AGOA III also provides for apparel rules-of-origin and product eligibility benefits; it extends the third-country fabric rule for LDCs, and it encourages foreign investment and the development of agriculture and physical infrastructure.

Extension of Lesser Developed Country Provision. One of the more controversial aspects of AGOA III was the extension of the LDC provision. If the LDC provision had not been extended, LDCs would no longer have had duty-free access to the United States for apparel made from third-country fabric after September 30, 2004. Supporters of the extension claimed that if the LDC provision were not extended, the apparel industry might have contracted significantly, causing a loss of many of the gains from AGOA as apparel assembly plants were shut down. This might have occurred because all AGOA beneficiaries would need to source their fabric and yarn from within the AGOA region or from the United States in order to get duty-free access under AGOA, and the regional supply of fabric and yarn would likely be insufficient to meet the demand. (4) Sourcing materials from the United States would not be a viable option because it would entail greater costs. Some analysts argued for the LDC provision to be extended to allow more time to develop a textile milling industry to support the needs of the apparel industry in Africa, and to prevent the collapse of the emerging apparel industry.

Opponents of extending the LDC provision claimed that the expiration of the provision would provide an incentive for further textile milling investment in Africa. They argued that the LDC provision has slowed fabric and yarn production investment in Africa, because these materials could be imported cheaply from Asia for use in AGOA-eligible apparel with no need for costly investments. They feared that an extension of the LDC provision would provide a disincentive to textile milling investment in Africa, because the deadline would lose its credibility as investors anticipated further extensions. Supporters of the extension argued, however, that investment in the textile industry would continue because of its inherent profitability, despite the availability of third-country fabric. Others worried that looser rules of origin under the LDC provision might allow companies to use Africa as a transshipment point between Asia and the United States.

The outlook for the development of a textile industry in Sub-Saharan Africa is clouded by the phase-out of the Multifibre Arrangement (MFA) quota regime in January 2005. (5) Now that quotas have been eliminated, Africa will be competing more directly with Asia for the U.S. apparel and textile market, though African producers remain eligible for tariff preferences. Apparel plants are particularly sensitive to price conditions, as they do not require large capital investments and can easily and rapidly be shifted to areas outside Africa. Textile plants are more capital-intensive and more costly to move, and are therefore likely to remain in Africa in the long term. Thus, it is argued that the promotion of vertical integration among apparel, textile, and cotton producers is necessary to keep apparel plants in Africa, along with the jobs they provide.
Vertical integration is a challenging prospect regardless of the LDC provision extension. Some investment in textile milling has occurred in Africa, but investors have found it difficult to consistently source high-quality cotton in large volumes. While there is agreement that vertical integration is the key to a thriving African textile and apparel industry, the question is how to facilitate this process. (6)

Agricultural Products. The growth of agricultural trade holds potential for improved economic growth in Africa. Most Africans rely on agricultural production for their income. It is estimated that 62% of the labor force in Africa works in agriculture, and in the poorer countries, that portion is as high as 92%. (7) By exporting to the U.S. market, African agricultural producers could receive higher prices for their goods. In order for this to occur, the United States may need to further open its market to African agricultural products and provide technical assistance to help African agricultural producers meet the high standards of the U.S. market.

AGOA III seeks to improve African agricultural market access to the United States by providing assistance to African countries to enable them to meet U.S. technical agricultural standards. African agricultural producers have previously faced difficulties in meeting these standards. The AGOA Acceleration Act calls for the placement of 20 full-time personnel in at least 10 countries in Africa to provide this assistance. Some observers are skeptical about the effectiveness of technical assistance without increased market access. Others are concerned that U.S. technical assistance is hindered by laws restricting agricultural technical assistance for products that would compete with U.S. farm products. However, technical assistance proponents point to low institutional capacity in Africa as the main obstacle to African export-led development. They feel that U.S.-provided technical assistance can be an important factor in improving Africa's agricultural development and export performance.

Table 1. Provisions from the AGOA Acceleration Act of 2004 | On July 13, 2004, the "AGOA Acceleration Act of 2004" was signed by the President and became P.L. 108-274. This legislation amends the African Growth and Opportunity Act (AGOA; P.L. 106-200, Title I), extending it to 2015. AGOA seeks to spur economic development and help integrate Africa into the world trading system by granting trade preferences and other benefits to Sub-Saharan African countries that meet certain criteria relating to market reform and human rights. Congress first amended AGOA in 2002 (P.L. 107-210) by increasing a cap on duty-free apparel imports and clarifying other provisions. The new AGOA amendment, commonly referred to as "AGOA III," extends the legislation beyond its current expiration date of 2008 and otherwise amends existing AGOA provisions. For further information on AGOA, see CRS Report RL31772, U.S. Trade and Investment Relationship with Sub-Saharan Africa: The African Growth and Opportunity Act and Beyond. This report will be updated as needed. |
Below: A running list of violent incidents involving Donald Trump supporters, protesters, members of the media, and campaign/security staff at Trump events. (Note: List does not include incidents that took place only between protesters and police outside Trump rallies.)
20. June 18, 2016, in Las Vegas, Nevada. A 19-year-old man was arrested after attempting to grab a police officer's gun; the man later said he planned to use the weapon to kill the candidate.
19. June 16, 2016 in Dallas, Texas. A photographer was bloodied when he was hit in the head by a thrown rock after a Trump rally; it's not clear whether the object was thrown by a Trump supporter or an anti-Trump protester.
18. June 2, 2016 in San Jose, California. Protesters attacked and threw eggs at Trump supporters after a rally.
17. May 28, 2016 in San Diego, California. Trump supporters pepper-sprayed protesters outside a rally.
16. April 28, 2016 in Costa Mesa, California. A Trump supporter was bloodied, presumably by anti-Trump protesters, during a chaotic scene outside one of the candidate's rallies.
15. April 26, 2016, in Anaheim, California. Pro- and anti-Trump protesters scuffled outside a city council meeting that was considering a resolution denouncing the candidate; individuals on both sides of the issue appear to have used pepper spray as a weapon during the confrontation.
14. April 23, 2016, in Bridgeport, Connecticut. A protester was pulled away in a chokehold by a police officer after reportedly trying to run back into a Trump rally he was being escorted out of.
A police officer puts a protester in a chokehold and drags him out of Donald Trump's Bridgeport rally. pic.twitter.com/gNDfCRMUnt — Kyle Constable (@KyleConstable) April 23, 2016
13. April 11, 2016, in Albany, New York. A protester at a rally was shoved in the face twice by a Trump supporter. Video:
Trump later compared a different protester's actions at the same event to "what's happening with ISIS."
12. March 19, 2016, in Tucson, Arizona. A protester at a Trump rally was sucker-punched and kicked by a 32-year-old man who was then arrested on an assault charge.
Went to the Trump rally just to see how crazy it would be........this is insane pic.twitter.com/QFwSwmNoI0 — Alex Satterly (@alex_satterly) March 19, 2016
11. March 19, 2016, in Tucson, Arizona. A Trump security official and Trump campaign manager Corey Lewandowski appeared to grab and pull a protester at the same Tucson rally.*
Here is Donald Trump's campaign manager in the Tucson crowd grabbing the collar of a protester. pic.twitter.com/JZ9RntWlHY — Jacqueline Alemany (@JaxAlemany) March 19, 2016
10. March 9, 2016, in Fayetteville, North Carolina. A black protester being escorted out of a Trump rally was sucker-punched by a white bystander:
Though the individual who was attacked was then taken to the ground by law enforcement officials, the man who threw the punch does not appear to have been detained.
9. March 8, 2016, in Jupiter, Florida. Politico reports that Trump campaign manager Corey Lewandowski "forcibly" grabbed a female reporter for the right-wing Breitbart news site, "nearly bringing her down to the ground," when she attempted to ask a question after a Trump press conference.
8. March 1, 2016, in Louisville, Kentucky. A black woman was surrounded and shoved by a number of individuals at a Trump rally.
7. Feb. 29, 2016, in Radford, Virginia. A photographer was slammed to the ground by a Secret Service agent after cursing toward the agent during a dispute over where the photographer was allowed to work from during a Trump rally; the Secret Service says the incident is being investigated.
Secret Service agent slams down photographer at Donald Trump rally https://t.co/asGaRewTOLhttps://t.co/OGViFbP8Sp — Capital Journal (@WSJPolitics) February 29, 2016
6. Dec. 14, 2015, in Las Vegas. Individuals at a Trump rally yelled "Sieg Heil" and "light the motherfucker on fire" toward a black protester who was being physically removed by security staffers. Video:
And about ten minutes into the Trump rally, this happens. pic.twitter.com/65pXHjsJ3x — McKay Coppins (@mckaycoppins) December 15, 2015
5. Dec. 11, 2015, in New York City. Protesters "affiliated with various Arab-American and Muslim-American groups," per the AP, were "forcibly ejected" from a fundraiser at which Trump was speaking. Video:
4. Dec. 3, 2015, in New York City. A security guard took a sign from and struck an immigration activist during a protest after a Trump event. Video:
3. Nov. 21, 2015, in Birmingham, Alabama. A black protester at a Trump rally was punched, kicked, and, according to the Washington Post, briefly choked. Video:
A black protester at Trump's rally today in Alabama was shoved, tackled, punched & kicked: https://t.co/Aq0wuaAtax pic.twitter.com/cTRDMtjuBl — Jeremy Diamond (@JDiamond1) November 21, 2015
Trump later defended the crowd's treatment of the protester, saying that "maybe he should have been roughed up because it was absolutely disgusting what he was doing."
2. Oct. 23, 2015, in Miami. A man at a Trump rally knocked down and kicked a Latino protester.
1. Oct. 14, 2015, in Richmond, Virginia. Individuals at a Trump rally shoved and took signs from a group of immigration activists. One spit in a protester's face:
Immigration activists disrupt Trump rally. He gamely continues, but crowd raged. One spit on protestors @wusa9 pic.twitter.com/82T2s845Eu — Garrett Haake (@GarrettHaake) October 14, 2015
Contact me on Twitter if I've missed something that belongs on this list. ||||| WARREN, Mich. (CBSNewYork/AP) — Donald Trump says a violent episode involving a protester at one of his rallies “was amazing to watch.”
The Republican presidential front-runner told a Warren, Michigan audience on Friday that he’s tired of political correctness when it comes to handling protesters. He was interrupted several times during his remarks by yelling protesters, as he often is at his events.
During one interruption, Trump said, “Get him out. Try not to hurt him. If you do I’ll defend you in court.”
“Are Trump rallies the most fun?” he then asked the crowd. “We’re having a good time.”
He then recalled an incident at a New Hampshire rally where a protester started “swinging and punching.” Trump said some people in the audience “took him out.”
“It was really amazing to watch,” he said.
Police are investigating at least two alleged assaults against protesters at a recent Kentucky rally. One, captured on video, involves a young African-American woman who was repeatedly shoved and called “scum.”
Trump was also slamming his opponents while campaigning near Detroit Friday morning, just hours after the most recent GOP debate, CBS2 reported.
“Little Marco, little Marco,” Trump said. “Lyin’ Ted Cruz, lyin’ Ted.”
Also Friday, Trump announced he would skip the Conservative Political Action Conference, or CPAC. His opponents immediately pounced.
“He really doesn’t belong at a conservative gathering,” U.S. Sen. Marco Rubio said. “Donald Trump is not a conservative.”
“Donald Trump started galloping to the center before the primary was over,” U.S. Sen. Ted Cruz said.
The bickering was much the same Thursday night in the debate, which often devolved into a yelling match. Throughout the night, Cruz and Rubio used attack after attack to try to take down Trump.
“He’s trying to con people into giving them their vote, just like he conned these people into giving them their money,” Rubio said.
But despite their efforts, Cruz and Rubio may have missed their chance.
“Last night, someone needed to land a knockout punch, and it didn’t even look like they came to the box,” Republican Political Strategist Rick Davis said.
No matter how bad his opponents say Trump will be as a nominee and a president, they still say they’ll support him if he wins.
On Friday, Rubio said the reason for that is simple.
“We as Republicans feel that Hillary Clinton would be a disaster to the country — that’s how bad she is,” Rubio said. “I would look at that as a reflection of how bad she is, not how good Donald Trump is.”
Even though Trump is the most likely GOP nominee, he does fare the worst against Clinton in a head-to-head matchup, polling at 42 percent against Clinton’s 45.4 percent. Rubio, Cruz and GOP candidate John Kasich were all projected to win over Clinton, according to a recent RealClearPolitics analysis.
Word surfaced earlier Friday of a plot to get U.S. House Speaker Paul Ryan (R-Wis.) into the presidential race.
||||| Jeers and violence erupted between Donald Trump supporters and protesters at the Republican frontrunner's rally in Fayetteville, N.C., on March 9. (Jenny Starrs/The Washington Post)
A Donald Trump supporter has been charged with assault after multiple videos showed him sucker-punching a protester at a campaign rally in Fayetteville, N.C.
The videos, which appeared on social media early Thursday and are shot from different perspectives, show an African American with long hair wearing a white T-shirt leaving Trump’s Wednesday-night rally as the audience boos. He is being led out by men in uniforms that read “Sheriff’s Office.” The man extends a middle finger to the audience on his way out.
Then, out of nowhere, the man is punched in the face by a ponytailed man in a cowboy hat, black vest, and pink shirt, who appears to be white, as the crowd begins to cheer. The protester stumbles away, and then is detained by a number of the men in uniforms.
“Chill, chill!” an onlooker says. “You don’t gotta grab him like that!”
Rakeem Jones, the man who was hit, said the punch came out of nowhere.
“Boom, he caught me,” Jones told The Washington Post in a telephone interview. “After I get it, before I could even gain my thoughts, I’m on the ground getting escorted out. Now I’m waking up this morning looking at the news and seeing me getting hit again.”
John McGraw, 78, was charged with assault and disorderly conduct in connection with the incident, Cumberland County Sheriff’s Office spokesman Sean Swain told The Post on Thursday.
McGraw is due in court in April, Swain said. It was not immediately clear if he already has an attorney.
John McGraw. (Cumberland County Sheriff’s Office)
“We did the mission last night, we’re doing the follow-up, we’ve got the guy in custody,” said Swain, who added: “People here in Cumberland County realize what this sheriff does, and they didn’t have a complaint about what happened last night.”
Democratic presidential front-runner Hillary Clinton addressed the incident during an interview with MSNBC’s Rachel Maddow.
“Count me among those who are truly distraught and even appalled by a lot of what I see going on, what I hear being said,” Clinton said. “You know, you don’t make America great by, you know, dumping on everything that made America great, like freedom of speech and assembly and, you know, the right of people to protest.”
She added: “As the campaign goes further, more and more Americans are going to be really disturbed by the kind of campaign he’s running.”
Jones said he and four friends — a “diverse” group that included a white woman, a Muslim, and a gay man — had gone to the rally as a “social experiment.” He said the woman with them started shouting once Trump’s speech began.
“She shouted, but at the same time, they were shouting too,” Jones, a 26-year-old inventory associate, said. “Everyone was shouting, too. … No one in our group attempted to get physical.”
Jones blamed the Cumberland County officers escorting him from the rally for failing to protect him — then detaining him instead of the man who attacked him.
“It’s happening at all these rallies now and they’re letting it ride,” Jones said. “The police jumped on me like I was the one swinging.” He added: “My eye still hurts. It’s just shocking. The shock of it all is starting to set in. It’s like this dude really hit me and they let him get away with it. I was basically in police custody and got hit.”
Swain, however, said he didn’t think the officers who were filmed coming up the stairs saw what happened to Jones.
The incident is now the subject of an internal review, Swain said. Authorities are combing through video footage of the rally and conducting interviews to try to determine what happened.
“No one should be subjected to such a cowardly, unprovoked act as that committed by McGraw,” Sheriff Earl Butler said in a statement posted to Facebook. “Regardless of political affiliation, speech, race, national origin, color, gender, bad reputation, prior acts, or political demonstration, no other citizen has the right to assault another person or to act in such a way as this defendant did. I hope that the courts will handle this matter with the appropriate severity for McGraw’s severe and gross violation of this victim’s rights.”
In footage published by Inside Edition, a man identified as McGraw is asked if he liked the event.
“You bet I liked it,” he says.
When asked what he liked, he responds: “Knocking the hell out of that big mouth.”
“We don’t know who he is, but we know he’s not acting like an American,” McGraw added. “The next time we see him, we might have to kill him.”
Ronnie C. Rouse, a man who shot one of the videos, was with Jones at the rally.
“We’re definitely anti-Trump,” Rouse told The Post.
Rouse said as soon as Trump’s speech began, someone in the crowd singled out him and his friends, screaming, “You need to get the f— out of there!” Rouse said that his group had not said anything and that the comment was unprovoked. But he said they were almost immediately surrounded by eight Cumberland County sheriff officers, who escorted them out. On the way up the stairs, the attack came.
Rouse, a 32-year-old musician, said he didn’t see the punch but saw the aftermath — his friend “slammed” by officers to the ground. Noting that someone in the crowd shouted, “Go home n—–s,” he said he was taken aback.
“We’ve been watching all this stuff happen to everyone else,” Rouse said. “This isn’t Biloxi. This isn’t Montgomery. This is Fayetteville. … It’s a well-cultured area.”
Noting Fayetteville’s proximity to Fort Bragg, he added: “I wanted to take my 11-year-old child, to give him a touch of what’s happening political-wise. I’m glad I didn’t. I’ve never been more embarrassed to be from here in my life. It’s just appalling.”
Fayetteville is in Cumberland County, but an official from the Cumberland County Sheriff’s Office, reached by The Post early Thursday, said officers from that jurisdiction were not the ones who detained the man. The Fayetteville Police Department also told The Post it did not detain anyone at the rally, held at the city’s Crown Coliseum. Jones said he and his friends were not arrested.
Demonstrators with raised arms are heckled by supporters of Donald Trump as they are ejected from his campaign rally in Fayetteville, N.C. (Jonathan Drake/Reuters)
Trump rallies are getting a reputation for violence by Trump supporters against disruptive protesters. Police in Fayetteville had to form a line separating pro- and anti-Trump groups outside the coliseum.
Cumberland County sheriffs department stands by as man is assaulted after protesting at Trump rally in Fayetteville, NC. Posted by Chris Doyle on Wednesday, March 9, 2016
According to CBS New York, police are investigating at least two alleged assaults at a recent Kentucky rally. One involved a young African American woman who was repeatedly shoved and called “scum.”
Trump himself has not been quick to criticize the violence. After a fight erupted between protesters and police last year in Birmingham, Trump said: “Maybe he should have been roughed up.” Of a protester in Nevada last month, Trump said: “I’d like to punch him in the face.” In Kentucky, he said: “Get him out. Try not to hurt him. If you do I’ll defend you in court. … Are Trump rallies the most fun? We’re having a good time.”
According to CBS New York, he referred to an incident at a New Hampshire rally where a protester started “swinging and punching.” Trump said some people in the audience “took him out.”
“It was really amazing to watch,” he said.
At Thursday night’s Republican debate in Miami, CNN’s Jake Tapper asked Trump whether the candidate had “done anything to create a tone where this kind of violence would be encouraged.”
“I hope not,” Trump said. “I truly hope not. I will say this. We have 25 [thousand], 30,000 people. You’ve seen it yourself. People come with tremendous passion and love for the country, and when they see protest — in some cases — you know, you’re mentioning one case, which I haven’t seen, I heard about it, which I don’t like. But when they see what’s going on in this country, they have anger that’s unbelievable. They have anger.”
“They love this country,” Trump continued. “They don’t like seeing bad trade deals, they don’t like seeing higher taxes, they don’t like seeing a loss of their jobs where our jobs have just been devastated. And I know — I mean, I see it. There is some anger. There’s also great love for the country. It’s a beautiful thing in many respects. But I certainly do not condone that at all, Jake.”
At the Fayetteville rally, Trump called protesters “professional troublemakers,” as ABC reported. As video posted by the Raleigh News-Observer shows, his speech was repeatedly interrupted as protesters were escorted out and the crowd chanted, “USA.” He criticized one protester for wearing a “very dirty undershirt.”
Nor were the protesters enamored of Trump.
“He spreads hate,” protester Marianna Kuehn told WRAL.
Photo gallery: What Donald Trump is doing on the campaign trail.
This report has been updated. | A black protester got sucker-punched by a white Donald Trump supporter at a campaign rally in Fayetteville, North Carolina, Wednesday, as multiple videos posted to social media show. Rakeem Jones was attending the rally with friends when at least one member of the group started shouting; as Jones was being escorted out of the event by police (and flipping off the crowd along the way), a man in the crowd punched him. "Boom, he caught me," Jones tells the Washington Post of the sudden punch—after which officers forced Jones to the ground and handcuffed him, though he was not arrested. "The police jumped on me like I was the one swinging," Jones says, adding that violence is "happening at all these rallies now and [authorities are] letting it ride." (Indeed, Slate has a list of 10 violent incidents at Trump rallies, and CBS reports that at least two alleged assaults at a recent Kentucky rally are under investigation.) On Thursday, 78-year-old John McGraw was charged with assault and battery and disorderly conduct in connection with the incident, WRAL reports. At a Michigan rally last week, Trump waxed nostalgic about a brawl that broke out at a New Hampshire rally, calling it "really amazing to watch." |
On June 23, 2005, the Supreme Court handed down Kelo v. City of New London, one of its three property rights decisions during the 2004-2005 term. In Kelo, the Court addressed the City's condemnation of private property to implement its area redevelopment plan aimed at invigorating a depressed economy. By 5-4, the Court held that the condemnations satisfied the Fifth Amendment requirement that condemnations be for a "public use," notwithstanding that the property, as part of the plan, might be turned over to private developers, a private-to-private transfer. Under the Fifth Amendment, the United States may invoke its power of eminent domain to take private property (known as "condemnation") only for a "public use." This public use prerequisite is made applicable to the states and their political subdivisions, as in Kelo, through the Fourteenth Amendment due process clause. In addition, states and their subdivisions must comply with state constitutions, which use phrases similar to "public use." The issue in Kelo was not whether the landowners were compensated; condemnation, under the federal or state constitutions, must always be accompanied by just compensation of the property owner. Rather, the issue was whether the condemnation was not for a public use and thus may not proceed at all, even given that just compensation is paid. Kelo prompted immediate debate whether Congress should respond by protecting property owners from the use of eminent domain for economic development. The overall issue is this: When does a private-to-private transfer of property through eminent domain, as in Kelo, satisfy the constitutional requirement that eminent domain only be used for a public use, notwithstanding that the transferee is a private entity? For our nation's first century, "public use" generally was construed to mean that after the condemnation, the property had to be either owned by the government (for roads, military bases, post offices, etc.) or, when private to private, by a private party providing public access to the property (as with entities, such as railroads and utilities, having common carrier duties). Beginning in the late 1890s, however, the Supreme Court rejected this public-access requirement for private-to-private condemnations, asserting that "public use" means only "for a public purpose." Even without public access, the Court said, private-to-private transfers by eminent domain could, under proper circumstances, be constitutional. In 1954, the owner of a department store in a blighted area of the District of Columbia argued to the Supreme Court that the condemnation of his store for conveyance to a private developer, as part of an areawide blight-elimination plan, failed the public use condition. He pointed out that his particular building was not dilapidated, whatever the condition of other structures in the area might be. The Supreme Court in Berman v. Parker unanimously rejected the no-public-use argument. The Court declined to assess each individual condemnation, but rather viewed the blight-elimination plan as a whole. So viewed, the plan furthered a legitimate public interest. Indeed, the public use requirement was said to be satisfied anytime government acted within its police powers. Not surprisingly, the Berman decision is heavily relied upon by municipalities across the country engaged in blight removal. The 1980s saw further extensions of "public use" in the realm of private-to-private condemnations.
In 1981, Detroit sought to condemn an entire neighborhood to provide a site for a General Motors assembly plant. Unlike in Berman, the neighborhood was not blighted; the City simply wanted to improve its dire economic straits by bringing in the plant to increase its tax base. The Michigan Supreme Court in Poletown Neighborhood Council v. Detroit interpreted "public use" in its state constitution to allow the condemnation. A few years after Poletown, the U.S. Supreme Court in Hawaii v. Midkiff dealt with Hawaii's use of condemnation to relieve the highly concentrated land ownership there. The state's program allowed a land lessee to apply to the state to condemn the land from the owner, for sale to the lessee. Again unanimously, the Supreme Court perceived a public use, this time in the elimination of the claimed adverse impacts of concentrated land ownership on the state's economy. As in Berman, the Court declared that the public use requirement is "coterminous with the scope of the sovereign's police powers." The effect of Berman, Poletown, and Hawaii, and kindred decisions, was to lead some observers to declare that "public use" had been so broadly construed by the courts as to have been effectively removed from the Constitution. To exploit the new latitude in "public use," and with Poletown specifically in mind, local condemnations assertedly for economic development began to increase in the 1980s, some of them pushing the envelope of what could be considered economic development with a primarily public purpose. Predictably, litigation challenges to such condemnations increased in tandem, property owners charging that even under the courts' expansive view of "public use," the particular project could not pass muster. In one of the early property-owner successes, a New Jersey court in 1998 rejected as not for a public use a proposed condemnation of land next to an Atlantic City casino, for the casino's discretionary use. A few other cases also rejected "public use" rationales for economic-development condemnations, either because the project's benefits were primarily private, or because economic development categorically was not regarded as a public use. Most dramatically, in 2004, the Michigan Supreme Court unanimously reversed Poletown. All this set the stage for Kelo. In the late 1990s, Connecticut and the city of New London began developing plans to revitalize the city's depressed economy. They fixed on a 90-acre area on the city's waterfront, adjacent to where Pfizer Inc. was building a $300 million research facility. The intention was to capitalize on the arrival of the Pfizer facility. In addition to creating jobs, generating tax revenue, and building momentum for revitalizing the downtown, the plan was also intended to make the city more attractive and create recreation opportunities. The redevelopment would include office and retail space, condos, marinas, and a park. However, nine property owners in the redevelopment area refused to sell, so condemnation proceedings were initiated. In response, the property owners claimed that the condemnations of their properties were not for a public use. In his opinion for the 5-justice majority, Justice Stevens held that the condemnations, implementing a carefully considered areawide revitalization plan in an economically depressed area, were for a public use, even though the condemned properties would be redeveloped by private entities.
The majority opinion noted preliminarily that there was no suggestion of bad faith here, no charge that the redevelopment was really a sweetheart deal with the private entities that would benefit. The case therefore turned on whether the proposed development was a "public use" even though private-to-private transfers with limited public access were involved. Without exception, said the majority, the Court's cases defining "public use" as "public purpose" reflect a policy of judicial deference to legislative judgments, affording legislatures broad latitude in determining what evolving public needs justify. While New London was not confronted with blight, as in Berman, "their determination that the area was sufficiently distressed to justify a program of economic rejuvenation is entitled to our deference." But just as in Berman, the plan was comprehensive and thoroughly deliberated, so the Court again refused to consider the condemnations of individual parcels; because the overall plan served a public purpose, it said, condemnations in furtherance of the plan must also. The property owners argued for a flat rule that economic development is not a public use. Rejecting this, the Court said that promoting economic development is a long-accepted function of government, and that there is no principled way of distinguishing economic development from other public purposes that the Court has recognized as public uses, as in Berman and Hawaii. Nor is the incidental private benefit troublesome, as government pursuit of a public purpose often benefits private parties. And, a categorical rule against development condemnations is not needed to prevent abuses of eminent domain for private gain; such hypothetical cases, said the Court, can be confronted as they arise. Also rejected was the property owners' argument that for cases of this kind, courts should require a "reasonable certainty" that the expected public benefits of the project will accrue. Such a requirement, the Court noted, asks courts to make judgments for which they are ill-suited, and would significantly impede carrying out redevelopment plans. Finally, the majority opinion stressed that it was construing only the Takings Clause of the Federal Constitution. State courts, it pointed out, remain free to interpret state constitutions more strictly, and state legislatures remain free to prohibit undesired condemnations. Other opinions in Kelo warrant mention, as they have echoed in the ensuing congressional debate. Justice Kennedy, one of the majority-opinion justices, filed a concurrence emphasizing that while deference to the legislative determination ("rational basis review") is appropriate, courts must not abdicate their review function entirely. A court should void a taking, he said, that by a "clear showing" is intended to favor a particular private party, with only incidental or pretextual public benefits. In dissent, Justice O'Connor, joined by the Court's three core conservatives, argued vigorously that "[u]nder the banner of economic development," the majority opinion makes "all private property ... vulnerable to being taken and transferred to another private owner, so long as it might be upgraded." Justice O'Connor allowed as how private-to-private condemnations without public access could on some occasions satisfy "public use," as in Berman and Hawaii. But in those cases, she asserted, "the targeted property inflicted affirmative harm on society."
In contrast, in Kelo the well-maintained homes to be condemned were not the source of any social harm, so their elimination to allow a new use produces only secondary benefit to the public, something that almost any lawful use of real property does. She also questioned whether Justice Kennedy's test for acceptable development condemnations was workable, given that staff can always come up with an asserted economic development purpose. Property rights advocates assert that Kelo marks a change in existing takings jurisprudence, but the reality is arguably more subtle. Very possibly, some of their adverse reaction is attributable to the opportunity lost in Kelo to do away with economic-development condemnations in one fell swoop. After Kelo, property rights advocates will have to pursue their goal in multiple state courts and legislatures. The doctrinal crux of the matter appears to lie in the majority and dissenters' divergent readings of the Court's prior public-use decisions. Justice Stevens for the majority finds no principled difference between economic-development condemnations and condemnations the Court has already approved, as in Berman and Hawaii, while Justice O'Connor for the dissenters does. Justice Stevens' view arguably takes insufficient account of the distinction between projects where economic development is only an instrumental or secondary aspect of the project and those where economic development is the primary thrust. On the other hand, the distinction drawn by Justice O'Connor between projects whose primary thrust is elimination of affirmative harms and other projects, while intuitively appealing, requires a dichotomy between elimination of harm and creation of benefit that the Court has previously critiqued as unworkable. Moreover, Justice O'Connor had to backpedal on the statement in Hawaii, which she authored, that "the public use requirement is coterminous with the scope of the sovereign's police powers." Some exercises of that police power, she now would hold, are not public uses. Of course, what Kelo really means will not be known until the lower courts have had a few years to interpret and apply it. It will be interesting to see whether Justice Kennedy's "meaningful rational basis" review has any content, or whether the dissenters' more skeptical view, that a plausible economic development purpose can always be conjured up by competent staff, will ultimately prove correct. In the meantime, frequent efforts can be expected by property owners and like-minded public interest law firms to expand the number of states whose courts find fault under state constitutions with development condemnations. The interest group alignment on how to respond to Kelo does not break down along stereotypical liberal-conservative lines. A conservative, one supposes, would side with the property owners, but having a states-rights orientation might resist federal constraints on what local governments can do. On the other hand, liberals might be comfortable with municipal efforts to guide the market toward economic development, but resist on the ground that such efforts disproportionately displace minority and low-income communities. Some options that Congress might consider for responding to Kelo are discussed below. Kelo made plain that it was interpreting solely the Takings Clause in the U.S. Constitution.
As in other constitutional areas, state courts remain free to interpret state constitutions more stringently, and indeed some state high courts have read their constitutions to bar condemnation for economic development. Moreover, whatever the state constitution says, state legislatures are free to statutorily prohibit development condemnations, and indeed, once again, at least a few have. In light of the foregoing, Congress might conclude that it was appropriate to let the matter simmer for a few years in the states, and then act only if unsatisfied with their response. Proposals have already surfaced in Congress to prohibit the use of federal money for state and local projects with an economic development purpose, usually through conditions on federal grants. There are several ways this could be done. The prohibitory condition could be attached solely to the monies for the particular economic development project involving the condemnation. More broadly, the condition could be applied to a larger pot of money (e.g., Community Development Block Grants) still having some relation to economic development condemnations. Most expansively, the condition could be attached to the largest federal funding program one can find (or all federal funding), though this course of action may run afoul of the constitutional requirement that conditions on federal funding must relate to the underlying purpose of the funding. The suggestion has been raised that Congress could direct how states exercise their eminent domain authority for economic development projects, regardless of whether federal funds are involved. Such legislation, however, arguably might exceed congressional power under the Commerce Clause and the Fourteenth Amendment, and may even raise Tenth Amendment issues. A further option involves the Uniform Relocation Assistance Act (URA). This statute requires compensation of persons who move from real property, voluntarily or by condemnation, due to a federal project or a state or local one receiving federal money. Its raison d'être is that the constitutional promise of just compensation covers only the property taken, leaving the condemnee to bear the often substantial additional losses associated with having to move. Some of those losses are compensated under the URA. The statute, however, has long been criticized as inadequate both as to the losses covered and the amounts of compensation available. Moreover, it creates no cause of action allowing condemnees to enforce its terms. Expanding the Act would at least assure that persons displaced by economic-development condemnations receive fuller compensation. This is perhaps the most direct, but logistically difficult, option. | In Kelo v. City of New London, decided June 23, 2005, the Supreme Court held 5-4 that the city's condemnation of private property, to implement its area redevelopment plan aimed at invigorating a depressed economy, was a "public use" satisfying the U.S. Constitution, even though the property might be turned over to private developers. The majority opinion was grounded on a century of Supreme Court decisions holding that "public use" must be read broadly to mean "for a public purpose." The dissenters, however, argued that even a broad reading of "public use" does not extend to private-to-private transfers solely to improve the tax base and create jobs. Congress is now considering several options for responding to the Kelo decision.
This is not the same exact Carrie Underwood you know and love. I mean, sitting in her manager's Nashville office sporting a vintage concert T-shirt and rolled-up jeans, she's still the same sunny picture. (Sidebar: It's hard to believe that less than a year ago the seven-time Grammy-winning, multiplatinum-selling entertainer had more than 40 stitches in her face due to a freak accident.) You'll really notice the difference — a little more introspective, a bit more open, even more confident — when you listen to her new album, Cry Pretty, which she says is "much more me" than the last five.
For this one, Carrie took the reins as a coproducer for the first time: "I had time and space and creative license in a way I haven't before. I got to do the dirty work." While her personal life remains rock-solid (she and her husband of eight years, hockey player Mike Fisher, are the proud parents of Isaiah, 3), she admits that her past year was an emotional roller coaster. She mined those lessons for song material — and talks to us a little about them here.
Your new album is titled Cry Pretty, so what moves you to tears?
I get teary in church a lot because I'm moved by the message — but I never remember to bring tissues! [Laughs] Rarely do I cry out of frustration. I cry happy tears maybe more than I cry sad tears.
You've said recently that you feel stronger than ever — why?
A lot happened in 2017 during my "off year." I love it when people say, "You took a year off." I'm like, "You know, I had this shoot and this thing, and I was writing this and doing that." There was always so much to do, but it was also a very soul-searching year for me.
What prompted that soul-searching?
There were some personal things that happened. And I had the accident and all of that to get through ... and just life. Life is full of ups and downs, and I might have had a few more downs than ups last year.
Did having a facial injury shake your confidence?
Any time someone gets injured, it looks pretty bad in the beginning, and you're like, "What is this going to wind up like?" You just don't know. It was also a perception thing, because I look at myself [now] and I see it quite a bit, but other people are like, "I wouldn't have even noticed." Nobody else looks at you as much as you think they do. Nobody notices as much as you think they will, so that's been nice to learn.
With her husband last year. Getty Images
There were so many rumors online — that you'd had plastic surgery, that it was a publicity stunt. Did that bother you?
I'm on some magazine every other week for something crazy. It's a little sad, because the truth is just as interesting. I wish I'd gotten some awesome plastic surgery to make this [scar] look better. But I try not to worry too much about it. My mom will be like, "Did you see they are saying this about you?" And I'll be like, "Mama, I don't care. I'm just trying to raise my son and live my life."
Do you want a big family?
I'm 35, so we may have missed our chance to have a big family. We always talk about adoption and about doing it when our child or children are a little older. In the meantime, we're lucky to be a part of organizations that help kids, because our focus right now in our lives is helping as many kids as possible.
What advice do you have for young women to encourage them to be more confident?
The first thing I would tell them is that we're all insecure; that's just called being human. I feel like the most important thing to realize is that even people who seem to be super confident have insecurities that they are dealing with. Honestly, you just do the best you can. Don't worry about things you can't change.
Performing at the 2018 Academy of Country Music Awards show. Getty Images
If you could go back to that girl who took her first plane ride when she was trying out for American Idol, what would you tell her?
I don't know if I'd tell her much of anything, because I would want everything to turn out exactly how it has. Every lesson that I've learned was an important one and led me to where I am — and I like where I am now.
Do you think country music is ready for a Time's Up moment for women to get their due?
This is a conversation the industry has been having for a while now. I see so many amazingly talented women who make me go, "Why isn't she kicking butt on the radio?" Kelsea Ballerini, Maren Morris, and Lauren Alaina have finally gotten some great radio success, so it's starting to get better. But we need to keep the conversation going so there will be more room created for women.
You've accomplished so much already. What is on your bucket list for the next 10 years?
I'm hoping I'm still lucky enough to be making music. I love going on the road and putting together shows I'm proud of, but I don't know where I'll be in 10 years. I don't know where I'll be next week. By the grace of God, I'm just lucky enough to live another day, and that's good by me.
For Carrie Underwood, a good life doesn't mean having no roadblocks — it means going for what you want in spite of them. Matt Jones
This article originally ran in the September 2018 issue of Redbook.
||||| Carrie Underwood fans push back after singer says she 'missed' chance for more babies at 35
Carrie Underwood drew the heat of fans after she said at 35 she missed her chance to have a big family. (Photo: Rich Fury/Getty Images)
Was Carrie Underwood just being candid and honest about what is realistic? Or was she seriously misguided?
Either way, fans of the country superstar had some STRONG OPINIONS about an interview she gave to Redbook in which she talked about "missing" her chance to have a bunch of kids because she's 35 years old.
"I’m 35, so we may have missed our chance to have a big family. We always talk about adoption and doing it when our child or children are a little older."
Fans later found out that Carrie Underwood actually IS pregnant.
But before knowing that, those two sentences unleashed a commotion of emotions. Some fans were supportive and lauded her openness. Others encouraged her to keep trying. The "Cry Pretty" singer is married to Mike Fisher, 38, and they share one child together, Isaiah, 3.
But — wow — others took her comments personally, seeming to say keep your opinions about your ovaries to yourself.
#accessHollywood Exactly how did Carrie Underwood miss her chance to have more kids? She’s 35 and has more $$ than she’ll ever need! — paul cammarota (@pfcproduces) August 2, 2018
35 too old for kids? How about taking all that money you have and try #IVF some of us actually have fertility problems, what an insult @carrieunderwood https://t.co/Rg7GyXcZ66 — Mari (@_Mini_Murph) August 7, 2018
More women 35 and older are having children
More women ages 35 and older are giving birth, according to Centers for Disease Control health statistics.
But it's also well known that fertility problems increase for women 35 and older, whom the medical industry refers to as those of "advanced maternal age," formerly known as — shudder — "geriatric pregnancy."
About one-third of couples in which the woman is older than 35 experience fertility problems, according to the Office of Women's Health in the U.S. Department of Health and Human Services.
Several fans came to Underwood's defense following the backlash, effectively saying lay off.
@carrieunderwood people need to relax. I struggled with fertility issues and felt the same way. Maybe its not so easy for them and there focusing on the joys they do have. Why always bash and attack someone just bc you dont like what they say. #getoverit#layoff — Kristi Becker (@KristisBlog) August 7, 2018
She never said she definitively couldn't. She said: I’m 35, so we may have missed our chance to have a big family.
Since fertility DOES decrease as women age, this is just TRUE. She MAY have missed that chance. @carrieunderwood you do what's best for your family. Best wishes. — Jessica | Lucie (@LucieLexington) August 7, 2018
||||| Carrie Underwood isn’t known for being controversial. But the country singer angered some parents when she said that at age 35, she and husband Mike Fisher won’t be able to give their son, Isaiah, 3, many biological siblings.
“I’m 35, so we may have missed our chance to have a big family,” the American Idol alum told Redbook’s September issue. “We always talk about adoption and about doing it when our child or children are a little older.”
The comments came rolling in on Facebook. “I’m 38 and just had a baby . . . she’s being ridiculous,” wrote one woman. Added another: “You do know that everyone’s body is different, right?”
A third fan revealed her 20-year-old sister is struggling to conceive, while her mother-in-law became pregnant at 41 with no difficulty.
Meanwhile, back in April 2017, Underwood joked that Isaiah enjoys having his parents all to himself. “If a dog climbs up on my lap, I feel like he gets a little jealous!” she quipped to Entertainment Tonight at the time.
Underwood and Fisher, 38, were introduced by her bass player Mark Childers at one of her concerts in 2008. During a Behind the Music special in 2012, the former ice hockey player gushed about meeting his future wife: “First time I saw her, she was more beautiful in person than on TV.” A smitten Underwood texted Childers: “Hot, hot, hot.” The couple became engaged in 2009 and tied the knot in Greensboro, Georgia, in July 2010.
||||| Carrie Underwood’s pile of “Dirty Laundry” is about to grow!
The country superstar and her husband Mike Fisher are expecting their second child, the singer announced — along with her upcoming Cry Pretty tour — Wednesday morning on Instagram. The new baby will join the couple’s 3-year-old son Isaiah Michael.
“You might be wondering or asking, ‘Carrie, why is your tour starting in May?’ Well … yay!” she said, revealing balloons spelling out “BABY” above her head. “Mike and Isaiah and I are absolutely over the moon and excited to be adding another little fish to our pond.”
A pink-clad Underwood, 35, continued, “This has just been a dream come true with album and with baby news and all that stuff. We’re just so excited and just so glad you guys can share in this with us and be a part of this with us. Love you guys! We will see you on the road in 2019.”
The singer will release her latest album, Cry Pretty, on Sept. 14 ahead of hosting the CMA Awards for the 11th time in November. She’ll then break for maternity leave before kicking off her tour, supported by Maddie & Tae and Runaway June, on May 1. Tickets go on sale Aug. 17 at 10 a.m.
The spouses’ news of their bundle of joy on the way comes after celebrating eight years of marriage. Underwood and Nashville Predators star Fisher, 38, said their “I dos” in July 2010, at the Ritz Carlton Reynolds Plantation in Greensboro, Georgia.
“You see each other when you can and you talk to each other as much as you can,” the singer told PEOPLE in 2013 of balancing marriage amid hectic schedules. “You just have to commit and make it work.”
That mantra definitely came in handy in 2016, when Underwood embarked on her 92-date Storyteller Tour — with her son in tow, of course.
Carrie Underwood and son Isaiah. Carrie Underwood/Instagram
“I feel the prettiest when my kid says something that’s just super sweet,” the singer — who received 40 to 50 stitches in her face and underwent surgery on her broken wrist after a November fall on the steps of her Nashville, Tennessee, home — told PEOPLE in May of Isaiah.
She continued, “This morning, Melissa, my hair and makeup artist, was starting to put my makeup on and he’s all in his pajamas and he said, ‘No, don’t do that!’ and I was like, ‘Why, baby, why are you upset?’ ”
“And he said, ‘I like you just how you are.’ He didn’t want me to put makeup on,” explained Underwood. “That made me feel pretty. I know I wasn’t [pretty] because I had just woken up and hadn’t brushed my teeth yet, but he made me feel pretty.”
In her cover story for Redbook‘s September issue, the “Church Bells” singer opened up about her family life, including whether she and Fisher had plans to give Isaiah any siblings.
“I’m 35, so we may have missed our chance to have a big family,” she explained. “We always talk about adoption and about doing it when our child or children are a little older.”
“In the meantime, we’re lucky to be a part of organizations that help kids, because our focus right now in our lives is helping as many kids as possible,” Underwood added.
Mike Fisher and Carrie Underwood. Mike Coppola/Getty
The couple celebrated eight years of marriage last month, with Underwood sharing two sweet photos of herself and her husband to mark the occasion.
“Here’s to 8 years, babe! Where does the time go?!” she wrote in her Instagram caption. “I love you today more than yesterday … which was more than the day before … and so on and so forth.”
Added the mom-to-be, “Here’s to many more years together! ❤ you!”
The American Idol season 4 winner dished to PEOPLE in April about striking the balance between work and family life, saying the couple’s “whole life has changed” since welcoming Isaiah.
“I remember when we first found out we were gonna have him it [was] like, ‘How are we gonna do this? Our lives are so crazy as it is,’ ” said Underwood at the time.
“But you just make room and you learn how important that family time is, and to be able to spend time and carve out some of that and maybe get to go on vacation and maybe get to go on a cruise — that stuff is so important to, like I said, make time for family,” she added. “That’s what it’s all about.” | A week after airing fertility concerns that bugged plenty of fans, Carrie Underwood has some big fertility news: The country singer revealed she's expecting her second child with husband Mike Fisher in an Instagram video posted Wednesday, per People. The 35-year-old, whose son Isaiah is 3, said she'd be "adding another fish to our pond" before starting a concert tour in May. The news comes after a Redbook interview in which the singer suggested more biological children might not be in her future. "I'm 35, so we may have missed our chance to have a big family. We always talk about adoption and doing it when our child or children are a little older," Underwood said. Per USA Today, "those two sentences unleashed a commotion of emotions," including among some who viewed the remark as misguided, per US Weekly, or an insult to people with fertility issues who can’t afford treatment like, as they pointed out, she could. Others, however, suggested fans cut Underwood some slack. "Since fertility DOES decrease as women age, this is just TRUE. She MAY have missed that chance," one defender wrote. |
Medical devices can range in complexity from a simple tongue depressor to a sophisticated CT (computed tomography) x-ray system. Most of the devices reach the market through FDA’s premarket notification (or 510(k)) review process. Under its 510(k) authority, FDA may determine that a device is substantially equivalent to a device already on the market and therefore not likely to pose a significant increase in risk to public safety. When evaluating 510(k) applications, FDA makes a determination regarding whether the new device is as safe and effective as a legally marketed predicate device. Performance data (bench, animal, or clinical) are required in most 510(k) applications, but clinical data are needed in less than 10 percent of applications. An alternative mode of entry into the market is through the premarket approval (PMA) process. PMA review is more stringent and typically longer than 510(k) review. For PMAs, FDA determines the safety and effectiveness of the device based on information provided by the applicant. Nonclinical data are included as appropriate. However, the answers to the fundamental questions of safety and effectiveness are determined from data derived from clinical trials. FDA also regulates research conducted to determine the safety and effectiveness of unapproved devices. FDA approval is required only for “significant risk” devices. Applicants submit applications for such devices to obtain an investigational device exemption (IDE) from regulatory requirements and approval to conduct clinical research. For an IDE, unlike PMAs and 510(k)s, it is the proposed clinical study that is being assessed—not just the device. Modifications of medical devices, including any expansion of their labeled uses, are also subject to FDA regulation. Applications to modify a device that entered the market through a PMA are generally linked to the original PMA application and are called PMA supplements. In contrast, modifications to a 510(k) device are submitted as new 510(k) applications. References may be made to previous 510(k) applications. FDA uses several measures of duration to report the amount of time spent reviewing applications. In this letter, we use only three of those measures. The first is simply the time that elapses between FDA’s receipt of an application and its final decision on it (total elapsed time). The second measure is the time that FDA has the application under its review process (FDA time). This includes both the time the application is under active review and the time it is in the FDA review queue. The amount of time FDA’s review process has been suspended, waiting for additional information from the applicant, is our third measure (non-FDA time). Our measures of review time are not intended to be used to assess the agency’s compliance with time limits for review established under the Federal Food, Drug, and Cosmetic Act (the act). The time limits for PMA, 510(k), and IDE applications are 180, 90, and 30 days, respectively. FDA regulations allow for both the suspension and resetting of the FDA review clock under certain circumstances. How review time is calculated differs for 510(k)s and PMAs. If a PMA application is incomplete, depending on the extent of the deficiencies, FDA may place the application on hold and request further information. When the application is placed on hold, the FDA review clock is stopped until the agency receives the additional information. With minor deficiencies, the FDA review clock resumes running upon receipt of the information.
With major deficiencies, FDA resets the FDA clock to zero upon receipt of the information. In this situation, all previously accrued FDA time is disregarded. (The resetting of the FDA clock can also be triggered by the applicant’s submission of unsolicited supplementary information.) The amount of time that accrues while the agency is waiting for the additional information constitutes non-FDA time. For 510(k)s, the FDA clock is reset upon receipt of a response to either major or minor deficiencies. For this report, we define FDA time as the total amount of time that the application is under FDA’s review process. That is, our measure of FDA time does not include the time that elapses during any suspension, but does include time that elapsed before the resetting of the FDA clock. The total amount of time that accrues while the agency is waiting for additional information constitutes non-FDA time. (The sum of FDA and non-FDA time is our first measure of duration—total elapsed time.) The act establishes three classes of medical devices, each with an increasing level of regulation to ensure safety and effectiveness. The least regulated, class I devices, are subject to compliance with general controls. Approximately 40 percent of the different types of medical devices fall into class I. At the other extreme is premarket approval for class III devices, which constitute about 12 percent of the different types of medical devices. Of the remainder, a little over 40 percent are class II devices, and about 3 percent are as yet unclassified. In May 1994, FDA implemented a three-tier system to manage its review workload. Classified medical devices are assigned to one of three tiers according to an assessment of the risk posed by the device and its complexity. Tier 3 devices are considered the riskiest and require intensive review of the science (including clinical data) and labeling. Review of the least risky devices, tier 1, entails a “focused labeling review” of the intended use. In addition to the three tiers is a group of class I devices that pose little or no risk and were exempted from the premarket notification (510(k)) requirements of the act. Under the class and tier systems, approximately 20 percent of the different types of medical devices are exempted from premarket notification. A little over half of all the different types of medical devices are classified as tier 2 devices. Tiers 1 and 3 constitute 14 and 12 percent of the different types of medical devices, respectively. From 1989 through 1991, the median time between the submission of a 510(k) application and FDA’s decision (total elapsed time) was relatively stable at about 80 to 90 days. The next 2 years showed a sharp increase that peaked at 230 days in 1993. Although the median review time showed a decline in 1994 (152 days), it remained higher than that of the initial 3 years. (See figure 1.) Similarly, the mean also indicated a peak in review time in 1993 and a subsequent decline. The mean review time increased from 124 days in 1989 to 269 days in 1993. In 1994, the mean dropped to 166 days; however, this mean will increase as the 13 percent of the applications that remained open are closed. (See table II.1.) Of all the applications submitted to FDA to market new devices during the period under review, a little over 90 percent were for 510(k)s. Between 1989 and 1994, the number of 510(k) applications remained relatively stable, ranging from a high of 7,023 in 1989 to a low of 5,774 in 1991. 
In 1994, 6,446 applications were submitted. Of the 40,950 510(k) applications submitted during the period under review, approximately 73 percent were determined to be substantially equivalent. (That is, the device is equivalent to a predicate device already on the market and thus is cleared for marketing.) Only 2 percent were found to be nonequivalent, and 6 percent remained open. Other decisions—including applications for which a 510(k) was not required and those that were withdrawn by the applicant—account for the rest. (See appendix I for details on other FDA decision categories.) For applications determined to be substantially equivalent, non-FDA time—the amount of time FDA placed the application on hold while waiting for additional information—comprised almost 20 percent of the total elapsed time. (See table II.7.) Figure 2 displays FDA and non-FDA time to determine equivalency for 510(k) applications. The trends in review time differed for original PMAs and PMA supplements. There was no clear trend in review times for original PMA applications using either medians or means since a large proportion of the applications had yet to be completed. The median time between the submission of an application and FDA’s decision (total elapsed time) fluctuated from a low of 414 days in 1989 to a high of 984 days in 1992. Less than 50 percent of the applications submitted in 1994 were completed; thus, the median review time was undetermined. (See figure 3.) Except for 1989, the means were lower than the medians because of the large number of open cases. The percent of applications that remained open increased from 4 percent in 1989 to 81 percent in 1994. The means, then, represent the time to a decision for applications that were less time-consuming. When the open cases are completed, lengthy review times will cause an increase in the means. (See table III.1.) For PMA supplements, the median time ranged from 126 days to 173 days in the first 3 years, then jumped to 288 days in 1992. In 1993 and 1994, the median declined to 242 and 193 days, respectively. (See figure 4.) This trend was reflected in the mean review time that peaked at 336 days in 1992. Although the mean dropped to 162 days in 1994, this is expected to increase because 21 percent of the applications had not been completed at the time of our study. (See table III.7.) Applications for original PMAs made up less than 1 percent of all applications submitted to FDA to market new devices in the period we reviewed. PMA supplements comprised about 8 percent of the applications. The number of applications submitted for PMA review declined each year. In 1989, applications for original PMAs numbered 84. By 1994, they were down to 43. Similarly, PMA supplements decreased from 804 in 1989 to 372 in 1994. (See tables III.1 and III.7.) Of the 401 applications submitted for original PMAs, 33 percent were approved, 26 percent were withdrawn, and nearly a third remained open. The remainder (about 9 percent) fell into a miscellaneous category. (See appendix I.) A much higher percentage of the 3,640 PMA supplements (78 percent) were approved in this same period, and fewer PMA supplements were withdrawn (12 percent). About 9 percent of the applications remained open, and 2 percent fell into the miscellaneous category. For PMA reviews that resulted in approval, non-FDA time constituted approximately one-fourth of the total elapsed time for original PMAs and about one-third for PMA supplements.
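The arithmetic behind this decomposition is simple: total elapsed time is the sum of FDA time and non-FDA time, with non-FDA time accumulating only while an application sits on hold awaiting the applicant. The following minimal sketch (ours, not GAO's or FDA's; the application, dates, and hold period are hypothetical) computes the three duration measures for a single application.

```python
# Minimal sketch of the report's three duration measures for one
# hypothetical application; dates and hold periods are illustrative only.
from datetime import date

def review_durations(submitted, decided, holds):
    """Return (total_elapsed, fda_time, non_fda_time) in days.

    holds -- (hold_start, info_received) date pairs during which the
             review clock was suspended awaiting the applicant.
    """
    total_elapsed = (decided - submitted).days
    non_fda_time = sum((received - start).days for start, received in holds)
    # FDA time, as defined in this report, is everything else: active
    # review plus queue time, including any cycles before a clock reset.
    return total_elapsed, total_elapsed - non_fda_time, non_fda_time

# Hypothetical PMA supplement with a single 98-day hold:
print(review_durations(date(1992, 1, 6), date(1992, 11, 2),
                       [(date(1992, 4, 1), date(1992, 7, 8))]))
# -> (301, 203, 98): of 301 elapsed days, 98 were non-FDA time.
```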
The mean FDA time for original PMAs ranged from 155 days in 1994 to 591 days in 1992. Non-FDA times for those years were 34 days in 1994 and 165 days in 1992. For PMA supplements, FDA review times were lower, ranging from a low of 105 days (1990) to a high of 202 days (1992). Non-FDA time for those years were 59 days (1990) and 98 days (1992), respectively. (See table III.13.) Figures 5 and 6 display the proportion of FDA and non-FDA time for the subset of PMAs that were approved. For IDEs, the mean review time between submission and FDA action was 30 days, and it has not changed substantially over time. Unlike 510(k)s and PMAs, IDEs are “deemed approved” if FDA does not act within 30 days. Of the 1,478 original IDE submissions from fiscal year 1989 to 1995, 33 percent were initially approved (488) and 62 percent were denied or withdrawn (909). The number of IDE submissions each year ranged from a high of 264 in 1990 to a low of 171 in 1994. (See table IV.1.) Our objective was to address the following general question: How has the time that 510(k), PMA, and IDE applications spend under FDA review changed between fiscal year 1989 and the present? To answer that question, we also looked at a subset of applications that were approved, distinguishing the portion of time spent in FDA’s review process (FDA time) from that spent waiting for additional information (non-FDA time). For applications that were approved, we present the average number of amendments that were subsequently added to the initial application as well as the average number of times FDA requested additional information from the applicant. (Both of these activities affect FDA’s review time.) We used both the median and mean to characterize review time. We use the median for two reasons. First, a large proportion of the applications have yet to be completed. Since the median is the midpoint when all review times are arranged in consecutive order, its value can be determined even when some applications requiring lengthy review remain open. In contrast, the mean can only be determined from completed applications. (In this case, applications that have been completed by May 18, 1995.) In addition, the mean will increase as applications with lengthy reviews are completed. To illustrate, for applications submitted in 1993, the mean time to a decision was 269 days for 510(k) applications that have been closed. However, 3 percent of the applications have yet to be decided. If these lengthy reviews were arbitrarily closed at May 18, 1995 (the cutoff date for our data collection), the mean would increase to 285 days. In contrast, the median review time (230 days) would remain the same regardless of when these open applications were completed. The second reason for using the median is that the distributions of review time for 510(k), original PMA, and PMA supplement applications are not symmetric, that is, having about the same number of applications requiring short reviews as lengthy reviews. The median is less sensitive to extreme values than the mean. As a result, the review time of a single application requiring an extremely lengthy review would have considerably more effect on the mean than the median. Figure 7 shows the distribution for 510(k)s submitted in 1993, the most recent year in which at least 95 percent of all 510(k) applications had been completed. The distribution is skewed with a mean review time of 269 days and a median review time of 222 days for all completed applications. 
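The open-case point can be made concrete with a small numerical sketch (hypothetical review times, not the actual 510(k) data): when the applications still pending at the cutoff have already been open longer than the midpoint of the ordered review times, the median of all cases is fully determined, while the mean is not.

```python
# Hypothetical review times (days) showing why the median, unlike the
# mean, can be computed while some applications remain open.
from statistics import mean, median

closed = [60, 90, 150, 200, 222, 238, 300, 480, 681]  # nine decided cases
pending_so_far = 700        # tenth case, still open at the data cutoff

print(mean(closed), median(closed))       # 269 222 (closed cases only)

# The open case must finish above 700 days, hence above the midpoint of
# the ten ordered times, so the all-case median is already fixed:
print(median(closed + [pending_so_far]))  # 230.0, whenever it closes

# The mean is not fixed: even closing the case arbitrarily at the cutoff
# raises it, and it can only grow further.
print(mean(closed + [pending_so_far]))    # 312.1
```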
To provide additional information, we report on the mean review times as well as the median. The discrepancy between the two measures gives some indication of the distribution of review time. When the mean is larger than the median, as in the case of the 510(k)s above, it indicates that a group of applications required lengthy reviews. Another reason we report the means is that, until recently, FDA reported review time in terms of means. In appendix I, we provide the categories we used to designate the different FDA decisions and how our categories correspond to those used by FDA. Detailed responses to our study objective are found in tabular form in appendixes II, III, and IV for 510(k)s, PMAs, and IDEs, respectively. We report our findings according to the fiscal year in which the applications were submitted to FDA. By contrast, FDA commonly reports review time according to the fiscal year in which the review was completed. Although both approaches measure review time, their resultant statistics can vary substantially. For example, several complex applications involving lengthy 2-year reviews submitted in 1989 would increase the average review time for fiscal year 1989 in our statistics and for fiscal year 1991 in FDA’s statistics. Consequently, the trend for review time based on date-of-submission cohorts can differ from the trend based on date-of-decision cohorts. (See appendix V for a comparison of mean review time based on the two methods.) The two methods provide different information and are useful for different purposes. Using the date-of-decision cohort is useful when examining productivity and the management of resources. This method takes into consideration the actual number of applications reviewed in a given year including all backlogs from previous years. Alternatively, using the date-of-submission cohort is useful when examining the impact of a change in FDA review policy, which quite often only affects those applications submitted after its implementation. To minimize the effect of different policies on review time within a cohort, we used the date-of-submission method. We conducted our work in accordance with generally accepted government auditing standards between May and June 1995. Officials from FDA reviewed a draft of this report and provided written comments, which are reproduced in appendix VI. Their technical comments, which have been incorporated into the text where appropriate, have not been reprinted in the appendix. FDA believed that the report misrepresented the current state of the program as the draft did not acknowledge recent changes in the review process. FDA officials suggested a number of explanations for the apparent trends in the data we reported (see appendix VI). Although recent initiatives to improve the review process provide a context in which to explain the data, they were outside the scope of our work. We were not able to verify the effect these changes have actually had on review time. To the extent that these changes did affect review time, they are reflected in the review times as presented and are likely to be reflected in future review times. The agency also believed that the draft did not reflect the recent improvements in review time. We provided additional measures of review time in order to present the review times for the more recent years.
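To make the cohort distinction concrete, the sketch below (four hypothetical applications, not our dataset) tallies the same review times both ways; the single lengthy review submitted in fiscal year 1989 but decided in 1991 inflates a different year's average under each method.

```python
# Hypothetical applications: (FY submitted, FY decided, review days).
from collections import defaultdict
from statistics import mean

apps = [(1989, 1989, 80), (1989, 1991, 730),  # one lengthy 2-year review
        (1990, 1990, 90), (1991, 1991, 100)]

by_submission, by_decision = defaultdict(list), defaultdict(list)
for submitted, decided, days in apps:
    by_submission[submitted].append(days)
    by_decision[decided].append(days)

print({yr: mean(t) for yr, t in sorted(by_submission.items())})
# {1989: 405, 1990: 90, 1991: 100} -- long review charged to 1989
print({yr: mean(t) for yr, t in sorted(by_decision.items())})
# {1989: 80, 1990: 90, 1991: 415} -- the same review charged to 1991
```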
We have also included more information on the difference between the date-of-submission and date-of-decision cohorts, and we have expanded our methodological discussion in response to points FDA made on the clarity of our presentation. (Additional responses to the agency comments are included in appendix VI.) As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its date of issue. We will then send copies to other interested congressional committees, the Secretary of the Department of Health and Human Services, and the Commissioner of Food and Drugs. Copies will also be made available to others upon request. If you or your staff have any questions about this report, please call me at (202) 512-3092. The major contributors to this report are listed in appendix VII. FDA uses different categories to specify the type of decision for 510(k)s, PMAs, and IDEs. For our analysis, we collapsed the multiple decision codes into several categories. The correspondence between our categories and FDA’s is in table I.1. The following tables present the data for premarket notifications, or 510(k)s, for fiscal years 1989 through May 18, 1995. The first set of tables (tables II.1 through II.6) presents the time to a decision—from the date the application is submitted to the date a decision is rendered. We first present a summary table on the time to a decision by fiscal year (table II.1). The grand total for the number of applications includes open cases—that is, applications for which there had not been any decision made as of May 18, 1995. As the distribution for time to a decision is not symmetric (see figure 1 in the letter), we present the means and percentiles to characterize the distribution. (The means and percentiles do not include open cases.) The second table is a summary of the time to a decision by class, tier, medical specialty of the device, and reviewing division (table II.2). The next four tables (II.3 through II.6) provide the details for these summary tables. The totals in these tables include only applications for which a decision has been rendered. The class, tier, and medical specialty of some of the devices have yet to be determined and are designated with N/A. Medical specialties other than general hospital or general and plastic surgery include anesthesiology; cardiovascular; clinical chemistry; dental; ear, nose, and throat; gastroenterology/urology; hematology; immunology; microbiology; neurology; obstetrics/gynecology; ophthalmic; orthopedic; pathology; physical medicine; radiology; and clinical toxicology. The five reviewing divisions in FDA’s Center for Devices and Radiological Health are Division of Clinical Laboratory Devices (DCLD); Division of Cardiovascular, Respiratory and Neurological Devices (DCRND); Division of General and Restorative Devices (DGRD); Division of Ophthalmic Devices (DOD); and Division of Reproductive, Abdominal, Ear, Nose and Throat, and Radiological Devices (DRAER). The second set of tables (tables II.7 through II.12) presents the mean time to determine equivalency. We provide the means for total FDA time, non-FDA time, and total elapsed time. FDA time is the total amount of time the application was under FDA review including queue time—the time to equivalency without resetting the FDA review clock.
The total elapsed time, the duration between the submission of the application and FDA’s decision, equals the sum of the FDA and non-FDA time. We deleted cases that had missing values or apparent data entry errors for the values relevant to calculating FDA and non-FDA time. Therefore, the total number of applications determined to be equivalent in this group of tables differs from that in the first set. Again, we have two summary tables, followed by four tables providing time to determine equivalency by class, tier, medical specialty, and reviewing division (tables II.7 through II.12). In reviewing a PMA application, FDA conducts an initial review to determine whether the application contains sufficient information to make a determination on its safety and effectiveness. A filing decision is made—filed, filed with deficiencies specified, or not filed—based on the adequacy of the information submitted. The manufacturer is notified of the status of the application at this time, especially since deficiencies need to be addressed. As part of the substantive review, a small proportion of PMA applications are also reviewed by an advisory panel. These panels include clinical scientists in specific medical specialties and representatives from both industry and consumer groups. The advisory panels review the applications and provide recommendations to the agency to either approve, deny, or conditionally approve them. FDA then makes a final determination on the application. To examine in greater detail those cases where the intermediate milestones were applicable, we calculated the average duration between the various dates—submission, filing, panel decision, and final decision. The number of applications differs for each of the milestones as not all have filing or panel dates. (See figure III.1.) The following tables present information on review time for PMA applications for fiscal years 1989 through 1995. Original PMA applications are distinguished from PMA supplements. Some observations were deleted from our data because of apparent data entry errors. The first set of tables (tables III.1 through III.6) presents the time to a decision for original PMAs—from the date the application is submitted to the date a decision is rendered. The second set of tables (tables III.7 through III.12) provides similar information, in the same format, for PMA supplements. We first present a summary table on the time to a decision by fiscal year (tables III.1 and III.7). Again, the grand total for the number of applications includes the number of open cases—that is, applications for which there had not been any decision made as of May 18, 1995. As with 510(k)s, the distributions of time to a decision for original PMAs and PMA supplements are not symmetric. Thus we report means and percentiles to characterize these distributions. (These means and percentiles do not include open cases.) Figure III.2 presents the distribution for original PMAs submitted in 1989, the most recent year for which at least 95 percent of the applications had been completed. Figure III.3 presents the distribution for PMA supplements submitted in 1991, the most recent year with at least a 95-percent completion rate. The second table is a summary of the time to a decision by class, tier, relevant medical specialty of the device, and reviewing division (tables III.2 and III.8).
The two summary tables are followed by four tables (tables III.3 through III.6 and III.9 through III.12) presenting the details by class, tier, medical specialty, and reviewing division. The totals in these tables include only applications for which a decision has been rendered. The class, tier, and medical specialty of some of the devices have yet to be determined and are designated with N/A. Medical specialties other than cardiovascular or ophthalmic include anesthesiology; clinical chemistry; dental; ear, nose, and throat; gastroenterology/urology; general and plastic surgery; general hospital; hematology; immunology; microbiology; neurology; obstetrics/gynecology; orthopedic; pathology; physical medicine; radiology; and clinical toxicology. The third set of tables provides information on the time to an approval, for both original PMAs and PMA supplements (tables III.13 through III.18). Four different measures of duration are provided—total FDA time, non-FDA time, total elapsed time, and FDA review time. Total FDA time is the amount of time the application is under FDA's review process. Non-FDA time is the time the FDA clock is suspended while FDA waits for additional information from the applicant. The total elapsed time, the duration from the date the application is submitted to the date of FDA's decision, equals the sum of total FDA and non-FDA time. FDA review time is FDA time for the last cycle—excluding any time accrued before the latest resetting of the FDA clock. Again, we first provide a summary table for time to an approval by fiscal year (table III.13). In this table, we also provide the number of amendments, or the number of times additional information was added to the initial submission. As can be seen from the number of requests for information, not all amendments were responses to information requested by FDA. Table III.13 is followed by a summary by class, tier, medical specialty, and reviewing division (table III.14). Tables III.15 through III.18 provide the details for these two summary tables. The following tables present the average days to a decision for investigational device exemptions. The first table presents the averages for the years from October 1, 1988, through May 18, 1995. This is followed by summaries by class, tier, medical specialty, and then reviewing division. The next four tables (tables IV.3 through IV.6) provide the details for these summary tables. We reported our findings according to the fiscal year in which the applications were submitted to FDA (date-of-submission cohort). By contrast, FDA commonly reports review time according to the fiscal year in which the review was completed (date-of-decision cohort). This led to discrepancies between our results and those reported by FDA. The following table illustrates the differences in calculating total elapsed time by the year that the application was submitted and the year that a decision was rendered. Comparisons are provided for 510(k)s, PMA supplements, original PMAs, and IDEs. Our dataset did not include applications submitted before October 1, 1988. Consequently, the results presented in the following table understate the number of cases, as well as the elapsed time, when calculated by the year of decision. That is, an application submitted in fiscal year 1988 and completed in 1989 would not have been in our dataset. The following are GAO's comments on the August 2, 1995, letter from FDA. 1.
The purpose of our review was to provide FDA's congressional oversight committee with descriptive statistics on review time for medical device submissions between 1989 and May 1995. It was not to perform an audit of whether FDA was in compliance with statutory review times, nor to examine how changes in FDA management practices may have resulted in shortening (or lengthening) review times. FDA officials suggested that a number of process changes and other factors may have contributed to the trends we reported—for example, the increased complexity of the typical submission that resulted from the agency's exemption from review of certain low-risk devices. We were not able to verify the effect these changes have actually had on review time, and it may still be too early for their impact to be definitively assessed. 2. In discussing our methodology in the draft report, we noted the differences between FDA's typical method of reporting review time according to the year in which action on applications is finalized, as opposed to our method of assigning applications to the year in which they were submitted. We also included an appendix that compares the results of the two different approaches. (See appendix V.) We agree with FDA that it is important for the reader to understand these differences and have further expanded our discussion of methodology to emphasize this point. (See p. 14.) 3. We agree with FDA that our report “deals only with calculations of averages and percentiles”—that is, with means, medians (or the 50th percentile), as well as the 5th and 95th percentiles. However, FDA's suggested additions do not extend beyond such descriptive statistics. We also agree that mean review times in the presence of numerous open cases may not be meaningful. For this reason, we have included open cases in our tables that report review time, but we have excluded them from the calculation of means. FDA suggests that we include open cases in our calculation of medians. We have adopted this suggestion and presented our discussion of trends in terms of the median review time for all cases. It should be noted, however, that including open cases increases our estimate of review time. (For example, including open cases raises the calculation of 510(k) median review time from the 126 days we reported for 1994 to 152 days.) Figure VI.1 depicts the relationship among the three measures of elapsed time for 510(k) submissions: the mean of closed cases, the median of closed cases, and the median of all cases. The two measures of closed cases reveal roughly parallel trends, with median review time averaging some 45 days fewer than mean review time. The two estimates of median review time are nearly identical from 1989 through 1990, since very few cases from that period remain open. The divergence between the two medians increases as the number of open cases increases in recent years, until 1995, when the median including open cases is larger than the mean of closed cases. 4. While we are unable to reproduce the calculations performed by FDA, we agree in general with the trends indicated by FDA. Specifically, our calculations, as presented in our draft report tables II.7 and following, showed a decrease from 1993 to 1994 in FDA review time for finding a 510(k) submission substantially equivalent. By our calculation, this declined from a mean of 173 days in 1993 to 100 days in 1994.
The proportion of 510(k) applications reaching initial determination within 90 days of submission increased from 15.8 percent in 1993 to 32 percent in 1994 and 57.9 percent between October 1, 1994, and May 18, 1995. Clearly, since 1993, more 510(k) cases have been determined within 90 days, and the backlog of undetermined cases has been reduced. Because a review of the nature and complexity of the cases still open was beyond the scope of this study, we cannot predict with certainty whether, when these cases are ultimately determined, average review time for 1995 cases will be shorter than for cases submitted in 1993. 5. FDA time was reported in our draft report tables II.7 through II.12, and findings contrasting FDA time with non-FDA time were also included. Additional language addressing this distinction has been included in the text of the report. 6. FDA contends that 1989 was an atypical year for 510(k) submissions and therefore a poor benchmark. However, we do not believe that starting our reporting in 1989 introduced any significant bias into our report of the 510(k) workload. Indeed, our draft report concluded that the number of 510(k) submissions had “remained relatively stable” over the 1989-94 period. If we had extrapolated the data from the first 7-1/2 months of 1995 to a full year, we would have concluded that the current fiscal year would have a substantially lower number of 510(k) submissions (16 percent to 31 percent) than any of the previous 6 years. 7. The tier classification was created by FDA to manage its review workload; however, it was not our intention to evaluate or in any way assess the use of tiers for such purposes. The tier classification was based on “the potential risk and complexity of the device.” Accordingly, both class and tier provide a rough indication of a device's complexity. 8. We agree that our draft report aggregated original PMA submissions and PMA supplements in summarizing its findings. We have now disaggregated PMA statistics throughout. 9. We interpret the figures presented by FDA to represent the mean number of days elapsed between receipt (or filing) of a PMA submission and a given month for cases that have not been decided. We agree with FDA that the average review time for open original PMAs does not appear to have increased substantially since the beginning of calendar 1994 and that the average review time has decreased for PMA supplements since late 1994. A decrease in these averages is the product of either an increasing number of new cases entering the system, the closing out of older cases in the backlog, or both. Since the number of PMAs (originals and supplements) submitted in recent years has declined, the evidence suggests that the drop in average time for pending PMA supplements resulted from eliminating lengthy backlogged cases. 10. As noted earlier, assessing the impact of specific management initiatives is beyond the scope of this report. However, we do agree with FDA that the approval rate for initial IDE submissions more than doubled between 1994 and 1995; by our calculations, it increased from 25 percent to 54 percent. Robert E. White, Assistant Director; Bertha Dong, Project Manager; Venkareddy Chennareddy, Referencer; Elizabeth Scullin, Communications Analyst.
| Pursuant to a congressional request, GAO reviewed the Food and Drug Administration's (FDA) review of medical devices, focusing on how FDA review time has changed from fiscal year 1989 to May 18, 1995. GAO found that: (1) FDA review times for medical device applications remained stable from 1989 to 1991, increased sharply in 1992 and 1993, and dropped in 1994; (2) in 1994, the median review time for 510(k) applications was 152 days, which was higher than the median review time during 1989 through 1991; (3) the review time trend for original premarket approval (PMA) applications was unclear because many applications remained open; (4) the median review time for original PMA applications peaked at 984 days in 1992; (5) the review time trend for supplementary PMA applications fluctuated slightly in the first 3 years, peaked in 1992, and declined to 193 days in 1994; (6) in many instances, FDA placed 510(k) applications on hold while waiting for additional information, which comprised almost 20 percent of its total elapsed review time; and (7) the mean review time for investigational device exemptions was 30 days.
The Federal Reserve System is involved in many facets of wholesale and retail payment systems in the United States, including providing wire transfers of funds and securities; providing for the net settlement of check clearing arrangements, automated clearinghouse (ACH) networks, and other types of payment systems; clearing checks and ACH payments; and regulating certain financial institutions and overseeing certain payment systems. Responding in part to a breakdown of the check-collection system in the early 1900s, Congress established the Federal Reserve System as an active participant in the payment system in 1913. The Federal Reserve Act directs the Federal Reserve System to provide currency in the quantities demanded by the public and authorizes the Federal Reserve System to establish a nationwide check clearing system, which has resulted in the Federal Reserve System's becoming a major provider of check clearing services. Congress modified the Federal Reserve System's role in the payment system through the Monetary Control Act of 1980 (MCA). One purpose of the MCA is to promote an efficient nationwide payment system by encouraging competition between the Federal Reserve System and private-sector providers of payment services. The MCA requires the Federal Reserve System to charge fees for its payment services, which are to be set to recover, over the long run, all direct and indirect costs of providing the services. Before the MCA, the Federal Reserve System provided payment services to its member banks for no explicit charge. The MCA expanded access to Federal Reserve System services, allowing the Federal Reserve System to offer services to all depository institutions, not just member banks. Congress again expanded the role of the Federal Reserve in the payment system in 1987 when it enacted the Expedited Funds Availability Act. This act expanded the Federal Reserve Board's authority to regulate certain aspects of check payments that are not processed by the Federal Reserve System. Through specific regulatory authority and its general authority as the central bank, the Federal Reserve plays an important role in the oversight of the nation's payment systems. The Federal Reserve Board has outlined its policy regarding the oversight of private-sector clearance and settlement systems in its Policy Statement on Payment Systems Risk. The second part of this policy incorporates risk management principles for such systems. The Federal Reserve System competes with the private sector in providing wholesale payment services. Wholesale payment systems are designed to clear and settle time-critical and predominantly large-value payments. The two major wholesale payment systems in the United States are the Fedwire funds transfer system, owned and operated by the Federal Reserve System, and the Clearing House Interbank Payments System (CHIPS), which is owned and operated by the Clearing House Service Company LLC, a subsidiary of the New York Clearing House Association LLC (NYCHA), for use by the participant owners of the Clearing House Interbank Payments Company LLC (CHIPCo). Fedwire is a real-time gross settlement (RTGS) system through which transactions are cleared and settled individually on a continuous basis throughout the day. CHIPS began operations in 1970 as a replacement for paper-based payments clearing arrangements. Since January 22, 2001, CHIPS has operated as a real-time settlement system.
Payment orders sent over CHIPS are either simultaneously debited/credited to participants' available balances or netted and set off against other payment orders, with the resulting balance debited/credited against participants' available balances throughout the day. The transfer of balances into CHIPS and payments out occur via Fedwire. The Federal Reserve System oversees CHIPS' compliance with its Policy Statement on Payment Systems Risk. The size and aggregate levels of wholesale transactions necessitate timely and reliable settlement to avoid the risk that settlement failures would pose to the financial system. Although wholesale payments constitute less than 0.1 percent of the total number of noncash payment transactions, they represent 80 percent of the total value of these payments. Moreover, in 1999, the value of payment flows through the two major wholesale systems in the United States, Fedwire and CHIPS, was approximately 69 times the U.S. gross domestic product in that year. The Federal Reserve System also competes with the private sector in providing retail payment services. For example, the Federal Reserve System provides ACH and check clearing services. ACH systems are an important mechanism for high-volume, moderate- to low-value, recurring payments, such as direct deposit of payrolls; automatic payment of utility, mortgage, or other bills; and other business- and government-related payments. The Federal Reserve System also competes with private-sector providers of check clearing services. To do this, the Federal Reserve operates a nationwide check clearing service with 45 check processing sites located across the United States. The Federal Reserve System's market share of payment services as of year-end 1999 is represented in table 1. During forums held in May and June 1997 by the Federal Reserve System's Committee on the Federal Reserve System in the Payments Mechanism, committee members and Federal Reserve staff met with representatives from over 450 payment system participants, including banks of all sizes, clearing houses and third-party service providers, consumers, retailers, and academics. Although a few large banks and clearing houses thought the Federal Reserve System should exit the check collection and ACH businesses, the overwhelming majority of forum participants opposed Federal Reserve System withdrawal. Participants were concerned that the Federal Reserve System's exit could cause disruptions in the payment system. The Core Principles illustrates how the central banks see their roles in pursuing their objective of smoothly functioning payment systems. Further, the Core Principles outlines central banks' roles in promoting the safety and efficiency of systemically important payment systems that they or others operate. The laws of the countries we studied support this aspect of the Core Principles. These countries charge their central banks with broad responsibility for ensuring the smooth operation and stability of payment systems. In their basic role as banks, central banks generally are charged with acting as a correspondent bank for other institutions, providing accounts, and carrying out interbank settlements. Nonetheless, countries' laws vary regarding the specific roles a central bank should play in the payment system.
Central banks in the G-10 countries and Australia have endorsed the Core Principles, which sets forth 10 basic principles that should guide the design and operation of systemically important payment systems in all countries, as well as four responsibilities of the central bank in applying the Core Principles. (The principles and responsibilities are presented in app. II.) The overarching public policy objectives for the Core Principles are safety and efficiency in systemically important payment systems. Although the Core Principles generally is considered to apply to wholesale payment systems, some payments industry officials said that some payment systems that process retail payments could reasonably be considered systemically important because of the cumulative size and volume of the payments they handle. Providing for the safety of payment systems is mostly a matter of mitigating the risks inherent in the systems. These risks are listed and defined in table 2. Core Principle IV seeks to mitigate settlement risk by endorsing prompt final settlement, preferably during the day but, minimally, at the end of the day. The two major types of wholesale payment settlement systems are RTGS and multilateral netting systems. Recently, several hybrid systems have also been developed. (These two major types of systems are described further in app. III.) In general, multilateral netting systems offer greater liquidity because gross receipts and deliveries are netted to a single position at the end of the day. An institution can make payments during the day as long as its receipts cover the payments by the end of the day. However, multilateral netting systems without proper risk controls can lead to significant systemic risk. Because transactions are processed throughout the day but not settled until the end of the day, the inability of a member to settle a net debit position could have large unexpected liquidity effects on other system participants or the economy more broadly. RTGS systems rely on immediate and final settlement of transactions, and these systems have much less exposure to the systemic risk that could result from a settlement failure. However, without adequate provision of intraday credit, these systems can create liquidity constraints because they require that funds or credit be available at the time a payer initiates a transaction. Efficiency in payment systems can be characterized as both operational and economic. Operational efficiency involves providing a required level and quality of payment services for minimum cost. Cost reductions beyond a certain point may result in slower, lower-quality service. This creates trade-offs among speed, risk, and cost. Going beyond operational efficiency, economic efficiency refers to (1) pricing that, in the long run, covers all of the costs incurred and (2) charging those prices in a way that does not inappropriately influence the choice of a method of payment. The Core Principles sets forth four responsibilities of the central bank in applying the core principles, two of which address oversight functions. The first is that the central bank should ensure that the systemically important systems it operates comply with the Core Principles, and the second is that the central bank should oversee compliance with the Core Principles by systems it does not operate, and it should have the ability to carry out this oversight.
Therefore, the Core Principles affirms the importance of central banks' oversight responsibility for their countries' systemically important payment systems, including those that they do not own or operate. The laws of most of the countries we studied give the central bank broad responsibility for ensuring that payment systems operate smoothly. In addition, in their basic role as banks, central banks are generally charged with providing accounts to certain financial institutions and effecting interbank settlement. While some central banks are specifically charged with providing additional payment services or regulating private payment systems, others are not. Similarly, regulatory and oversight authority is not always specified in laws but is sometimes obtained through historical development and the broader mission of the central bank. The European Central Bank (ECB) is the central bank for the countries that have adopted the euro. In conjunction with the euro area countries' national central banks, the ECB oversees payment systems for the euro area and operates the Trans-European Automated Real-time Gross settlement Express Transfer (TARGET) system, the primary payment system for euro payments. The ECB's powers and responsibilities are similar to those of national central banks. We therefore analyzed the ECB along with countries' national central banks. In developing TARGET, the ECB set out strict rules regarding the national central banks' provision of payment services, requiring each central bank to provide an RTGS system, which serves as a local component of TARGET. The laws of Canada, France, Japan, and the United Kingdom cast the central bank as a monitoring entity having general powers to ensure that payment systems do not pose systemic risk. The central banks in those countries are not specifically charged with providing particular payment clearing services. However, as a matter of practice, the central bank in France, which plans to discontinue its check clearing service in 2002, will continue to operate services related to check fraud. Although Australia's law recognizes a limited role for the Reserve Bank of Australia to act as a service provider, the Reserve Bank of Australia's primary purpose regarding payment systems is to serve as an oversight and regulatory mechanism designed to control risk and promote the overall efficiency of Australia's financial system. German law authorizes the Bundesbank to furnish payment services, and the Bundesbank performs retail payment functions, including processing checks, credit transfers, and direct debits, as well as owning and operating RTGSplus, an RTGS hybrid system for wholesale payments. The central banks we studied have general authority to take actions to protect against systemic risk. In some cases, the banks are to serve a particular regulatory function. For example, under Canadian law, the central bank decides upon the qualifications of payment systems that it determines to pose systemic risk. However, except for Germany, Australia, and the United States, the laws of the countries we reviewed generally do not contemplate that the central bank is to regulate the provision of payment services for purposes unrelated to systemic risk. All of the central banks we studied provide settlement for wholesale payment systems. Moreover, these central banks participated in the design and development of, and have oversight over, wholesale payment systems. Most central banks play a role in providing these wholesale payment services.
However, as demonstrated by the central banks we studied, central bank involvement in wholesale payment systems varies. Some central banks have full ownership and operational involvement in the payment system; others have little operational involvement beyond settlement services. Other central banks participate in partnerships. In some cases, the central bank is a major provider or perhaps the only provider of wholesale payment services. The Federal Reserve System, as previously noted, is a major provider of wholesale payment services. Each of the central banks we reviewed has participated in the design and development of its country's wholesale payment system. For example, the Bundesbank collaborated in developing the RTGSplus system. The Bank of France played a major role in the development of France's systems. The Bank of England cooperated with the Clearing House Automated Payment System (CHAPS) in the development of a new system, NewCHAPS; the Bank of Canada assisted in the design and development of the Large Value Transfer System. In the G-10 countries, the first automated RTGS system was Fedwire in the United States, which is owned and operated by the Federal Reserve System. Although there are some net settlement systems for wholesale payments today, many countries are transitioning to RTGS systems. In Europe, various decisions over the past 5 to 10 years have encouraged current and potential euro area countries to develop national RTGS systems. The trend extends beyond Europe's boundaries, as countries worldwide are adopting RTGS systems. Central banks we studied played various roles in providing and overseeing wholesale payment services. All central banks provide key settlement services for wholesale payment systems. Some central banks own and operate wholesale payment systems that include clearance and settlement, while others provide only oversight and settlement, leaving clearance and other processing activities to other parties. There is no clear pattern in the roles played by central banks in clearing wholesale payments. In addition to the United States, two of the central banks we studied, the Bundesbank and the Bank of France, have full ownership of their respective wholesale payment systems. The Bundesbank owns and operates the RTGSplus system, which was developed with the input of the German banking industry. The Bundesbank has full control over the practices of the system for large-value payments. The Bank of France owns and manages Transferts Banque de France, an RTGS system that is one of the two wholesale payment systems in France. The Bank of France is also a joint owner of the company that owns and operates France's other wholesale payment system, which is a hybrid, real-time net settlement system. Although the Bank of France is only a partial owner of this system, it can exert considerable influence over it by virtue of its ownership role in the controlling company. The Bank of England is a member and shareholder of CHAPS Inc., which operates England's sterling and euro RTGS systems. Although the Bank of England does not own or manage any payment clearing system, CHAPS payments settle by transferring funds among participating institutions' Bank of England accounts. The Bank of England is the settlement bank for both CHAPS Sterling and CHAPS Euro. The Bank of Canada has a more limited operational role in its system.
The Bank of Canada entrusts the ownership and operation of the Large Value Transfer System (LVTS) to the Canadian Payments Association, which the Bank of Canada chairs. The Bank of Canada expressly guarantees settlement of LVTS in the event that more than one participant defaults simultaneously and losses exceed the available participant collateral. This guarantee is likened to “catastrophic insurance with a very large deductible,” with the latter being the collateral provided by the participants. Although the extent of central bank oversight over retail payment operations varies, central banks generally consider retail payments an important component of the payment system. As such, central banks have some responsibility for promoting well-functioning retail payment systems. The operational role of the central bank in retail payment clearing varies considerably among the countries we studied. The basic structure of retail payment systems depends largely on the structure of the underlying financial system and on the historical evolution of payment processes. Factors that influence central bank involvement in retail payment systems include the history and structure of the country's payment system and banking industry. While we identified several factors that influenced the involvement of a central bank in its country's retail payment system, these factors interact uniquely and occur to varying degrees in the systems we studied. Retail payments are generally lower in value and, from the perspective of the financial system, lower in urgency than wholesale payments, but they occur far more frequently. They typically include consumer and commercial payments for goods and services. Noncash retail payment instruments are generally categorized as paper-based (most commonly checks) or electronic (most commonly credit cards, credit transfers, debit cards, and direct debits). These payment instruments are further described in table 3. Central banks provide settlement for retail payments, but commercial banks also settle retail payments. Where the central bank provides settlement, it does so for “direct participants”—that is, institutions having settlement accounts at the central bank. Settlement of payments at the central bank sometimes requires tiering arrangements. Under these arrangements, “direct participants” settle payments through their accounts at the central bank, while indirect participants settle through accounts with a direct participant with whom they have a settlement arrangement. Such is the case with the Bank of England, which acts as a banker to the settlement banks that are direct members of the United Kingdom's primary payment clearing association. Settlement of retail payments may also occur through settlement agents, third-party arrangements, or correspondent accounts that institutions hold with each other for bilateral settlement. Although many central banks work to ensure that their retail payment systems are well-functioning, their approaches diverge. Some central banks play a prominent regulatory and operational role in retail payments and see these roles as keys to fostering well-functioning retail systems, while others assume more limited roles. Whatever the level of involvement in oversight or operations, most central banks consider retail payments an important component of the payment system and therefore assume some responsibility for promoting well-functioning retail payment systems. A number of structural factors influence the central bank's role in retail payments.
For example, the involvement of the central bank in check clearing can vary. In countries with a concentrated banking industry, on-us check clearing occurs with higher frequency. On-us checks are checks that are deposited at the same bank on which they are drawn, so that no third party, including the central bank, is required for clearing or settlement. For example, Canada has few banks, heavy check use, and little central bank involvement in clearing retail payments. On the other hand, the United States has a large number of banks, and its central bank is heavily involved in providing check clearing services. If a country has many smaller banks, such as savings, rural, and cooperative banks, there will be more need for some kind of retail clearance system, thereby creating greater potential need for central bank involvement. Identifying the extent to which payment preferences influence central bank involvement in clearing payments is difficult. Some have suggested that central banks in countries that rely heavily on paper-based instruments are more involved in clearing retail payments, and that central banks of countries that are more reliant on electronic payments provide fewer clearing services. Central banks involved in check clearing include those in Germany, France, and the United States. France and the United States rely heavily on checks for retail payments. In contrast, the Bundesbank is heavily involved in clearing a variety of retail payment instruments, but Germany is not particularly reliant on checks as a means of payment. The physical size of a country determines the distances that payment instructions might have to travel between the paying and the drawing banks. This has particular relevance in countries that rely heavily on paper-based instruments such as checks, which might have to be physically moved great distances to be processed. For example, this is the case in the United States, which is much larger than any European country. The United States currently has approximately 19,000 depository institutions. Canada, on the other hand, has far fewer financial institutions but is also physically large and uses checks extensively. Private-sector correspondent banks clear many checks and compete with the central bank. The central bank, however, is perceived as a reliable and neutral intermediary to clear payments and provide settlement on a large scale for a diverse set of institutions. Table 4 shows the relative importance of noncash payment instruments in selected countries. A central bank's role in the retail payment system reflects historical events and developments that have shaped retail payment systems in a particular country over many years. For example, the GIRO system serves as a primary retail payment mechanism in many European countries. The GIRO system was originally developed by the European postal agencies, rather than by banks. Historically, European banking systems were largely decentralized and in most cases highly regulated. Therefore, in the absence of an efficient payment system for retail payments developed by the banking industry, payers in most European countries turned to national institutions, such as the postal service, which offered credit transfers (so-called GIRO payments) through a nationwide network of branches. Commercial banks subsequently began to offer GIRO services. As a result of these events, many European countries have well-developed systems that do not rely on central bank clearing for credit transfers.
These systems were originally established by the public sector to respond to needs that were not being met by the private sector. Similarly, as previously noted, the Federal Reserve System was established to respond to events that pointed to the lack of a private remedy to market problems. We received comments on a draft of this report from the Board of Governors of the Federal Reserve System. These comments are reprinted in appendix IV. Board staff also provided technical comments and corrections that we incorporated as appropriate. We are sending copies of this report to the chairman of the House Subcommittee on Domestic Monetary Policy, Technology, and Economic Growth; the chairman of the Board of Governors of the Federal Reserve System; the president of the Federal Reserve Bank of Atlanta; and the president of the Federal Reserve Bank of New York. We will make copies available to others on request. Please contact me or James McDermott, Assistant Director, at (202) 512-8678 if you or your staff have any questions concerning this report. Other key contributors to this report are James Angell, Thomas Conahan, Tonita W. Gillich, Lindsay Huot, and Desiree Whipple. The objectives of this report are to (1) identify internationally recognized objectives for payment systems and central bank involvement in those systems, (2) describe the roles of central banks in the wholesale payment systems of other major industrialized countries and the key factors that influence those roles, and (3) describe the roles of central banks in the retail payment systems of other major industrialized countries and the key factors that influence those roles. In analyzing the roles of other central banks in payment systems, we focused on countries with relatively modern, industrialized economies. These countries included Australia, Canada, France, Germany, Japan, the United Kingdom, and the United States. To identify widely held public policy objectives for payment systems, we reviewed Core Principles for Systemically Important Payment Systems, which was developed by the Committee on Payment and Settlement Systems (CPSS) of the Bank for International Settlements. The CPSS established the Task Force on Payment System Principles and Practices in May 1998 to consider what principles should govern the design and operation of payment systems in all countries. The task force sought to develop an international consensus on such principles. The task force included representatives not only from G-10 central banks and the European Central Bank but also from 11 other national central banks of countries in different stages of economic development from all over the world, as well as representatives from the International Monetary Fund and the World Bank. The task force also consulted groups of central banks in Africa, the Americas, Asia, the Pacific Rim, and Europe. We also reviewed materials available on the Web sites of the central banks we studied; these sites often included mission statements, basic data, and authorizing statutes. We reviewed a variety of legal analyses and commentaries to analyze those statutes. Where we make statements regarding central banks' authorizing statutes, they are based on these sources rather than on our original legal analysis.
To describe the roles of central banks in the wholesale and retail payment systems of other major industrialized countries and the key factors that influence those roles, we reviewed materials available on central bank Web sites as well as other articles and publications from various central banks. We reviewed publications available from the Bank for International Settlements, and also the European Central Bank's Blue Book: Payment and Securities Settlement Systems in the European Union. We also reviewed numerous articles and commentaries on the roles of central banks as well as discussions of recent reform efforts. To enhance our understanding of these materials, we interviewed Federal Reserve officials, members of trade associations, and officials from private-sector payment providers. We conducted our work in Washington, D.C., and New York, N.Y., between June 2001 and January 2002 in accordance with generally accepted government auditing standards. The core principles for systemically important payment systems (core principles) are shown in table 5. The responsibilities of the central bank in applying the core principles are as follows: The central bank should define clearly its payment system objectives and should disclose publicly its role and major policies with respect to systemically important payment systems. The central bank should ensure that the systems it operates comply with the core principles. The central bank should oversee compliance with the core principles by systems it does not operate and should have the ability to carry out this oversight. The central bank, in promoting payment system safety and efficiency through the core principles, should cooperate with other central banks and with any other relevant domestic or foreign authorities. Different forms of settlement for wholesale payments result in different risks. Various wholesale payment systems in major industrialized countries use similar means to transmit and process wholesale payments, but they sometimes use different rules for settling those transactions. In general, wholesale payments are sent over separate, secure, interbank electronic wire transfer networks and are settled on the books of a central bank. That is, settlement is carried out by the exchange of funds held in banks' reserve accounts at a central bank. Some systems operate as real-time gross settlement (RTGS) systems, which continuously clear payment messages that are settled by transfer of central bank funds from paying banks to receiving banks. Other systems use net settlement rules, wherein the value of all payments due to and due from each bank in the network is calculated on a net basis before settlement. Each form of settling wholesale payments presents different risks to participants. Recently, some hybrid systems have been developed, building on the strengths and minimizing the risks associated with pure RTGS or netting systems. RTGS systems are gross settlement systems in which both processing and settlement of funds transfer instructions take place continuously, or in real time, on a transaction-by-transaction basis. RTGS systems settle funds transfers without netting debits against credits and provide final settlement in real time, rather than periodically at prespecified times.
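The liquidity and risk trade-off between these two designs can be illustrated with a small sketch. This is a simplified illustration under assumed conditions (three hypothetical banks, no queuing, collateral, or caps), not a model of any actual system: the same payment instructions are first settled gross and individually, as in an RTGS system, and then reduced to a single end-of-day net position per participant, as in a multilateral netting system.

```python
from collections import defaultdict

# Hypothetical interbank payment instructions: (payer, payee, amount).
payments = [("A", "B", 50), ("B", "C", 70), ("C", "A", 60), ("A", "C", 30)]

def rtgs_peak_liquidity(payments):
    """Peak intraday funding each bank needs when every payment settles
    immediately and individually (gross, no netting)."""
    balance = defaultdict(int)   # intraday position, starting from zero
    peak = defaultdict(int)      # deepest overdraft reached by each payer
    for payer, payee, amount in payments:
        balance[payer] -= amount
        balance[payee] += amount
        # A negative balance is an intraday overdraft that the central
        # bank would have to fund (capped or collateralized in practice).
        peak[payer] = max(peak[payer], -balance[payer])
    return dict(peak)

def multilateral_net(payments):
    """Single end-of-day net position per bank: receipts minus deliveries."""
    net = defaultdict(int)
    for payer, payee, amount in payments:
        net[payer] -= amount
        net[payee] += amount
    return dict(net)  # positions sum to zero across participants

print(rtgs_peak_liquidity(payments))  # {'A': 50, 'B': 20, 'C': 0}
print(multilateral_net(payments))     # {'A': -20, 'B': -20, 'C': 40}
```

For these four payments, gross settlement requires bank A to fund a peak intraday overdraft of 50, while netting leaves A with a single net debit of 20. That difference is the liquidity saving netting provides, and the uncovered net debit is precisely the exposure that makes settlement failure in a netting system a source of systemic risk, as discussed below.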
In most RTGS systems, the central bank, in addition to being the settlement agent, can grant intraday credit to provide the liquidity needed for the smooth operation of these systems. Participants typically can make payments throughout the day and only have to repay any outstanding intraday credit by the end of the day. Because RTGS systems provide immediate finality of gross settlements, there is no systemic risk—that is, the risk that the failure to settle by one possibly insolvent participant would lead to settlement failures of other solvent participants due to unexpected liquidity shortfalls. However, as the entity guaranteeing the finality of each payment, the central bank faces credit risk created by the possible failure of a participant who uses intraday credit. In the absence of collateral for such overdrafts, the central bank assumes some amount of credit risk until the overdrafts are eliminated at the end of the day. In recent years, central banks have taken steps to manage intraday credit more directly, including requiring collateral, capping intraday credit, and charging interest on intraday overdrafts. Fedwire was established in 1918 as a telegraphic system and was the first RTGS system among the G-10 countries. Presently, account tallies are maintained minute by minute. The Federal Reserve Banks generally allow financially healthy institutions the use of daylight overdrafts up to a set multiple of their capital and may impose certain additional requirements, including collateral. In 1994, the Federal Reserve System began assessing a fee for the provision of this daylight liquidity. Other central banks have only recently adopted RTGS systems and have established a variety of intraday credit policies, such as intraday repurchase agreements, collateralized daylight overdrafts, and other arrangements. Other networks operate under net settlement rules. Under these rules, the value of all payments due to and due from each bank in the network is calculated on a net basis, bilaterally or multilaterally. This occurs at some set interval—usually the end of each business day—or, in some newly developed systems, continuously throughout the day. Banks ending the day in a net debit position transfer reserves to the net creditors, typically using a settlement account at the central bank. Net settlement systems with delayed or end-of-day settlement enhance liquidity in the payment system because they allow payers to initiate a transaction without having the funds immediately on hand, provided the funds are available by final settlement. However, this can increase the most serious risk in netting systems, which is systemic risk. Recognizing that systemic risk is inherent in netting systems, the central banks of the G-10 countries formulated minimum standards for netting schemes in the Lamfalussy Standards. The standards stress the legal basis for netting and the need for multilateral netting schemes to have adequate procedures for the management of credit and liquidity risks. Although netting arrangements generally reduce the need for central bank funds, they also expose the participants to credit risk, as they implicitly extend large volumes of payment-related intraday credit to one another. This credit represents the willingness of participants to accept or send payment messages on the assumption that the sender will cover any net debit obligations at settlement.
The settlement of payments by the delivery of reserves at periodic, usually daily, intervals is therefore an important test of the solvency and liquidity of the participants. In recent years, central banks in countries using net settlement rules have taken steps to reduce credit risks in these systems as part of overall programs to reduce systemic risks. | The central banks of major industrialized countries have agreed on common policy objectives and presented them in the Core Principles for Systemically Important Payment Systems. Intended to help promote safer and more efficient payment systems worldwide, the Core Principles outline specific policy recommendations for systemically important payment systems and describe the responsibilities of the central banks. All of the central banks GAO studied seek to ensure that their wholesale payment systems operate smoothly and minimize systemic risk. All of the central banks provide settlement services for their countries' wholesale payment systems. Some central banks also provide wholesale clearing services. Other central banks own the system but have little operational involvement in clearing, while others participate in partnerships with the private sector. All of the central banks GAO studied provide settlement for some retail payment systems. Some, but not all, central banks exercise regulatory authority over retail payment systems in their countries. Central banks also tend to have less operational involvement in countries where there is a relatively concentrated banking industry. In some cases, laws governing payments and the structure of the financial services industry direct the involvement of central banks in retail payment systems.
Daylight Saving Time (DST) is a period of the year between spring and fall when clocks in the United States are set one hour ahead of standard time. It is not a new concept. In 1784, Benjamin Franklin, Minister to France, had the idea that during the part of the year when the sun rises while most people are still asleep, clocks could be reset to allow an extra hour of daylight during waking hours. He calculated that French shopkeepers could save one million francs per year on candles. In 1907, William Willett, a British builder, Member of Parliament, and fellow of the Royal Astronomical Society, proposed the adoption of advanced time. A bill based on his proposal was reported favorably, its supporters asserting that DST would move hours of work and recreation more closely to daylight hours, reducing expenditures on artificial light. After much opposition, however, the bill was not adopted. During World War I, in an effort to conserve fuel, Germany began observing DST on May 1, 1916. As the war progressed, the rest of Europe adopted DST. The plan was not formally adopted in the United States until 1918. "An Act to preserve daylight and provide standard time for the United States" was enacted on March 19, 1918 (40 Stat. 450). It both established standard time zones and set summer DST to begin on March 31, 1918. The idea was unpopular, however, and Congress abolished DST after the war, overriding President Woodrow Wilson's veto. DST became a local option and was observed in some states until World War II, when President Franklin Roosevelt instituted year-round DST, called "War Time," on February 9, 1942. It ended on the last Sunday in September 1945. The next year, many states and localities adopted summer DST. By 1962, the transportation industry found the lack of nationwide consistency in time observance confusing enough to push for federal regulation. This drive resulted in the Uniform Time Act of 1966 (P.L. 89-387). The act mandated standard time within the established time zones and provided for advanced time: clocks would be advanced one hour beginning at 2:00 a.m. on the last Sunday in April and turned back one hour at 2:00 a.m. on the last Sunday in October. States were allowed to exempt themselves from DST as long as the entire state did so. If a state chose to observe DST, the time changes were required to begin and end on the established dates. In 1968, Arizona became the first state to exempt itself from DST. In 1972, the act was amended (P.L. 92-267), allowing those states split between time zones to exempt either the entire state or that part of the state lying within a different time zone. The newly created Department of Transportation (DOT) was given the power to enforce the law. Currently, the following states and territories do not observe DST: Arizona, Hawaii, American Samoa, Guam, the Northern Mariana Islands, Puerto Rico, and the Virgin Islands. During the 1973 oil embargo by the Organization of the Petroleum Exporting Countries (OPEC), in an effort to conserve fuel, Congress enacted a trial period of year-round DST (P.L. 93-182), from January 6, 1974, to April 27, 1975. From the beginning, the trial was hotly debated. Those in favor pointed to the benefits of increased daylight hours in the winter evening: more time for recreation, reduced lighting and heating demands, reduced crime, and reduced automobile accidents. The opposition was concerned about children leaving for school in the dark. The act was amended in October 1974 (P.L.
93-434) to return to standard time for the period beginning October 27, 1974, and ending February 23, 1975, when DST resumed. When the trial ended in 1975, the country returned to observing summer DST (with the aforementioned exceptions). DOT, charged with evaluating the plan of extending DST into March, reported in 1975 that "modest overall benefits might be realized by a shift from the historic six-month DST (May through October) in areas of energy conservation, overall traffic safety and reduced violent crime." However, DOT also reported that these benefits were minimal and difficult to distinguish from seasonal variations and fluctuations in energy prices. Congress then asked the National Bureau of Standards (NBS) to evaluate the DOT report. In an April 1976 report to Congress, Review and Technical Evaluation of the DOT Daylight Saving Time Study, NBS found no significant energy savings or differences in traffic fatalities. It did find statistically significant evidence of increased fatalities among school-age children in the mornings during the four-month period January-April 1974 as compared with the same period (non-DST) of 1973. NBS stated that it was impossible to determine what proportion of this increase, if any, was due to DST. When the same data were compared between 1973 and 1974 for the individual months of March and April, no significant difference was found for fatalities among school-age children in the mornings. The DST schedule has been changed twice since the Uniform Time Act of 1966. In 1986, Congress enacted P.L. 99-359, which amended the Uniform Time Act by changing the beginning of DST to the first Sunday in April and leaving the end as the last Sunday in October. In 2005, Congress enacted P.L. 109-58, the Energy Policy Act of 2005. Section 110 of this act amended the Uniform Time Act by changing DST to begin the second Sunday in March and end the first Sunday in November. The act required the Secretary of the Department of Energy (DOE) to report to Congress on the impact of extended DST on energy consumption in the United States. In October 2008, DOE sent its report to Congress. After reviewing the DOE report, Congress retained the right under the law to revert DST to the 2005 time schedule. For more information on the legislation that changed DST, see CRS Report RL32860, Energy Efficiency and Renewable Energy Legislation in the 109th Congress, by [author name scrubbed]. Arizona (except for the Navajo Nation), Hawaii, Puerto Rico, American Samoa, Guam, the Northern Mariana Islands, and the U.S. Virgin Islands do not observe daylight saving time. DST is observed in approximately 70 countries, including most of those in North America and Europe. For a complete listing of countries that observe DST, please see the Worldtimezone.com and WebExhibits.org websites. The Department of Transportation, Office of the General Counsel, oversees and regulates DST. Under the Uniform Time Act, moving an area on or off DST is accomplished through legal action at the state level. Some states require legislation, whereas others require executive action such as a governor's executive order. Information on procedures required in a specific state may be obtained from that state's legislature or governor's office. If a state decides to observe DST, the dates of observance must comply with federal legislation. As of September 2015, 12 states were considering opting out of the Uniform Time Act of 1966. An area's time zone can only be changed by law.
Under the Standard Time Act of 1918, as amended by the Uniform Time Act of 1966, moving a state or an area within a state from one time zone to another requires DOT regulation. The governor or state legislature makes the request for a state or any part of the state; the highest county-elected officials may make the request for that county. The standard for deciding whether to change the time zone is the area's convenience of commerce. The convenience of commerce is defined broadly to consider such circumstances as the shipment of goods within the community; the origin of television and radio broadcasts; the areas where most residents work, attend school, worship, or receive health care; the location of airports, railways, and bus stations; and the major elements of the community's economy. After receiving a request, DOT determines whether it meets the minimum statutory criteria before issuing a notice of proposed rulemaking, which would solicit public comment and schedule a public hearing. Usually the hearing is held in the area requesting the change so that all affected parties can be represented. After the comment period closes, comments are reviewed and appropriate final action is taken. If the Secretary agrees that the statutory requirement has been met, the change would be instituted, usually at the next changeover to or from DST. A number of studies have been conducted on DST's impact on energy savings, health, and safety. Following are some recent examples from database searches, such as EBSCOhost, ProQuest, and ScienceDirect, including a few select sample reports that discuss the impacts of DST on the listed topic. This is not a comprehensive literature review. The first national study since the 1970s was mandated by Congress and conducted by the DOE in 2006 and in 2008: U.S. Department of Energy (2006), Potential Energy-Saving Impacts of Extending Daylight Saving Time: A National Assessment: "Total potential electricity savings benefits of EDST are relatively small. Total potential electrical savings of 1 Tera Watt-hour (TWh) are estimated (with an uncertainty range of ± 40 percent), corresponding to 0.4 percent per day for each day of EDST or 0.03 percent of electricity use over the year. The United States consumed 3,548 TWhs in 2004. Total potential energy benefits are small. Total potential primary energy savings are estimated from 7 to 26 Trillion Btu (TBtu), or 0.01 percent to 0.03 percent of total annual U.S. energy consumption." U.S. Department of Energy's Report to Congress (2008), Impact of Extended Daylight Saving Time on National Energy Consumption: "The total electricity savings of Extended Daylight Saving Time were about 1.3 TeraWatt-hour (TWh). This corresponds to 0.5 percent per each day of Extended Daylight Saving Time, or 0.03 percent of electricity consumption over the year." M.B. Aries and G.R. Newsham (2008), "Effect of Daylight Saving Time on Lighting Energy Use: A Literature Review," Energy Policy, 36(6), 1858–1866. "The principal reason for introducing (and extending) daylight saving time (DST) was, and still is, projected energy savings, particularly for electric lighting. This paper presents a literature review concerning the effects of DST on energy use. Simple estimates suggest a reduction in national electricity use of around 0.5%, as a result of residential lighting reduction. Several studies have demonstrated effects of this size based on more complex simulations or on measured data.
However, there are just as many studies that suggest no effect, and some studies suggest overall energy penalties, particularly if gasoline consumption is accounted for. There is general consensus that DST does contribute to an evening reduction in peak demand for electricity, though this may be offset by an increase in the morning. Nevertheless, the basic patterns of energy use, and the energy efficiency of buildings and equipment have changed since many of these studies were conducted. Therefore, we recommend that future energy policy decisions regarding changes to DST be preceded by high-quality research based on detailed analysis of prevailing energy use, and behaviours and systems that affect energy use. This would be timely, given the extension to DST underway in North America in 2007." M.J. Kotchen (2011), "Does Daylight Saving Time Save Energy? Evidence from a Natural Experiment in Indiana," The Review of Economics and Statistics, 93(4), 1172–1185: "Our main finding is that, contrary to the policy's intent, DST increases electricity demand." A. Huang and D. Levinson (2010), "The Effects of Daylight Saving Time on Vehicle Crashes in Minnesota," Journal of Safety Research, 41(6), 513-520: "Our major finding is that the short-term effect of DST on crashes on the morning of the first DST is not statistically significant." T. Lahti et al. (2010), "Daylight Saving Time Transitions and Road Traffic Accidents," Journal of Environmental and Public Health, 657167: "Our results demonstrated that transitions into and out of daylight saving time did not increase the number of traffic road accidents." Y. Harrison (2013), "The Impact of Daylight Saving Time on Sleep and Related Behaviours," Sleep Medicine Reviews, 17(4), 285-292: "The start of daylight saving time in the spring is thought to lead to the relatively inconsequential loss of 1 hour of sleep on the night of the transition, but data suggest that increased sleep fragmentation and sleep latency present a cumulative effect of sleep loss, at least across the following week, perhaps longer. The autumn transition is often popularised as a gain of 1 hour of sleep but there is little evidence of extra sleep on that night. The cumulative effect of five consecutive days of earlier rise times following the autumn change again suggests a net loss of sleep across the week. Indirect evidence of an increase in traffic accident rates, and change in health and regulatory behaviours which may be related to sleep disruption suggest that adjustment to daylight saving time is neither immediate nor without consequence." M.R. Jiddou et al. (2013), "Incidence of Myocardial Infarction with Shifts to and From Daylight Savings Time," The American Journal of Cardiology, 111(5), 631-635: "Limited evidence suggests that Daylight Saving Time (DST) shifts have a substantial influence on the risk of acute myocardial infarction (AMI). Previous literature, however, lack proper identification necessary to vouch for causal interpretation. We exploit Daylight Saving Time shift using non-parametric regression discontinuity techniques to provide indisputable evidence that this abrupt disturbance does affect incidence of AMI." P.L. 109-58, the Energy Policy Act of 2005 (introduced as H.R. 6), was enacted on August 8, 2005. Section 110 of this act amended the Uniform Time Act, changing the beginning of DST to the second Sunday in March and the ending date to the first Sunday in November. This is the only bill related to DST that has been enacted since 1966. 
Between the 95th and 109th Congresses, there were generally a few DST-related bills introduced each Congress. None of the bills were enacted. Illustrative examples include the following: H.R. 1646—To amend the Uniform Time Act of 1966 to modify the State exemption provisions for advancement of time. H.R. 4212—To direct the Secretary of Energy to conduct a study of the effects of year-round daylight saving time on fossil fuel usage. H.R. 3756—To establish a standard time zone for Guam and the Commonwealth of the Northern Mariana Islands, and for other purposes. S. 1999—Daylight Savings Time Amendments Act of 1991, to amend the Uniform Time Act of 1966 to extend the period of daylight savings time from the last Sunday of October to the first Sunday in November. H.R. 2636—A bill to amend the Uniform Time Act of 1966 to provide for permanent year-round daylight savings time. The Department of Transportation, Office of the General Counsel, oversees and regulates DST. The Naval Observatory also has useful information, as does NASA. | Daylight Saving Time (DST) is a period of the year between spring and fall when clocks in the United States are set one hour ahead of standard time. DST is currently observed in the United States from 2:00 a.m. on the second Sunday in March until 2:00 a.m. on the first Sunday in November. The following states and territories do not observe DST: Arizona (except the Navajo Nation, which does observe DST), Hawaii, American Samoa, Guam, the Northern Mariana Islands, Puerto Rico, and the Virgin Islands. |
WASHINGTON — Federal Reserve Chairman Ben S. Bernanke is poised to roll out another stimulus program — and risk the wrath of conservative critics.
Bernanke and his central bank colleagues gather Wednesday for a pivotal two-day session fresh off last week's disappointing jobs report, the latest in a string of signs that the economic recovery is faltering.
And with Europe, China and other nations taking steps to bolster their struggling economies in the face of a global slowdown, Washington and Wall Street expect the Fed to unveil plans for a third round of its controversial bond-buying program when the meeting ends Thursday.
Economists said the move could help boost the economy if the Fed focuses its efforts on the slowly improving housing market. By buying mortgage-backed securities, it could edge down already historically low borrowing rates.
Lower rates also could make banks more likely to lend because it would be easier to sell the loans to the Fed and avoid risks of losses. That, in turn, could help stimulate economic activity and help create jobs.
"There's a bigger bang for the dollar when you're hitting a market that's already showing signs of healing," said Diane Swonk, chief economist with Mesirow Financial.
Given expectations of some action, the Fed could trigger a market sell-off if it opts to do nothing.
"It would be a pretty big disappointment if the Fed did not act at this stage of the game," Swonk said. "The employment report was the game changer."
There are downsides to another round of so-called quantitative easing.
Lower interest rates hit elderly Americans hard by reducing the income they could draw from their savings, particularly certificates of deposit.
"It's badly pinching seniors who rely on CDs," said James Chessen, executive vice president of the American Bankers Assn., an industry trade group.
And those low rates, which the Fed has promised to extend at least through 2014, could be keeping businesses from borrowing now because they know they'll have access to cheap money for a long time.
The previous two rounds of bond buying have caused the Fed's balance sheet to more than triple to $2.8 trillion over the last four years, angering Republicans and fueling a conservative movement to dismantle the central bank.
Launching a stimulus program in the weeks before a presidential election also could lead Congress to take steps to rein in the Fed's authority, said Gregory D. Hess, an economics professor at Claremont McKenna College.
A new round of bond buying could be "another shovel full of dirt as the Fed digs its own grave as a politically independent institution," he said.
But with the economy still struggling to create jobs, Bernanke and other Fed officials have signaled that they're ready to act. And though short-term interest rates are near zero, another large-scale bond-buying effort would be the central bank's main weapon.
After the government reported last week that the economy added just 96,000 jobs in August, economists polled by Reuters said there was a 60% chance the Fed would launch another round of bond buying.
The first round, from March 2009 to March 2010, involved the purchase of $1.25 trillion in mortgage-backed securities, along with $200 billion in debt issued by government-backed agencies such as Fannie Mae and $300 billion in longer-term Treasury securities.
With the economy still struggling in November 2010, the Fed launched a second round of quantitative easing, purchasing $600 billion in Treasury securities through June 2011. 
||||| WASHINGTON — In September 1992, the Federal Reserve culminated a long-running effort to stimulate the sluggish economy by cutting its benchmark interest rate to 3 percent, the lowest level it had reached in almost three decades.
The cut was avidly sought by the administration of President George H. W. Bush, but it was not enough to change the course of the presidential election. Years later, Mr. Bush told an interviewer that the Fed’s chairman, Alan Greenspan, had cost him a second term by failing to act more quickly and more forcefully.
“I reappointed him and he disappointed me,” Mr. Bush said.
On Thursday, the Federal Reserve is poised to announce that it will once again seek to stimulate the economy in the middle of a presidential election season.
Fed officials insist that they do not consider politics in setting policy. But the imminence of the election makes it inevitable that the decisions reached by the Fed’s policy-making committee will be judged through a political lens.
Republicans have warned the Fed against taking action, and are seeking to impose new limits on its management of monetary policy. Some Democrats now echo Mr. Bush’s lament that new actions are too little, too late.
Experts say Fed officials are sensitive to the danger of a political reaction. But Randall S. Kroszner, a Fed governor from 2006 to 2009, said the Fed’s current chairman, Ben S. Bernanke, has concluded that the best defense of the Fed’s independence is to demonstrate its value by reaching decisions on the economic merits, then offering clear explanations to politicians and the public. “Any decision the Fed will make will make someone unhappy, but what you want out of an independent agency is a careful deliberative process,” said Mr. Kroszner, a professor of economics at the University of Chicago Booth School of Business.
“Providing as much substantive economic explanation as possible for the actions that the Fed is taking, that’s the best way to maintain the Fed’s independence,” he added.
Mr. Bernanke and other officials have argued in recent weeks that the economy needs help, and that the Fed still has the means to stimulate growth. Officials also are concerned about the combination of federal tax increases and spending cuts scheduled to take effect next year.
In a reminder of the government’s fiscal problems, Moody’s Investors Service reiterated Tuesday that it could downgrade federal debt if Congress did not reach a debt reduction deal. Interest rates on federal debt remain near record lows despite a downgrade last year by Standard & Poor’s, but a second downgrade could force some investors to reduce their holdings of Treasury securities.
Mr. Bernanke outlined the actions that the Fed was most likely to take in a speech last month at a conference at Jackson Hole, Wyo. The central bank could announce a new round of asset purchases, expanding its balance sheet for the third time since 2008. It also could announce its intent to keep its benchmark interest rate near zero beyond late 2014. Both are methods of reducing long-term interest rates, and encourage borrowing and riskier investments.
While some analysts have argued that the Fed is less likely to act in an election year, history offers evidence to the contrary.
The Fed has announced policy changes in September or October during 10 of the last 15 presidential election years, dating back to 1952. During the last presidential election, the Fed slashed interest rates repeatedly as it responded to the financial crisis. In 2004, the Fed raised rates in June, August and September.
“There is plenty of precedent for Fed action ahead of a presidential election,” Credit Suisse said in a research note reviewing that history.
The decisions have always been subject to political pressure. The central bank’s independence was never intended to be complete. The Fed’s governors are appointed by the president. Its chairman has lunch regularly with the Treasury secretary and meets occasionally with the president. Congress dictates the Fed’s mission and requires it to report regularly on its actions.
The Fed, then-chairman William McChesney Martin Jr. told Congress in 1957, “should be independent — not independent of government, but independent within the structure of government.” That meant, he said, having the freedom necessary to decide how best to meet the goals of national economic policy.
It has won that much autonomy only gradually. When Arthur Burns became Fed chairman in January 1970, President Richard Nixon said at the swearing-in ceremony, “I respect his independence. However, I hope that independently he will conclude that my views are the ones that should be followed.”
He was not joking. Two years later, Mr. Nixon pressured Mr. Burns to help the economy by printing vast amounts of money, contributing to an era of crippling inflation.
Beginning in the 1980s, Paul A. Volcker and his successor, Mr. Greenspan, successfully established the benefits of independent monetary policy in reducing the rate of inflation, giving the Fed more leverage to resist White House pressure.
During that same period, presidents and their advisers gradually stopped talking in public about monetary policy, concluding that it was counterproductive because the Fed was then forced to respond by demonstrating its independence.
The victory remained partial. When Leon Panetta, then White House chief of staff, said publicly in 1995 that the Fed should cooperate with efforts to revive the economy, his comments were quickly disavowed. But the Clinton administration privately pushed Mr. Greenspan to stimulate growth ahead of the 1996 election and rewarded those efforts by nominating Mr. Greenspan for a new term.
The current White House says that it does not comment on monetary policy. Mr. Bernanke met only three times with President Obama last year, and most recently met with the president at the end of May, although he continues to meet regularly with Treasury Secretary Timothy F. Geithner. Even Congressional Democrats have generally avoided public calls for the central bank to take new action.
Republicans have been openly critical. Mitt Romney, the Republican Party’s presidential nominee, has pledged to replace Mr. Bernanke. But in recent weeks Mr. Romney also has seemed to take a more measured tone in his remarks. He told Fox News last week that he doubted the benefits of any new Fed policies.
“I don’t think there’s any action that they are going to take that will have an immediate impact on the economy,” Mr. Romney said.
Mr. Bernanke has simply repeated the Fed’s long-standing mantra that it does not listen to politicians nor think about politics.
“Our job is to do the right thing for the economy irrespective of politics,” Mr. Bernanke said earlier this year. “We’re not paying any attention to election calendars or political debates. We’re looking at the economy.” ||||| If the world's investors are right, the Federal Reserve is about to take a bold new step to try to invigorate the U.S. economy.
And many expect the Fed to unleash its most potent weapon: a third round of bond purchases meant to ease long-term interest rates and spur borrowing and spending. It's called "quantitative easing," or QE.
Others foresee a more measured response when the Fed ends a two-day policy meeting Thursday. They think it will extend its timetable for any rise in record-low short-term rates beyond the current target of late 2014 at the earliest.
On one point few disagree: The Fed feels driven to act now because the U.S. economy is still growing too slowly to reduce high unemployment. The unemployment rate has topped 8 percent every month since the Great Recession officially ended more than three years ago.
In August, job growth slowed sharply. The unemployment rate did fall to 8.1 percent from 8.3 percent. But that was because many Americans stopped looking for work, so they were no longer counted as unemployed.
Chronic high unemployment was a theme Fed Chairman Ben Bernanke spotlighted in a speech to an economic conference in Jackson Hole, Wyoming, late last month. Bernanke argued that QE and other unorthodox Fed actions had helped ease borrowing costs and boosted stock prices.
Higher stock prices increase Americans' wealth and confidence and typically lead individuals and businesses to spend more.
In his speech, Bernanke cited research showing that the two previous rounds of QE had created 2 million jobs and accelerated economic growth. Still, he said persistently weak hiring remains "a grave concern" that inflicts "enormous suffering."
His remarks sent a clear signal that the Fed will do more.
"He had a sense of urgency in that Jackson Hole speech," said David Jones, chief economist at DMJ Advisors. "I think he is convinced that there is a need to do something."
Some critics, inside and outside the Fed, remain opposed to further bond buying. They fear that by pumping so much cash into the financial system, the Fed is raising the risk of high inflation in the future. And many don't think more bond purchases would help anyway because interest rates are already near record lows.
Some economists who doubt the Fed is about to begin more bond buying say the European Central Bank has eased some pressure on the Fed. Last week, the ECB announced a plan to buy unlimited amounts of government bonds to help lower borrowing costs for countries struggling with debts.
If the ECB's plan succeeds in bolstering Europe, the U.S. economy could benefit, too. Europe's financial crisis and recession have slowed the U.S. economy, in part by reducing European purchases of U.S. goods.
Some also think the Fed might be reluctant to launch a bond-buying program in the final two months of the presidential campaign. Many Republicans have been critical of the Fed's unconventional methods to boost the economy. After the financial crisis struck in 2008, the Fed bought more than $2 trillion in Treasury and mortgage-backed securities.
The Fed "is already a campaign issue, and enlarging its balance sheet will make it even more of one," argues Vincent Reinhart, chief economist at Morgan Stanley and a former top economist at the Fed. Reinhart thinks the Fed will prefer to wait until at least December before announcing more bond buying.
By then, he says, the Fed will have reviewed more employment data. The effect of Europe's debt crisis on the U.S. economy will be better known. And Congress' plans for addressing a U.S. fiscal crisis at year's end will be clearer. Without a budget deal, higher taxes and deep spending cuts will kick in next year.
If the Fed takes the more modest step Thursday of extending its timetable for any rate increase, many analysts think it would push its target date to mid-2015. The goal would be to lower borrowing rates by assuring investors that short-term rates will likely stay near zero even longer than previously thought.
Yet Bernanke's remarks in Jackson Hole about unemployment were so downbeat, and his defense of Fed bond purchases so strong, that many economists suspect a bond-buying program will be unveiled Thursday.
So do investors. In part because of anticipation of a QE3, they've boosted the Dow Jones industrial average nearly 2 percent in September, a month that's typically weak for stocks. On Tuesday, the Dow rose 69 points. And Treasury yields have dropped on expectations that a new Fed bond-purchase program would lower interest rates.
The concern Bernanke expressed in Jackson Hole followed a Fed policy meeting in which many officials felt more Fed action would "likely be warranted fairly soon" unless there was a "substantial and sustainable strengthening in the pace of the economic recovery," according to minutes of the meeting.
Friday's report that U.S. employers cut back sharply on hiring in August dimmed hopes of a strengthening job market.
If the Fed does unveil QE3, some economists think it might differ from the previous bond-buying programs. With its earlier purchases, the Fed announced a dollar amount and a time frame for the bonds it planned to buy.
This time, any new bond-purchase program might be more open-ended. Three regional Fed bank presidents (Eric Rosengren of Boston, James Bullard of St. Louis and Charles Evans of Chicago) have expressed openness to a program in which the Fed would buy bonds until the economy improved significantly and unemployment fell consistently, as long as inflation remained tame.
None of those officials now have a vote on the Fed's policy committee. But they take part in the committee discussions that would allow them to push the idea.
Jones of DMJ Advisors says he thinks open-ended bond purchases will be discussed at this week's policy meeting. Still, he expects the Fed to announce a more conventional bond-buying program of around $500 billion. That would be less than the $600 billion in bonds in QE2 and well below the $1.75 trillion in QE1.
In light of Bernanke's recent comments, Jones doesn't think the Fed wants to delay further support for the economy until the election is over. Neither does Diane Swonk, chief economist at Mesirow Financial.
"This will be an effort on the part of Fed officials to pull out as much firepower as they can," Swonk said. "They are trying for as much shock and awe as they can muster." | Markets have soared in the expectation that the Fed will announce fresh steps to boost the US economy at the conclusion of its two-day meeting tomorrow—making it all the more likely that a stimulus will be forthcoming. Fed Chairman Ben Bernanke is widely expected to announce a third round of bond-buying to pump cash into the system, a move known as "quantitative easing," the AP reports. Last month, Bernanke called persistently high unemployment "a grave concern" that inflicts "enormous suffering" and said the previous two rounds of easing had created 2 million jobs. But a new stimulus this close to a presidential election is sure to anger conservatives, and could result in moves to limit the Fed's authority. Another round of bond-buying would be "another shovel full of dirt as the Fed digs its own grave as a politically independent institution," an economics professor tells the Los Angeles Times. The most recent precedent is Alan Greenspan's decision to cut the Fed's benchmark interest rate to its lowest level in decades in September 1992, the New York Times reports. But the cut wasn't enough to get George HW Bush, whose administration had sought quicker and more forceful action, re-elected that November. |
Our objective was to assess IRS’ performance during the 1996 filing season, including some of IRS’ initiatives to modernize its processing activities. To achieve our objective, we interviewed IRS National Office officials and IRS officials in the Atlanta, Cincinnati, and Kansas City service centers who were responsible for the various activities we assessed; interviewed staff from the Department of the Treasury’s Financial Management Service (FMS) about the use of lockboxes to process Form 1040 tax payments; analyzed filing season related data from various IRS sources, including its Management Information System for Top Level Executives; visited four walk-in assistance sites (two in Atlanta and one each in Kansas City, MO, and Mission, KS) to interview staff and taxpayers; visited two banks in Atlanta and St. Louis that were being used by IRS as lockboxes to process tax remittances and analyzed cost/benefit data related to IRS’ use of lockboxes; reviewed data on the results of and costs associated with IRS’ decision to allow filers of paper returns to request direct deposits of their refunds; reviewed data on IRS efforts to identify and resolve questionable refund claims; reviewed computer system availability reports and periodically attended weekly operational meetings held by IRS’ Network and Operations Command Center in February, March, and April 1996; analyzed IRS’ toll-free telephone system accessibility data, telephone activity data for forms distribution centers, and accessibility reports for the IRS system (known as TeleFile) that enables some taxpayers to file their returns by telephone; reviewed data compiled by IRS, including the results of a user survey, on the performance of TeleFile; and reviewed relevant IRS internal audit reports. We did our work from January 1996 through September 1996 in accordance with generally accepted government auditing standards. We requested comments on a draft of this report from the Commissioner of Internal Revenue or her designated representative. On November 6, 1996, several IRS officials, including the Assistant Commissioner for Forms and Submission Processing, the National Director for Submission Processing, and the National Director for Customer Service (Planning and Systems), provided us with oral comments. Their comments were reiterated in a November 18, 1996, memorandum from the Acting Chief of Taxpayer Service. IRS’ comments are summarized and evaluated on pages 24 and 25. IRS also provided some factual clarifications that we have incorporated in the report where appropriate. Appendix I has data on 12 indicators that IRS uses to assess its filing season performance. These indicators relate to workload, such as the number of answered telephone calls from taxpayers who are seeking assistance; timeliness, such as the number of days needed to process returns or issue refunds; and quality, such as the accuracy of IRS’ answers to taxpayer questions and the accuracy with which IRS processes individual income tax returns and refunds. As shown in appendix I, IRS met or exceeded 11 of the 12 performance goals for the 1996 filing season and almost met the 12th goal (the number of forms-ordering calls answered). Two specific aspects of IRS’ filing season performance that are of particular interest to taxpayers and that were the source of problems in 1995 are (1) the level of taxpayer service being provided during the filing season, especially the ability of taxpayers to reach IRS by telephone, and (2) the timely issuance of refunds. 
In 1995, as in the past several years, taxpayers who sought answers to questions about the tax law or their accounts had considerable difficulty reaching IRS by telephone. In 1996, IRS improved its telephone accessibility while, at the same time, it reduced the availability of face-to-face services at its walk-in sites. Also, in 1995, millions of persons had their refunds delayed as a result of new IRS procedures for verifying the SSNs of dependents and EIC-qualifying children. The new procedures were designed to better ensure that persons were entitled to the dependents and EICs they were claiming. In 1996, IRS implemented revised case selection criteria that resulted in many fewer refund delays than in 1995. Sufficient information was not available when we completed our audit work to assess the impact of IRS’ revised procedures on the identification and correction of questionable SSNs. IRS officials have reaffirmed that service to taxpayers remains a primary goal. However, IRS took steps in 1996 to change the blend of methods that it uses to deliver that service. IRS placed more emphasis on providing telephonic and computer-oriented service (such as a new World Wide Web site on the Internet) while walk-in, face-to-face assistance was deemphasized. As a result, telephone accessibility improved while many walk-in sites either closed or offered a reduced level of service. An important indicator of filing season performance is how easily taxpayers who have questions are able to contact an IRS assistor on the telephone (i.e., telephone accessibility). In reports on past filing seasons, we discussed the difficulty taxpayers have had in reaching IRS over its toll-free tax assistance telephone line. Accessibility, as we define it, is the total number of calls answered divided by the total number of calls received. The total number of calls received is the sum of the following: (1) calls answered, (2) busy signals, and (3) calls abandoned by the caller before an assistor got on the line. By our definition, accessibility of IRS’ toll-free telephone assistance improved in 1996, although it was still low. From January 1 to April 20, 1996, IRS reported receiving about 114 million call attempts, of which about 23 million were answered—an accessibility rate of 20 percent. For the same period in 1995, IRS reported receiving about 236.0 million call attempts, of which 19.2 million (8 percent) were answered. As the data for 1995 and 1996 indicate, a major reason for the improved accessibility in 1996 was the significant drop in call attempts. IRS attributed that drop to (1) fewer refund delay notices being issued, as discussed in more detail later in this report, and (2) IRS’ efforts to publicize other information sources, such as its World Wide Web site on the Internet. “For the period January 1, 1996, to April 20, 1996, IRS received calls made by 46 million callers. IRS answered 23 million calls, or 50% of the callers. Of the 114 million total call attempts received, 23 million or 20% received an answer. The remaining 91 million attempts, often the result of redials, received a busy signal or were terminated by the callers because they did not want to wait in queue for an assistor. The total number of callers mentioned earlier was determined by discounting for redials. Therefore, the 114 million call attempts equates to 46 million callers. 
This is an average of 2.5 attempts per caller.” As IRS’ data indicate, the accessibility of IRS’ toll-free telephone assistance during the 1996 filing season, whether measured as a percentage of calls or callers, was still not good. For the 1996 filing season, IRS closed 93 sites that had previously provided walk-in assistance, reduced the operating hours of some of the 442 sites that remained open, and eliminated free electronic filing at many of the sites. According to IRS, the closed sites were selected on the basis of their historical volume of work and their proximity to other walk-in sites. As an indication of the effect of these closures and cutbacks, IRS data showed that (1) walk-in sites served about 2.8 million taxpayers from January 1 to April 20, 1996, which was about 17-percent fewer taxpayers than were served during the same period in 1995, and (2) about 59,000 electronic returns were filed at walk-in sites in 1996, compared with about 104,000 in 1995. Concerned about the reduction in walk-in service, the House and Senate conference agreement on the Treasury, Postal Service, and General Government appropriation for fiscal year 1997 included a provision that requires IRS to maintain the fiscal year 1995 level of service, staffing, and funding for taxpayer services. While noting that this provision does not mean that IRS should be required to rehire staff or reopen offices, the conference report said that “IRS should be very sensitive to the needs of the taxpayers” who use walk-in sites during the filing season. Walk-in sites provide various free services, including copies of more commonly used forms and publications, help in preparing returns, and answers to tax law questions. We visited four walk-in sites and asked taxpayers where they would go if the office were closed. Many taxpayers commented that they would go to another IRS office or a professional tax preparer for assistance, and that they would call the toll-free forms-ordering telephone number for forms or pick them up at a library or post office. As indicated by the persons with whom we spoke, there are other ways taxpayers can obtain the free services offered by walk-in sites, although maybe not as easily. For example, according to IRS, it generally takes from 7 to 15 workdays to receive materials that are ordered by telephone—longer if the materials are not in stock. Persons with access to a computer can download forms from the Internet or the FedWorld computer bulletin board. Free forms are also available at libraries and post offices and through IRS’ “fax on demand” service. Taxpayers who need help in preparing their returns and do not want to pay for that help may be able to take advantage of the tax preparation services offered at sites around the country that are part of the Volunteer Income Tax Assistance (VITA) and Tax Counseling for the Elderly (TCE) programs. According to IRS, these programs help older, disabled, low-income, and non-English-speaking individuals prepare their basic returns. IRS data for the 1996 filing season indicate that there was an increased demand for services at the VITA and TCE sites. The data showed that although the number of VITA and TCE sites around the country decreased by 513 compared with the 1995 filing season, about 71,000 additional taxpayers took advantage of the service. Taxpayers who need answers to tax law questions can call IRS’ toll-free tax assistance number or IRS’ TeleTax system, which has prerecorded information on about 150 topics. 
From January 1 to April 27, 1996, the number of tax law calls to TeleTax increased by about 11 percent over the same period in 1995 (i.e., 6.9 million in 1996 compared with 6.2 million in 1995). Still another option for free assistance is IRS’ World Wide Web site on the Internet. Among other things, IRS’ Web site includes copies of forms, information similar to that on TeleTax, and some interactive scenarios that taxpayers can use to help them answer some commonly asked questions. IRS reported that, as of May 1, 1996, its Web site had been accessed more than 52 million times since January 8, 1996, when it first became available. In 1995, IRS took several steps in an attempt to better ensure that persons were entitled to the dependents and EICs they were claiming. The most visible of those efforts involved the delay of about 7 million refunds to allow IRS time to verify SSNs, with an emphasis on returns claiming the EIC. The delays caused adverse reaction from taxpayers and tax return preparers during the 1995 filing season. Although IRS’ efforts in 1995 and the publicity surrounding those efforts appeared to have had a significant deterrent effect (e.g., according to IRS, 1.5 million fewer dependents were claimed in 1995 than were claimed in 1994), the efforts were not without problems. For example, although IRS identified about 3.3 million returns with missing or invalid SSNs and delayed any related refunds, it was able to pursue only about 1 million of those returns. For those cases it was unable to pursue, IRS eventually released any refunds, after holding them for several weeks, without resolving the problems. Also, IRS delayed about 4 million EIC-related refunds for taxpayers whose returns had valid SSNs to check for fraudulent use of the same SSN on more than one return. IRS eventually released almost all of those refunds, after several weeks, without doing the checks. For the 1996 filing season, IRS was more selective in deciding which cases to review and which refunds to delay. IRS tried to limit the number of delayed refunds to the volume of cases it could review and to focus its resources on the most egregious cases. The most significant change for the 1996 filing season was that IRS did not delay EIC refunds on returns with valid SSNs. IRS statistics on the number of refund delay notices sent to taxpayers in 1996, concerning dependent and EIC claims, indicated that IRS delayed far fewer refunds in 1996. As of September 6, 1996, IRS had mailed about 350,000 such notices compared with about 7 million in 1995. Another indicator that fewer refunds were delayed in 1996 is the decrease in the number of “where is my refund” calls to IRS. Taxpayers wanting to know the status of their refunds can call TeleTax and get information through the use of an interactive telephone menu. During the 1996 filing season, as of June 8, 1996, IRS reported receiving 48.2 million such calls, which was a decrease of about 15 percent from the 56.6 million it reported receiving for the same period in 1995. In contrast to the negative reaction from taxpayers and practitioners during the 1995 filing season, an executive of the largest tax preparation firm told us that IRS generally did a better job in 1996. The executive said that the firm’s clients received refunds quicker and received fewer notices about problems, such as SSN mismatches. 
Likewise, in March 28, 1996, testimony before the Oversight Subcommittee, a representative of the National Association of Enrolled Agents said the following: “Our members report they have encountered far fewer problems this year compared to last year in the area of refund processing . . . .” As part of IRS’ increased emphasis on verifying SSNs in 1995, the Examination function followed up on about 1 million returns that IRS’ computer, using certain criteria, had identified as having questionable SSNs. As of June 30, 1996, about 986,000 of those cases had been closed—about 500,000 (51 percent) with no change in tax liability and about 486,000 (49 percent) with changes totaling about $808 million. In 1996, IRS (1) revised the criteria used to select cases in an attempt to better focus its efforts and (2) identified about 700,000 returns for follow-up, which is about 300,000 fewer than in 1995. Because it takes time for IRS to complete its reviews, information on results was not available at the time we completed our audit work. Thus, we do not know the impact of IRS’ reduced level of effort in 1996. However, a decrease in the number of cases reviewed does not necessarily mean that IRS identified less noncompliance in 1996 than in 1995 because only about one-half of the cases reviewed in 1995 were productive. It is possible that IRS’ revised criteria, despite generating fewer cases, might have identified more productive cases in 1996. The SSN verification/refund delay efforts previously discussed were generally directed at identifying and correcting erroneous refunds caused by honest mistakes or negligence. Since the 1970s, IRS has had a Questionable Refund Program (QRP) directed at identifying fraudulent refund schemes. QRP results for January 1996 through September 1996 showed that IRS had identified 20,521 fraudulent returns (involving claimed refunds of about $55.4 million) during those 9 months. These results are a significant decline from the 59,241 returns and about $124.8 million in refunds reported for the first 9 months of 1995. QRP officials attributed the decline to three things. First, and most significant in their opinion, was a staffing reduction that was part of IRS’ cost-cutting efforts in anticipation of reduced funding levels. According to the officials, the 10 IRS service centers were allocated a total of about 379 full-time equivalent staff for the QRP in fiscal year 1996 compared with 553 full-time equivalent staff in 1995, which was a decrease of 31 percent. The other two reasons cited by the QRP officials were (1) the impact of enhanced upfront filters in the electronic filing system that prevented bad returns from getting into the system and (2) a decision to focus QRP efforts on certain kinds of cases. Although IRS was able to meet its processing goals (such as cycle time, processing accuracy, and refund timeliness) in 1996, those goals were based on expectations as to what IRS could achieve with the systems and procedures currently in place. In that regard, there is general agreement that much can be done to improve those systems and procedures. IRS has initiated several efforts toward that end, including (1) providing alternatives to the filing of paper returns, (2) using scanning and imaging technology to eliminate the manual data transcription of paper returns, and (3) using lockboxes and direct deposits to expedite the processing of tax payments and refunds, respectively. 
Despite IRS’ generally successful performance during the 1996 filing season, there are still several concerns centering around IRS’ modernization efforts. For example, although more returns were filed using alternatives to the traditional paper form, the number of returns filed through one of those alternatives (electronic filing) fell short of IRS’ projections. Also, although a document scanning and imaging system that was intended to streamline parts of IRS’ paper-processing operations performed better in 1996, the system still is not meeting IRS’ performance expectations and may eventually cost much more than originally estimated. Although data on the results of IRS’ use of lockboxes to process Form 1040 tax payments indicate that the government is saving money, those savings are being diminished significantly by the extra cost associated with having taxpayers send not only their payments but also their returns to the lockbox banks. Finally, expansion of the direct-deposit option for refunds to taxpayers who filed a paper return was not as widely received by taxpayers as IRS had anticipated. As of October 18, 1996, IRS had received about 118.1 million individual income tax returns, which was about 1.5 percent more than the 116.4 million returns received as of the same period in 1995. While the increase in the overall number of returns filed was small, the increase in the number filed through alternative methods was substantially higher than in 1995 (about 50 percent). IRS offers three alternatives to the traditional filing of paper returns (i.e., electronic filing, TeleFile, and Form 1040PC). As shown in table 1, most of the growth in alternative filings was due to TeleFile and Form 1040PC. Table 1 also shows that, of the three alternatives, only electronic filing failed to meet IRS’ projections. Electronic filing has several benefits. It enables taxpayers to receive their refunds sooner than if they had filed on paper and gives them greater assurance that IRS has received their returns and that the returns are mathematically accurate. The benefit for IRS is that electronic filing reduces processing costs and facilitates more accurate processing. IRS began offering electronic filing in 1986. Since that time, 1995 was the first year that the number of individual income tax returns received electronically decreased from the number received the prior year. IRS attributed that decline to the secondary effects of measures it implemented to combat filing fraud. IRS took several steps in an attempt to increase the use of electronic filing in 1996. For example, IRS (1) put increased emphasis on the availability of On-Line Filing, a program that allows taxpayers to file their returns, through a third party, via a personal computer-modem link, and (2) extended the period during which returns could be filed electronically by moving the closing date from August 15 (the filing deadline for taxpayers who get one extension to file) to October 15 (the filing deadline for taxpayers who get a second extension). Taxpayers’ use of electronic filing recovered somewhat in 1996—increasing to about 12.1 million individual income tax returns as of October 18 (about a 9-percent increase). According to IRS, a major contributor to this increase was growth in the Federal/State electronic filing program. Under that program, taxpayers can file both their federal and state income tax returns through one submission to IRS. 
A taxpayer’s federal and state data are combined into one electronic record that is transmitted to IRS, which, in turn, makes the state portion of the data available to the state. IRS reported that about 3.2 million returns were filed under the Federal/State program in 1996 compared with about 1.6 million in 1995. Some of the increase in electronic filing in 1996 was also due to the steps discussed in the preceding paragraph. According to IRS data, 158,284 taxpayers had used the On-Line Filing option as of October 18, and about 22,000 taxpayers had filed electronically between August 9 and October 18, 1996. Despite the increase in 1996, electronic filings that year were still below the 13.5 million individual returns filed electronically in 1994 and below IRS’ projection of about 13.6 million returns in 1996. A major impediment to the growth of electronic filing is that the method is not completely paperless. Taxpayers must send IRS their W-2s and a signature document (Form 8453) after their return has been electronically transmitted. IRS must then manually input these data and match them to the electronic return. In an attempt to eliminate the paper associated with electronic returns, IRS tested the use of digitized signatures during the 1996 filing season. The goal of that test was to gauge the willingness of taxpayers and preparers to use an electronic signature pad in place of signing a Form 8453. The electronic signature was attached to the electronic return and both were transmitted to IRS. The test was conducted at three locations (two VITA sites located on military bases and a private tax return preparation office). According to IRS officials, about 50 percent of the taxpayers who were offered the chance to participate in the test agreed to do so. Given the level of participation in 1996 and positive preparer feedback, IRS plans to expand the test in 1997, but details of that expansion will not be finalized until just before the filing season begins. Besides eliminating the paper associated with electronic returns, there are other steps IRS could take to increase the use of electronic filing. In October 1995, we reported that without some dramatic changes in IRS’ electronic filing program, many of the benefits available from electronic filing could go unrealized. We recommended that IRS (1) identify those groups of taxpayers that offer the greatest opportunity to reduce IRS’ paper-processing workload and operating costs if they filed electronically and (2) develop strategies that focus on eliminating or alleviating impediments that inhibit those groups from participating in the program. As of October 9, 1996, IRS was finalizing a new electronic filing strategy. TeleFile generally provides the same benefits to taxpayers and IRS as electronic filing. However, TeleFile is more convenient and less costly than electronic filing because the latter requires that taxpayers go through a third party. The increase in taxpayer use of TeleFile in 1996 was due primarily to the program’s expansion nationwide. As shown in table 1, IRS received about 2.8 million TeleFile returns in 1996, when TeleFile was available to taxpayers in 50 states, compared with 680,000 in 1995, when TeleFile was available in only 10 states. Although most of the increase was due to the program’s nationwide expansion in 1996, TeleFile use also showed a significant rate of increase in the 10 states that were in the program in 1995 (from 680,000 returns in 1995 to 804,732 in 1996—an 18-percent increase). 
A major change that might have contributed to the increase in TeleFile use was IRS’ decision to make TeleFile paperless in 1996. Unlike past years, taxpayers did not have to mail their W-2s or a signature document to IRS. Instead of the signature document, taxpayers used a personal identification number that was provided by IRS. IRS’ Internal Audit Division reviewed the 1996 TeleFile Program and concluded that management had “effectively prepared for and successfully implemented” the nationwide expansion of TeleFile. For example, Internal Audit noted that (1) its sample of returns filed through TeleFile showed that all tax calculations were correctly computed and that data had been posted accurately to IRS’ master file of taxpayer accounts and (2) taxpayer demand for TeleFile during the 1996 filing season was generally met. However, Internal Audit also noted that IRS had not completed a system security certification and accreditation and thus had no assurance that taxpayer data were adequately secured. According to Internal Audit, certification is a comprehensive evaluation of a system’s security features; accreditation is a declaration that the system is approved to operate. As of November 21, 1996, according to the TeleFile Project Manager, IRS was working to complete the certification and accreditation. Internal Audit’s evaluation and various statistics compiled by IRS, including the results of an IRS survey of TeleFile users, indicate that TeleFile worked very well in 1996. For example, about 92 percent of the users surveyed by IRS said that they were very satisfied with TeleFile. However, it is important to note that only about 10 to 14 percent of the more than 20 million 1040EZ filers who IRS estimated would be eligible to use the system in 1996 actually used it. IRS did not survey the nonusers because, according to IRS officials, past surveys showed that the most important reason eligible users cited for not using TeleFile was their preference for a paper version. However, those past surveys did not probe into why nonusers preferred paper. According to the TeleFile Project Manager, IRS plans several changes to TeleFile for the 1997 filing season, which he estimates will increase the participation rate to about 25 percent. For example, he said that eligibility to use TeleFile will be extended to married persons filing jointly and TeleFile users will be able to take advantage of the direct-deposit option that was available to other taxpayers in 1996 (this option is discussed later in this report). The most significant change for 1997, in terms of its potential impact on taxpayer participation, is IRS’ decision to revise the tax package sent to persons eligible to use TeleFile. Instead of sending eligible users a package that also contains a Form 1040EZ and related instructions, in case they choose not to use TeleFile, IRS has decided to send them a much smaller package that contains only the TeleFile worksheet and instructions. Although this action may encourage more persons to use TeleFile and reduce IRS’ overall printing and mailing costs, it could be seen as imposing a burden on persons who, for whatever reason, prefer not to use TeleFile and would, in that case, need a Form 1040EZ. It is unclear how taxpayers will react to this change. 
On the one hand, IRS summaries of three 1040EZ/TeleFile focus groups held in August and September 1996 indicated that focus group participants did not view the noninclusion of Form 1040EZ as a burden because they could easily get a copy, if needed, from their local library or post office. On the other hand, a mail survey that IRS sent to a random number of TeleFile users in 1996 showed that about 28 percent of the respondents thought it was very important that the 1040EZ information be included in the TeleFile package. The increase in the use of Form 1040PC during the 1996 filing season resulted, in part, from the largest user’s (a tax return preparation firm) rejoining the program after dropping out in 1995. For the 1995 filing season, IRS initially required that preparers provide taxpayers with a specifically formatted legend explaining the Form 1040PC. However, after the 1995 filing season began, IRS decided not to require the specifically formatted legend but to allow preparers to provide any type of descriptive printout that explained each line on the taxpayer’s Form 1040PC. According to an executive of the previously mentioned tax return preparation firm, (1) the firm chose not to participate in the program in 1995 rather than comply with the requirement for a specifically formatted legend and (2) IRS’ decision to change its requirement came too late for the firm to change its plans. The firm then rejoined the program for the 1996 filing season. The Form 1040PC was developed to reduce the number of pages that a standard Form 1040 requires, which is a benefit to taxpayers and IRS, and to streamline paper processing. Although use of the Form 1040PC reduces the amount of paper, IRS has not yet realized the full processing efficiencies available from that form. Because of problems encountered with IRS’ new document scanning and imaging system, as discussed in the next section of this report, IRS terminated plans to have Forms 1040PC scanned and, instead, is manually keying data from the forms into its computers. The Distributed Input System (DIS), which is IRS’ primary data entry system for paper tax returns and other paper documents submitted by taxpayers, has been in operation since 1984. Although DIS generally performed without major problems during the 1996 filing season, its age is a source of concern within IRS. IRS had planned to replace DIS with two document scanning and imaging systems. The first replacement system, the Service Center Recognition/Image Processing System (SCRIPS), was implemented nationwide in 1995 and still is not performing to the expectations IRS had at that time. On October 8, 1996, IRS announced that the second planned system, the Document Processing System (DPS), was being terminated. IRS experienced significant performance problems with SCRIPS in 1995, which was the system’s first year of nationwide operation. Two major problems were significant system downtime and slow processing rates. IRS made some hardware and software modifications that helped improve the performance of SCRIPS during the 1996 filing season. IRS officials in all five SCRIPS service centers told us that SCRIPS performed significantly better during the 1996 filing season than it did in 1995. Specifically, IRS data for April through June of 1995 and 1996 (the first 3 months for which IRS had comparable data) indicate that system downtime decreased from 791 hours in 1995 to 43 hours in 1996. 
Despite the improved performance in 1996, SCRIPS (1) is still not processing all of the forms that it was expected to process and (2) may cost more than originally estimated. In an October 1994 business case for SCRIPS, IRS said that, by 1996, the system would be processing all Federal Tax Deposit coupons and information returns, all Forms 1040EZ, 50 percent of the Forms 1040PC, and 93 percent of the Forms 941 (Employer’s Quarterly Federal Tax Return). In fiscal year 1996, SCRIPS processed all Federal Tax Deposit coupons and information returns, as expected. However, SCRIPS only processed about 50 percent of the Forms 1040EZ and did not process any Forms 1040PC or Forms 941. In addition, the cost estimate for SCRIPS has increased from $133 million in October 1992 to a current estimate of $288 million. Part of the increase is due to the inclusion of certain costs, such as for maintenance, that were not part of the original estimate. We will be issuing a separate report that has more information on SCRIPS’ problems in 1995, its performance in 1996, and IRS’ plans for the system in the future. A second scanning system, DPS, was to replace SCRIPS and expand IRS’ imaging capability to more complex tax forms. IRS expected DPS to begin handling some of the DIS workload by the start of the 1998 filing season. However, due to concerns about the future of DPS, IRS reassessed its strategy for processing paper tax returns. According to IRS, part of the reassessment involved options, such as outsourcing the processing of some returns and/or acquiring a new manual data entry system to replace DIS. As of September 26, 1996, according to a cognizant IRS official, the reassessment was done but a final decision had not yet been reached. That reassessment took on added importance when IRS announced, on October 8, 1996, that DPS was being terminated. IRS attributed that decision, at least in part, to budgetary concerns and “the need to prioritize investments in systems that have a direct and immediate benefit on improved customer service, such as better telephone access.” The uncertainty of IRS’ plans for processing paper returns means that IRS may have to continue to rely on DIS longer than it had originally expected. In a February 1996 report, Internal Audit said that DIS could be required to process forms until 2003. Over the course of the 1996 filing season, various service center officials expressed concern about IRS’ ability to adequately maintain and repair the system. Despite their concerns, DIS performed satisfactorily during the filing season. Officials also told us that, until this year, IRS had not kept detailed maintenance records to capture DIS downtime. Thus, an accurate comparison of DIS downtime and system reliability over the years is not possible. We recently began a review of IRS’ ability to maintain current operating levels with its existing systems. IRS envisions that by 2001, most tax payments will be processed by lockbox banks rather than by IRS service centers. The banks process the payments and transfer the funds to a federal government account. The payment and payer information are then recorded on a computer tape and forwarded to IRS for use in updating taxpayer accounts. One reason for using lockboxes is the expectation that tax payments will be deposited faster into the Treasury. Faster deposits mean that the government has to borrow less money to fund its activities and less borrowing means lower interest costs (otherwise known as “interest cost avoidance”). 
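Interest cost avoidance is simple time-value arithmetic: each day a payment is deposited sooner is a day the Treasury does not have to borrow that amount. The following is a minimal sketch in Python, assuming a hypothetical 5 percent annual borrowing rate and a one-day deposit acceleration; neither figure comes from IRS or FMS, and both are illustrative only.

    def interest_cost_avoidance(deposits, annual_rate, days_faster):
        # Interest the Treasury avoids paying when `deposits` dollars
        # become available `days_faster` days sooner, at an annual
        # borrowing cost of `annual_rate`.
        return deposits * annual_rate * days_faster / 365.0

    # Example: $100 billion in Form 1040 payments deposited 1 day sooner
    # at an assumed 5 percent borrowing rate (both inputs hypothetical).
    print(interest_cost_avoidance(100e9, 0.05, 1))  # about $13.7 million

Under those assumed inputs, the result is on the same order as the $15.7 million in fiscal year 1996 interest cost avoidance that FMS reported (discussed below); the actual FMS figure depends on real deposit volumes, float reductions, and Treasury borrowing rates.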
Since 1989, IRS has used lockboxes to process payments sent in with estimated tax returns (Forms 1040ES). For the last several years, IRS has been testing the use of lockboxes to process payments sent in by individuals when they file their income tax returns (Forms 1040). For the 1996 test, IRS sent special Form 1040 packages to specific taxpayers. These packages included (1) mailing instructions and (2) a payment voucher that could be scanned by optical character recognition equipment. The test packages contained one return envelope with two different tear-off address labels. One label, which was addressed to a lockbox, was to be used for a return with an accompanying tax payment, and the other label, which was addressed to a service center, was to be used for a return with no payment. Taxpayers with payments were instructed to put their returns, payments, and vouchers in the envelope in their tax packages and to affix the label addressed to the lockbox. The bank that serviced the lockbox was to separate the returns from the payments, deposit the payments, record the payment information on a computer tape, sort the returns, and forward the returns and the computer tape to IRS for processing. IRS had tested another mailing method during the 1994 and 1995 filing seasons. This test involved the use of two envelopes. One envelope was addressed to a service center, and the other envelope was addressed to a lockbox. Taxpayers were instructed to put their tax returns in the envelope addressed to the service center and to put any payments and vouchers in the envelope addressed to the lockbox. The bank was to process the payments and vouchers as previously described. IRS has decided, for the 1997 filing season, to continue testing the two-label method in certain tax packages. According to an IRS official responsible for the lockbox program, IRS will no longer use the two-envelope approach due to the increased taxpayer burden IRS anticipates the approach would cause. She explained that IRS has found, in its studies of taxpayer behavior, that, among other things, taxpayers who participated in the test preferred to keep their remittances and returns together. Because of this, IRS believes that asking taxpayers to split their tax payments from their returns is burdensome. The studies referred to by IRS, all of which were done by a contractor in 1993 and 1994, included mail and telephone surveys of about 1,900 taxpayers, interviews with 46 individuals, and 5 taxpayer focus groups. We reviewed the contractor’s reports and considered the results to be inconclusive as they related to burden. For example, of the people surveyed by mail and telephone who said they remembered what they did in the test, 45.9 percent said that they felt uneasy about mailing their checks and returns in separate envelopes while 41.2 percent said that they did not feel uneasy (the other 12.9 percent did not know). The results of the 46 interviews showed a similar lack of consensus, in our opinion. Several people said that they preferred using one envelope because it was easier or because they were worried about the payments and the tax returns not getting linked if they were sent to two different places. But, several other people said that they preferred using two envelopes because they were concerned about the confidentiality of their tax returns or the increased risk of their returns getting lost. Even some of those who preferred one envelope expressed concern about the banks’ involvement in handling their returns. 
Burden is one issue to consider in deciding on the use of lockboxes; cost is another. Information we received from IRS and FMS indicates that having taxpayers send their returns to the lockboxes along with their payments has substantially increased the cost of the lockbox service to the government. During the first 8 months of the 1996 filing season, according to IRS, the lockbox banks had processed about 7 million Form 1040 payments. According to FMS, the government paid the banks an average of $2.03 per payment in 1996—98 cents to process each payment, 92 cents to sort each accompanying tax return, and 13 cents to ship each return to a service center—and the same fees will be in effect until April 1, 1997. Fees after that date are subject to negotiation between FMS and the banks. Cognizant FMS staff said that the banks have been charging such a high fee for sorting returns to encourage IRS to stop having the returns sent to the banks. Service centers process returns received from a lockbox bank in the same manner as they process returns that come directly from taxpayers, with one exception—the returns coming from the bank do not have to be sorted by IRS. According to IRS data, not having to sort the returns saves IRS about 37 cents a return—much less than the 92 cents per return being charged by the banks. Thus, assuming a volume of 7 million returns, the government paid about $6.4 million for a service (return sorting) that it could have done itself for about $2.6 million, or about $3.8 million less. Shipping those returns cost the government another $910,000. According to FMS, the use of lockboxes to process Form 1040 tax payments enabled the government to avoid interest costs of $15.7 million in fiscal year 1996. This interest cost avoidance compares with $1.6 million in fiscal year 1995. Because these savings result from faster processing of tax payments, having the banks sort and ship the tax returns does not add to the savings and could, by increasing the banks’ workload, cause processing delays that would reduce any savings. In an August 30, 1996, letter to Treasury’s Fiscal Assistant Secretary, IRS’ Deputy Commissioner acknowledged the high costs associated with having returns sent to lockboxes. In a September 11, 1996, reply, the Assistant Secretary also expressed some concern about the costs associated with the processing of Form 1040 tax payments through lockboxes. The Assistant Secretary said that “[t]he most appealing option from a cost standpoint is the two-envelope concept. This option . . . makes good business sense as tax payments and tax returns are sent to the appropriate place best prepared to handle them.” As a way to lower costs, the Assistant Secretary suggested that IRS explore the possibility of not having the banks sort the returns and have the sorting done by the service centers. We discussed this option with officials in IRS’ National Office and at one service center. We were told that it would be difficult for service centers to sort the returns once they had been separated from the payments because the service center would not know if the taxpayer had fully paid his or her tax liability. According to the IRS officials, that distinction is important because, as previously discussed, returns involving less than full payment are given priority processing to enable more timely issuance of the balance-due notice to the taxpayer. IRS had considered adding a checkbox on the return for the taxpayer to indicate whether full payment was enclosed with the return. 
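The arithmetic behind the cost figures above can be verified directly from the per-return volume and fees cited. The short Python sketch below is purely illustrative; the variable names are ours, and it reconstructs the numbers in this passage rather than describing any system IRS or FMS actually used:

    # Figures cited above, for the first 8 months of the 1996 filing season.
    # All fees are in cents so the arithmetic stays exact integer math.
    returns = 7_000_000          # Form 1040 payments processed by lockbox banks
    bank_sort_fee_cents = 92     # what the banks charged to sort each return
    irs_sort_cost_cents = 37     # what sorting a return costs IRS itself
    shipping_fee_cents = 13      # what the banks charged to ship each return

    bank_sorting = returns * bank_sort_fee_cents // 100   # $6,440,000 (about $6.4 million)
    irs_sorting = returns * irs_sort_cost_cents // 100    # $2,590,000 (about $2.6 million)
    extra_sorting = bank_sorting - irs_sorting            # $3,850,000 (about $3.8 million)
    shipping = returns * shipping_fee_cents // 100        # $910,000

    # Total extra cost of the one-envelope approach: $4,760,000, matching the
    # "about $4.7 million" cited later in this report (which rounds the
    # components to $3.8 million plus $0.9 million).
    print(extra_sorting + shipping)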
According to IRS, asking taxpayers to check such a box would be another form of burden—although not a significant one. Security is a third issue that needs to be considered in deciding how to use lockboxes. As previously noted, several individuals who participated in the focus groups and interviews about IRS’ use of lockboxes expressed concern that their returns would be lost or their return data would be misused. We did not do a thorough analysis of security at the lockbox banks. However, we reviewed security and processing procedures at 2 of the 10 lockbox banks and found that controls exist to minimize the risk of lost or misused tax data. IRS’ lockbox procedures require that the tax returns be separated from the payment as soon as the envelope is opened. Only personnel who open the envelopes and their supervisors are to have access to the returns. Security cameras are to monitor all of the lockbox processing. The returns are to be bundled and packed into boxes as soon as they are separated from the payment. Each day, the boxes of returns are to be shipped by bonded courier to the service center. Background checks, such as a criminal record check, are to be done on lockbox personnel hired by the bank. These are the same checks that are to be done on IRS service center personnel with the same duties. Bank personnel, like service center employees, are to sign statements of understanding about the confidentiality of the information they will process and the penalties for disclosing any of this information. IRS and FMS lockbox coordinators are to visit the banks to ensure compliance with these procedures and are to submit quarterly reports on the basis of those visits. An FMS staff person who was responsible for IRS’ lockbox processing program told us there have been no known incidents of disclosure of taxpayer information from a lockbox bank. During our visits to the two banks, we observed the security-surveillance cameras in operation and verified that badges were being worn by all personnel and that access to the processing area was controlled by a guard. We also reviewed judgmental samples of personnel files and, for each employee whose file we reviewed, we (1) found that disclosure statements were maintained and (2) saw evidence that background checks had been done. Unlike past years, IRS allowed taxpayers who filed paper returns in 1996 to request that their refunds be deposited directly to their bank account through an electronic fund transfer. IRS included a Form 8888 (Request for Direct Deposit of Refund) in almost all paper tax packages. IRS estimated that about 5 million taxpayers who filed paper returns would request the direct-deposit option and, on average, that the option would enable paper filers to get their refunds 10 days faster than if they had waited for a paper check. IRS also estimated that it would cost about 25 percent less to process a Form 8888 than it costs to mail a paper refund check (20 cents per form v. 27 cents per paper check). Only about 1.6 million taxpayers took advantage of the direct-deposit option. An IRS official said that IRS will retain its goal of about 5 million direct-deposit refunds for the 1997 filing season. IRS has taken a couple of steps to enhance its chances of achieving that goal. Most significantly, it has eliminated the Form 8888. Instead of having a separate form, most of the individual income tax forms will be revised to provide space for the taxpayer to request a direct deposit and to provide the necessary bank account information. 
Also, as previously noted, TeleFile users will be able to request a direct deposit in 1997. Was the 1996 filing season a success? The answer depends on one’s perspective. From IRS’ standpoint, it was a success. IRS met or exceeded all but one of its performance goals and was very close to meeting the other. IRS was able to process individual income tax returns and refunds without any apparent problem, with its aging computer systems having made it through another filing season. From the taxpayer’s perspective, the filing season was also successful in many key respects. For example, relatively few refunds were delayed in 1996, unlike 1995 when millions of taxpayers were angered by IRS’ decision to delay their refunds while it checked dependent and EIC claims; more taxpayers were given the opportunity to file by telephone and to have their refunds directly deposited into their bank accounts; and IRS’ World Wide Web site on the Internet provided a convenient source of information for taxpayers with access to a computer. However, there were some problems in 1996. Although the accessibility of IRS’ toll-free telephone assistance improved, taxpayers continued to have problems reaching IRS by telephone, and some taxpayers may have been inconvenienced by the reduction in IRS’ walk-in services. IRS has several efforts under way to modernize the systems and procedures it has used for many years to process returns, remittances, and refunds. These efforts are essential if IRS is to successfully meet the demands of future filing seasons. To date, the results of those efforts have been mixed. IRS has taken steps to enhance its efforts. For example, IRS is (1) expanding eligibility for TeleFile and taking other steps in an effort to increase the use of that filing alternative, (2) working to make electronic filing paperless by broadening its test of digitized signatures, (3) making it easier for taxpayers to request direct deposits of their refunds, and (4) reassessing its strategy for processing paper tax returns. Even if IRS is successful in increasing the TeleFile participation rate to 25 percent in 1997, that would still leave a large number of eligible users who choose not to use TeleFile. We believe that IRS’ efforts to expand the use of TeleFile could be enhanced if it had more specific information on why eligible users prefer to file on paper. More specifics might help IRS identify barriers to TeleFile use and develop mitigating strategies. We also question whether IRS’ decision to have taxpayers send both their tax returns and their tax payments to lockboxes and to have banks sort those returns adequately considered both the costs to the government and taxpayer burden. Although it is important to minimize taxpayer burden, the evidence we were given was not convincing concerning the amount of burden associated with using two envelopes, especially in light of the extra cost to the government associated with using one envelope (about $4.7 million during the first 8 months of the 1996 filing season). It is understandable that persons contacted by IRS’ contractor, when asked to choose between one or two envelopes, would pick one, because it is easier to put everything into one envelope than to segregate things into two envelopes and pay additional postage. But, it is not clear that those persons considered the use of two envelopes an unreasonable burden. 
Nor is it clear how those persons might have responded if they were told that the use of one envelope causes the government to spend several million dollars more than it would if taxpayers used two envelopes. The cost associated with using lockboxes to process Form 1040 tax payments might become less of an issue if the government is able to negotiate bank fees for sorting that are more comparable to the service center costs for that activity. Absent lower fees, an alternative is to continue to have returns sent to the bank but to have the banks ship the returns to the service centers unsorted. That would require IRS to add a checkbox to the return (which would also be required if IRS decided to use two envelopes) but checking a box would likely be perceived by taxpayers as less of a burden than using two envelopes. However, while a reduction in bank fees or a decision to accept returns from the banks unsorted would make the one-envelope method more advantageous, they would not relieve the anxiety expressed by some taxpayers about their returns being lost or misused by bank personnel. If most eligible TeleFile users do not use the system during the 1997 filing season, as IRS is anticipating, we recommend that the Commissioner of Internal Revenue conduct a survey to determine why, including more specific information on why the nonusers prefer to file on paper, and take steps to address any identified barriers to increased user participation. If the government is unable to negotiate lockbox fees that are more comparable to service center costs and in the absence of more compelling data on taxpayer burden, we recommend that the Commissioner, for filing seasons after 1997, either discontinue having returns sorted by the banks or reconsider the decision to have taxpayers send their tax returns to the banks along with their tax payments. We are not making any recommendations in this report to address problems with telephone accessibility and electronic filing because we have recently issued separate reports on these topics. We will also be issuing a separate report on SCRIPS. We requested comments on a draft of this report from the Commissioner of Internal Revenue or her designated representative. Responsible IRS officials, including the Assistant Commissioner for Forms and Submission Processing, the National Director for Submission Processing, and the National Director for Customer Service (Planning and Systems), provided IRS’ comments in a November 6, 1996, meeting. Those comments were reiterated in a November 18, 1996, memorandum from the Acting Chief of Taxpayer Service. IRS officials also provided some factual clarifications that we incorporated in the report where appropriate. IRS agreed with our recommendation that it determine why more eligible taxpayers do not use TeleFile, including more specific information as to why nonusers prefer to file on paper. IRS officials told us that by the end of fiscal year 1997, IRS would conduct a focus group study of TeleFile nonusers to determine why they prefer to file on paper and to identify any barriers. IRS officials said that steps have also been taken to address some concerns identified by past nonuser surveys. IRS believes that taxpayers’ preference for paper returns is linked to their familiarity with the form. The TeleFile worksheet that taxpayers had been instructed to fill out and maintain as a record of their filing did not have the same “official” appearance as a tax form. 
For the 1997 filing season, according to IRS officials, TeleFile users will be instructed to complete a TeleFile Tax Record instead of a worksheet. As described by the officials, the TeleFile Tax Record will (1) include lines for the taxpayer’s name and address, (2) look more like the Form 1040EZ, and (3) be an official document. IRS hopes this change will provide potential TeleFile users with a higher comfort level. IRS officials also said that advertisements and other publicity tools that were used in 1996 will be emphasized again in 1997 to educate the public on the simplicity of using TeleFile. In commenting on our second recommendation, IRS officials said that IRS, in conjunction with FMS, has formed a task force to identify a long-term solution for 1998 and beyond for directing Form 1040 tax payments to lockboxes. According to the officials, the group has been tasked with (1) identifying options that complement Treasury’s goals of increasing the availability of funds and reducing the cost of collecting federal funds, (2) reviewing what is required of lockboxes by IRS to minimize operational and ancillary costs, and (3) making recommendations to management. The group is scheduled to present its findings to management by March 1997. This time frame should provide IRS with information to make a decision on Form 1040 tax payment processing that could be implemented for the 1998 filing season. We are sending copies of this report to the Subcommittee’s Ranking Minority Member, the Chairmen and Ranking Minority Members of the House Committee on Ways and Means and the Senate Committee on Finance, various other congressional committees, the Secretary of the Treasury, the Commissioner of Internal Revenue, the Director of the Office of Management and Budget, and other interested parties. Major contributors to this report are listed in appendix II. Please contact me on (202) 512-9110 if you have any questions. [Table fragment (performance goals and accomplishments): answer 19.2 million calls; 22.9 million calls answered (116% of schedule); answered (119.5% of schedule); provided 50% level of access; 90% answered accurately; 91% answered accurately; 4.2 million calls answered (95.7% of schedule); 3.9 million calls answered (98.3% of schedule). Table notes follow.] Code and Edit staff prepare returns for computer entry by, among other things, ensuring that all data are present and legible. The “returns processing productivity” indicator is based on the number of weighted returns processed, which includes all returns whether they were processed manually, through scanning equipment, or electronically. The different types of returns are weighted to account for their differing processing impacts. For example, a paper Form 1040 has a higher weighting factor than a paper Form 1040EZ, which in turn has a higher weighting factor than electronically processed returns. Cycle time is the average number of days it takes service centers to process returns. The “refund timeliness” indicator is based on a sample of paper returns and is calculated starting from the signature date on the return to the date the taxpayer should receive the refund, allowing 2 days after issuance for the refund to reach the taxpayer. As discussed in our report on the 1995 filing season (GAO/GGD-96-48), the 36-day accomplishment cited for 1995 was slightly understated by the exclusion of certain refunds that, according to IRS’ standards, should have been included. That issue was not a problem in 1996. 
The “calls scheduled to be answered” indicator is the number of telephone calls IRS believes its call sites will be able to answer with available resources. The indicator does not reflect the number of calls IRS expects to receive. The “level of access” indicator is the number of calls answered divided by the number of individual callers. See pages 4 to 6 for more information on this indicator. Major contributors to this report: Katherine P. Chenault, Senior Evaluator; Jyoti Gupta, Evaluator. | Pursuant to a congressional request, GAO reviewed the Internal Revenue Service's (IRS) overall performance during the 1996 tax filing season, focusing on: (1) changes in 1996 that relate to taxpayer services and the processing of taxpayer refunds; and (2) some of IRS' efforts to modernize its processing activities. GAO found that: (1) IRS met or exceeded its timeliness and accuracy goals for processing individual income tax returns and issuing taxpayer refunds, answered more telephone calls from taxpayers seeking assistance than it had planned to answer, and received more returns through alternative filing methods than it had projected; (2) for the 1996 tax filing season, IRS revised its procedures to limit the number of delayed refunds to the volume of cases it could review, and focus on the cases most in need of review, and as a result, IRS delayed many fewer refunds in 1996 than it did in 1995 and avoided the kind of negative press it received in 1995 as taxpayers and tax return preparers reacted to the delays; (3) recognizing that much could be done to improve its systems and procedures, IRS has initiated several modernization efforts, and those efforts achieved mixed results in 1996; (4) IRS is developing a strategy to increase the use of electronic filing and reassessing its strategy for processing paper returns; and (5) IRS' decision to have taxpayers send not only their payments but also their tax returns to a lockbox and to have the banks sort those returns before sending them to IRS has increased program costs, unnecessarily in GAO's opinion, by $4.7 million. 
[Image: Jake Gyllenhaal in "End of Watch"]
Pick of the week: An all-time cop-movie classic
Jake Gyllenhaal and Michael Peña take on East L.A. in the dark, thrilling “End of Watch”
Cop movies, and specifically Los Angeles cop movies – set against that vast, horizontal landscape of blue skies, palm trees and crap two-story architecture – are such a vital part of American cinema that I didn’t realize how much I missed them until the void was suddenly filled. “End of Watch,” the electrical, pulse-pounding new action drama from cop-obsessed writer and director David Ayer, will no doubt be presented in certain quarters as pro-LAPD propaganda – or, if you prefer this phrasing, as a paean to heroism and patriotism in the face of lawlessness and disorder. I’m not much interested in those interpretations, which try to boil down this mythic, thrilling and brilliantly made motion picture to a political point system. It’s at least the best cop movie since James Gray’s “We Own the Night,” and very likely since Antoine Fuqua’s memorable “Training Day” (which, not coincidentally, was written by Ayer).
I’m not claiming that political and social questions are irrelevant when it comes to a movie set inside America’s most notorious big-city police department. They can’t be. But those things are not the purview of Brian Taylor (Jake Gyllenhaal) and Mike Zavala (Michael Peña), the knights in blue whose odyssey into deepening violent nightmare forms the narrative of “End of Watch.” Ayer tips his hand on this, about as much as he could, in an opening monologue delivered by Gyllenhaal on top of a hair-raising chase scene, viewed from the surveillance cam of Taylor and Zavala’s black-and-white. “I am the police, and I will arrest you,” Taylor intones. He is an inexorable force, he goes on, whose task is to enforce laws he did not make and may not agree with, using whatever degree of force is necessary and with zero regard for the reasons – good or bad, understandable or not – why someone may have violated them.
A muscular charmer with a shaved head, faint intellectual pretensions and a reputation as a ladykiller, Taylor also knows when he’s out of his depth. “Look, I’m just a ghetto street cop,” he tells a mysterious federal agent carrying an assault rifle at one point. This is just after Taylor and Zavala have blundered into an East L.A. house and busted an ultra-sinister Sinaloa cartel hitman who is babysitting a cage full of filthy and dehydrated illegal immigrants. What the federale tells them, not in so many words, is that they were screwed before and they’re double-screwed now, and that some of the scariest criminals in the world want to kill them. Being ghetto street cops, Taylor and Zavala pretty much shrug this off: Scary people want to kill us every day, dude.
One way of explaining the social context of “End of Watch” is to say that it’s a war movie in all but name, and as in, say, “Full Metal Jacket” or “The Hurt Locker,” the question of whether the war is worth waging in the first place hovers over the whole enterprise, but is never directly asked, let alone answered. If Ayer relies a little too much on the 21st-century premise that everyone in “End of Watch” is shooting their own videos – Taylor inside the police cruiser, the posse of heavily armed Latino gangbangers in their stolen minivan, the participants at a backyard African-American barbecue as it gets shot up – there’s no disputing the formal ingenuity of this movie or its breakneck forward momentum. (The cinematography is by Roman Vasyanov, who shot the delightful Russian musical “Hipsters.”) Beneath their studied casualness, macho bluster and chops-busting repartee, Taylor and Zavala are conscious of the mounting and imminent menace in every moment. While Zavala exchanges chitchat out the car window with a guy on the street, only we can see that he has his sidearm drawn and pressed against the door, just below the other guy’s line of sight.
Gyllenhaal has already shown tremendous range as an adult actor, way beyond what could have been predicted from his “Donnie Darko” youth. While I think he was terrific, for example, as the feckless male lead in the underperforming rom-com “Love and Other Drugs,” his performance here as Taylor might be his best since “Brokeback Mountain,” and pushes into a new direction. Taylor is a practical joker and a bit of a cowboy, whose Achilles heel as a cop is his headstrong, ambitious nature – exactly what makes him so appealing. He has wonderful chemistry with Peña, who has spent his whole career (or so it seems) playing upright Latino cops, but never one as complicated and well-developed as Zavala. His self-assigned role is to be the more level-headed of the duo, the straight man who feeds Taylor his gag lines, the married man who has to convince his wayward bro that his latest girlfriend, Janet (Anna Kendrick), is the for-sure, must-marry real deal and not just another you-know-what. |||||
2.5 out of 4 stars
Title: End of Watch
Written by: David Ayer
Directed by: David Ayer
Starring: Jake Gyllenhaal, Michael Pena
Genre: Action
Classification: 18A
Country: USA
Language: English
Year: 2012
Early in his career, David Ayer made his rep with Training Day. Now, clearly, he's made his peace with Hollywood. Ayer is back with the uniforms on the mean streets of South Central L.A., but his trademark grit and authenticity have been traded in for a far more palatable commodity in Tinseltown: the appearance of grit and authenticity, a patina that lends itself well to soft story arcs and tidy love interests and a melodramatic climax. So, where once the police badges were tarnished, now they glow like a saint's halo. Yep, what we have here is a panegyric to the boys in blue, where squad-car buddies compete for their moral merit badges. Really, this is an open and shut case of good cop/better cop.
But back to authenticity's appearance. Sporting a shaved head and a winning smile respectively, Taylor and Zavala (Jake Gyllenhaal and Michael Peña) share not only that squad car but also a penchant for mini-cameras worn on their vests. Seems Taylor is an aspiring filmmaker shooting a day-in-the-life video for classroom credit, a happy coincidence that allows Ayer to go all YouTubey on us – you know, lots of point-of-view shots and spinning angles and general graininess. More happily still, the bad guys out there share an identical taste in cinematography; apparently, when not busy packing heat or gang-banging or being crackheads, they too are wannabe auteurs, or so the direction would have us think. Anyway, add up the visuals and eureka – faux grit.
Real banter, though. During the downtime behind the wheel, buddy cops do love to chat and, here, Ayer hasn't lost his rough touch: The dialogue is quick, witty and entirely credible. So are the lead performances. Rambling on about their workplace worries and their domestic arrangements – shaved head has an exciting new girlfriend, winning smile has a beloved wife – Gyllenhaal and Peña make for believable buds and give the picture its breezy pace. We definitely buy into the humanizing banter.
It's the lionizing action that's no sale. Suspicions are raised early, in a sequence where Zavala, bad-mouthed by a gangsta, sheds his gun-belt and challenges the dude to a fair fistfight. Which he inevitably wins. Which prompts the gangsta to give him props for being an honourable fellow. Sorry, but I'm calling b.s. on the authenticity scale ("Ya' feel me?").
More troubling is the script's penchant for overdemonizing the ghetto's black residents. Now I wouldn't expect junkies to be model parents but, to carve out some quiet time for a fix, do they really duct-tape their babies' mouths and lock the tykes in a closet? On the subject of imperilled infants, don't forget the ones left behind in a burning building. Our heroes don't – they brave the flames to make the rescue, then, aw shucks, deny their heroism. A stash of illicit firearms, a garage-full of illegal immigrants, and a nasty drug cartel later, good cop and better cop are in deep doo-doo in a dark alley. Guns blaze, blood is let, yet rest assured that, even in the cold aftermath as the credits roll, their sacred bond is resurrected. Not to mention the banter.
No doubt, these twin saviours are a likeable tandem, and they bear their cross lightly. Still, End of Watch suffers from no end of sanctimony. Sainthood is all well and fine but it ain't drama and, on screen at least, the question cries out: Where's a corrupt cop when you need him? ||||| From the first tires-squealing, sirens-blaring, guns-blazing car chase to the last quiet conversation, "End of Watch" is a visceral story of beat cops that is rare in its sensitivity, rash in its violence and raw in its humor.
For David Ayer, who has long made the minefield of police work his metier, this blood-drenched and unexpectedly moving film is his best cut yet on what life is like on that thin blue line. Jake Gyllenhaal and Michael Peña star as partners fighting crime on the streets of South-Central Los Angeles. Their beat is poverty riddled and gang infested. Drug running, turf wars and lethal grudges that only end badly frame their days. It's all captured with a gritty hand-held intensity that keeps you on edge and unsettled, waiting for the next shoe to drop.
The story is, in a way, an ordinary one — regular cops who aren't corrupt, guys who chalk up any heroics to just doing their job. In a reflection of reality, much of the film is spent with officers Brian Taylor (Gyllenhaal) and Mike Zavala (Peña) in their squad car. The age of cellphone videos has imprinted the film's style. At times they feel barely an arm's length away.
No single bad guy emerges, no one case has to be solved. Instead the film's rhythm is set by the day-in and day-out routine of police work. Ayer has done a solid job of keeping Brian and Mike rocking between boredom and adrenaline-pumping action, from Sarge (Frank Grillo), whose admonishments begin their shift, to the streets where they never know what will happen and finally to the paperwork at end of watch. The crime is of the most depressing kind — crack mothers, dead bodies, drugs and even some human trafficking — the scenarios seemingly as random in their placement as the gruesome violence that colors them.
The worst of the worst is a Latino gang with ties to Mexico's Sinaloa cartel that keeps surfacing. They are a scary crew with names like Big Evil (Maurice Compte), Wicked (Diamonique), La La (Yahira Garcia) and Demon (Richard Cabral) and a rage that is full-blown crazy. Garcia and Compte are standouts at channeling anger in stomach-churning ways.
Back in the car, the officers pass the time giving each other grief. A pattern soon emerges, because Mike is running this road show: razzing Brian about his new girlfriend Janet (Anna Kendrick), offering up very graphic relationship advice, regaling him with stories of family life with Gabby (Natalie Martinez), who is pregnant with their first child. Kendrick and Martinez do their job to expose the softer side of their guys, but they are really just a few threads woven into the fabric of the partners' lives. There are other good turns around the edges, especially America Ferrera toughening up for her by-the-book cop and David Harbour as bitter, older officer Van Hauser, who the guys relentlessly prank.
But the only relationship that really matters is the one between Brian and Mike. There is a lot of love in that car, and Peña and Gyllenhaal make you feel it. The easy back and forth between them — topics ranging from raunchy nonsense to philosophical musings — has an organic feel that is hard to come by and usually worth the wait. These moments, seeded through the film, nearly always bring tension-releasing laughter, which we need as much as they do.
In Brian, it feels as if Gyllenhaal has finally found his way back home after struggling through a series of roles that didn't quite fit in the years after his "Brokeback Mountain" breakout. He's got a way of playing things so close to the vest that it requires a character with a rich interior life that he can expose in a look or a laugh. As good as Gyllenhaal is in this, Peña nearly steals the show. From the moment Mike Zavala steps into view, he is an LAPD beat cop in every move he makes — whether duking it out with a drunk or dancing with his wife.
Ayer has been best known until now for his searing script for 2001's "Training Day," also a story of cops in L.A. A proven writer, Ayer has shown less skill in his previous forays into directing — "Harsh Times" and "Street Kings" — their cop stories more of the same. "End of Watch" is different — distinctive and street worthy.
betsy.sharkey@latimes.com ||||| Bullets And Buddies On The Streets Of South Central
[Image: Scott Garfield/Open Road Films]
End of Watch
Director: David Ayer
Genre: Action
Running Time: 109 minutes
Rated R for strong violence, some disturbing images, pervasive language and some drug use
With: Jake Gyllenhaal, Michael Peña, Anna Kendrick
Street gangs, drugs and the Los Angeles Police Department have been ingredients in so many police thrillers that it's hard to imagine a filmmaker coming up with a fresh take — though that hasn't stopped writer-director David Ayer from trying. He's made four cops-'n'-cartels dramas since his Oscar-winning Training Day a decade ago; the latest, End of Watch, easily qualifies as the most resonant.
It begins with a tire-squealing, pedal-to-the-metal chase through South Central L.A., shot through the windshield of a police cruiser manned by officers Brian Taylor (Jake Gyllenhaal) and Mike Zavala (Michael Peña). Because Taylor's taking a film class in his time off, they travel with one extra piece of equipment in their patrol car: a camcorder — useful when the chase ends in a shootout, but still much to the annoyance of his superiors.
Still, if these cine-savvy partners don't exactly go by the book, they're good guys — good cops, dealing daily with all kinds of horrors, from missing kids to human trafficking, and somehow staying unwarped, professional and decent. This should not be a remarkable story, but in Hollywood, where cops are mostly considered interesting only when they go rogue, it kind of is.
Yes, these guys are cocky and pumped up, those badges on their chests a power trip as they strut into gang territory with guns drawn. But the badges are also a responsibility they take seriously. Taylor and Zavala are "street" in the words of one of the gang-bangers they deal with, meaning worthy of trust.
Their street cred is sufficient that they're even given a heads-up from a guy they've previously arrested when they cross a drug cartel that's moving into the area. Not that they listen, of course, which ratchets up the stakes in a tale that's already drum-tight with tension.
Ayer, who wrote and directed, is hardly breaking new ground here; the elements he employs are time-honored, from the Cops-style minicam footage that lets the audience ride shotgun, to the shootouts pumped up by Red Bull and coffee. Even the buddy banter that tells you these guys would lay down their lives for each other sounds familiar.
But Ayer and his cast are serving all of it up not just with the urgency the director is so good at, but with an emotional undercurrent that makes it feel remarkably authentic. End of Watch is one thriller where the adrenaline rush, considerable as it is, is almost always put in the service of character. Happily, the character on display turns out to be considerable, too. | Critics are raving about End of Watch, starring an excellent Jake Gyllenhaal and Michael Peña as good-guy cop buddies in Los Angeles. It's some of director David Ayer's best work since Training Day, which he wrote: The "blood-drenched and unexpectedly moving film" is Ayer's "best cut yet on what life is like on that thin blue line," writes Betsy Sharkey in the Los Angeles Times. It's "a visceral story of beat cops that is rare in its sensitivity, rash in its violence and raw in its humor," and the characters feel "barely an arm's length away." "End of Watch is one thriller where the adrenaline rush, considerable as it is, is almost always put in the service of character. Happily, the character on display turns out to be considerable, too," writes Bob Mondello at NPR. At Salon, Andrew O'Hehir calls the movie "mythic, thrilling and brilliantly made." It "goes beyond gritty and realistic violence into a zone where Hieronymus Bosch collides with Dante," he writes. But at the Globe and Mail, Rick Groen has some reservations about the "sanctimonious" film. "Gyllenhaal and Peña make for believable buds and give the picture its breezy pace," he writes. "It’s the lionizing action that’s no sale." |
ATHENS, Aug 17 (Reuters) - With two euros in his pocket, Yorgos Vagelakos, an 81-year-old retired factory worker, scouts the farmer’s market in his working-class Athens neighborhood for anything he can afford.
Like most pensioners, he was hit hard by Greece’s economic crisis. Over eight years, the country’s international bailouts took aim at its pension system and more than a dozen rounds of cuts pushed nearly half its elderly below the poverty line.
Now, the country is looking to the end of its third and final rescue package next week, but for Vagelakos, there is little to cheer about.
“For the oranges I’m going to buy I’ll pay you next week,” he tells a vendor at the market. Half his money has already gone to a few bunches of grapes.
“Two euros next week. Will you be here?” he asks, picking up his bag of fruit. The response is affirmative, and he jokes:
“Well then I won’t come so I won’t have to pay you!”
Reuters first interviewed Vagelakos in 2012, when Greece signed up to a second bailout that saved it from bankruptcy and a euro zone exit. Back then, he was going to the market with 20 euros in his pocket. His monthly income, including his pension and benefits, had been cut to about 900 euros from 1,250 euros.
Today it is down to 685 euros and debts are growing, he said.
With unemployment reaching almost 28 percent at its peak, a quarter of children living in poverty and benefits slashed, many families grew dependent on grandparents for handouts during the downturn. Vagelakos can no longer support the families of his two sons and can barely cover his and his wife’s needs.
“I wake up in the morning to a nightmare,” he said. “How will I manage my finances and my responsibilities? This is what I wake up to every morning.”
Sitting at the kitchen table of his modest home, he goes through a notebook listing debts to the pharmacy and others: “36.8 (euros), 47.5 plus 13... If we add to this the rest of the debt that we have to pay, what is left for us to live on?”
Pensioners have staged numerous protests against the austerity measures imposed by the bailouts, but although the Greek economy is finally starting to grow again, albeit modestly, they may face yet more pain. Changes to pension regulations mean more cuts are expected in 2019.
“The memorandum (bailout) will never end,” Vagelakos said. Referring to a plan by Greece’s European partners to closely monitor its finances after the bailout ends on Aug. 20, he said:
“Even if they end in August, we have the permanent surveillance, which is not a memorandum but a continuous memorial service for us.” (Writing by Karolina Tagaris; Editing by Gareth Jones) ||||| [Image (Associated Press): In this Friday, Aug. 17, 2018 photo, pedestrians walk in the Plaka neighborhood of Athens.]
ATHENS, Greece (AP) — There'll be no dancing in the moonlit streets of Athens.
For all the official pronouncements that Greece's eight-year crisis will be over as its third and last bailout program ends Monday, few Greeks see cause for celebration.
Undeniably, the economy is once again growing modestly, state finances are improving, exports are up and unemployment is down from a ghastly 28 percent high.
But one in five Greeks are still unemployed, with few receiving state benefits, and underpaid drudgery is the norm in new jobs. The average income has dropped by more than a third, and taxes have rocketed. Clinical depression is rife, suicides are up, and hundreds of thousands of skilled workers have flitted abroad.
After the end of the bailout Monday, Greece will get no new loans and will not be asked for new reforms. But the government has agreed to a timetable of savings so strict as to plague a future generation and a half: For every year over the next four decades, governments must make more than they spend while ensuring that the economy — which shrank by a quarter since 2009 — also expands at a smart rate.
"Personally, I can see no hope for me in the coming years," says Paraskevi Kolliabi, 62, who lives on a widow's pension and helps out in her son's central Athens silver workshop. "Everything looks black to me."
Pensioners face pre-agreed new income cuts next year, while a further expansion of the tax base is due in 2020. But tax collection remains scrappy in a country where compliance was never strong, and the taxman's increasingly extravagant demands, coupled with often slapdash policing, only strengthened the sense of injustice.
"My pension has been cut about thirty percent since the start of the crisis," Kolliabi said. "I have never in my life gone through such (financial) hardship as during the past two years. There were entire days when not a single customer would enter" the shop in the Monastiraki district.
Greece's once cheerfully spendthrift middle class, whose rapid growth before the state finances imploded drove a consumption-fuelled economy, has been squeezed hard by intense taxation, mortgages from the bygone days of easy credit, and job losses.
"What I see is that the rich are becoming richer and the poor poorer," Kolliabi said. "We used to cater to the middle class, and the middle class is dead, they can't make ends meet."
Following one of the latest rounds of cutbacks, her son, Panagiotis, now sees more than 60 percent of his income gobbled up by taxes, pension and social security contributions. That kills any ambition for growing the business.
"The prospects for after Aug. 20 are not good," he said. "There's no way I will be able to make an investment ... to expand my business."
In the northern city of Thessaloniki, Christos Marmarinos, 55, had to close his clothes manufacturing unit after 25 years in business due to lack of customers. Instead, he plunged what funds he had into something altogether different, a cafeteria and grocery store.
"We found this way out, and employ ten people," he said. But Greece needs more than cafeterias if the economy is to pick up again and modernize, he says. "We need real investments in manufacturing."
Part of the suffering of Greece's private sector is due to disastrous government attempts in the panicky first months of the crisis to shield from cutbacks the bloated public sector, which has traditionally been the political fiefdom and key source of votes for any ruling party.
But while considerably smaller and poorer than before the crisis, the public sector remains largely ineffective and disgruntled, providing ever shoddier services.
The one area of the economy that's undoubtedly flourishing is tourism, contributing some 20 percent of GDP, with officials projecting a record-high 32 million arrivals this year. Greeks, however, are finding it increasingly expensive to go on holiday in their own country, while a boom in short-term rentals in residential districts of Athens has driven rents beyond the reach of many locals.
Even the governing coalition, which swept to power in 2015 promising to instantly end austerity and cancel Greece's debt — only to reverse course and sign a new tough bailout program — is low-key about the end of the bailout era.
"We're not planning any parties," said Costas Zahariadis, an official in the dominant leftwing Syriza party. "We don't believe we should start celebrating as if a large section of Greek society didn't have serious financial problems. But of course we won't be shedding tears over Greece leaving the bailout era."
Financial analyst Manos Chatzidakis, who is head of research at Beta Securities, says much has been done over the past eight years, although the tax and judiciary systems need further work. He said that if future governments stick to agreed reforms and fiscal policy then gradually returning confidence will allow Greece to sell its bonds at affordable rates — even if investors initially demand high returns — and attract investment.
The ability to tap bond markets is vital, because after the bailout program, Greece will have to finance itself, albeit initially assisted by a substantial cash buffer.
"I think it's all a question of commitment to the bailout program, to the privatizations, to everything that has been agreed" with Greece's creditors, he said. "I'm definitely more optimistic than in the past. Things had reached a point (in 2015) where they couldn't get worse."
Chatzidakis stressed that many of the bailout reforms were "unprecedented" for Greece, which took a long time to understand and implement them.
"So we should not be strict and expect everything to happen fast," he said. "It took time to reach this point and a lot of effort, which I think is starting to bear fruit."
___
Srdjan Nedeljkovic in Athens and Costas Kantouris in Thessaloniki contributed to this report. ||||| ‘The worst is over’ after eight very difficult years for the country, commissioner says
Greece has turned the page to become “a normal” member of the single currency, European Union authorities in Brussels declared as the country finally exited its eight-year bailout programme.
Its three bailouts during the eurozone crisis totalled €288.7bn (£258bn) – the world’s biggest-ever financial rescue. During that time, as the crisis threatened to lead to the nation’s ejection from the single currency – “Grexit” – Greece has had four governments and endured one of the worst recessions in economic history.
Marking the official end of the third bailout programme on Monday, Pierre Moscovici, the European commissioner for economic and financial affairs, said Greece was beginning a new chapter after eight “very difficult” years.
“We have had eight very difficult years, often very painful years, where we have had three successive programmes. But now Greece can finally turn the page in a crisis that has lasted too long,” he told journalists. “The worst is over.”
Evoking pensioners, workers and families who had suffered their own personal crisis, he added that he was “conscious that all those people may not feel that their situation has yet improved much if at all. My message to them is therefore simple: Europe will continue to work with you and for you.”
Moscovici said the end of the bailout “draws a symbolic line under an existential crisis” for all 19 countries that share the single currency.
Donald Tusk, the president of the European council, tweeted: “You did it! Congratulations Greece and its people on ending the programme of financial assistance. With huge efforts and European solidarity you seized the day.”
Greece was granted €86bn in 2015 in its third bailout but only needed €61.9bn. Total rescue funds stand at €288.7bn, the largest amount ever disbursed by international creditors.
“It took much longer than expected but I believe we are there,” said Mário Centeno, the chair of the European Stability Mechanism – the EU bailout fund created as a result of the financial crisis. “Greece’s economy is growing again, there is a budget and trade surplus, and unemployment is falling steadily.”
EU officials say Greece is now “a normal country” because it no longer has a bailout programme with conditions imposed by international creditors. However Athens will face more exacting checks than any other eurozone member, so Brussels can monitor whether the government’s budgets are in line with EU stability and growth targets. Moscovici insisted this “enhanced post-programme surveillance” would be “much lighter” than anything imposed by the troika, the name for the three creditor institutions, which became a byword for Greece’s loss of sovereignty during the economic crisis.
However, away from the official optimism in Brussels and Luxembourg, huge questions remain about how a country scarred by austerity can recover.
Almost a fifth of Greece’s working-age population is out of work. By 2023 unemployment is forecast to fall to 14%, far higher than the current eurozone average of 8.3%. Meanwhile, youth unemployment remains at 43.6% – the worst in the EU.
Many analysts believe it will take a decade before Greece returns to pre-crisis living standards following a slump in which its economy contracted by 25% and unemployment peaked at 28%. After wages and pensions were slashed in the first bailout, economic output dropped, small businesses folded, suicide rates increased and levels of extreme poverty jumped. The population has fallen by 3% because of emigration and a lower birth rate.
Greece has the highest government debt in the EU, 177% of gross domestic product, and is forecast to be repaying loans until 2060.
The country’s former creditors remain divided about the way forward. As a condition of getting debt relief, Athens agreed to the EU’s demand to run a budget surplus of 3.5% of GDP until 2022 and thereafter 2%. However, the International Monetary Fund, a co-funder of the bailouts, has long argued this goal is too onerous for a country that has endured years of belt-tightening.
The IMF lost the argument but revealed its scepticism in its latest country report on Greece. The fund said achieving a surplus of 3.5% of GDP would require high taxation and constrained social spending. It called on Athens to stay in line with creditor-mandated spending plans and reduce high tax rates while broadening the income tax base.
“Any delay in these reforms would seriously undermine the credibility of the assumptions underlying the debt relief measures agreed with European partners,” the IMF said.
Allied to doubts about Greece’s budget targets, the fund fears the EU’s assumptions about Greek economic growth are too optimistic. “Very ambitious assumptions” about GDP growth and Greece’s ability to run large primary fiscal surpluses raise questions about the sustainability of debt over the long-run, suggesting “further debt relief” could be needed, it has said.
Angelos Chryssogelos, who teaches Greek politics at King's College London, said weak growth is one of the biggest dangers. "Greece is in a position of purgatory," he said. "It is going to be in a high taxation, low growth cycle for the foreseeable future."
Greece needed far-reaching reforms, he said, such as improving the functioning of the state, more digitalisation and making life easier for business startups. "You need really far-reaching structural reforms for growth to be kick-started, but that is something most Greek governments are unable to deliver." ||||| [Video: Moscovici: 'An exceptionally tough period for the Greek people']
Greece has successfully completed a three-year eurozone emergency loan programme worth €61.9bn (£55bn; $70.8bn) to tackle its debt crisis.
It was part of the biggest bailout in global financial history, totalling some €289bn, which will take the country decades to repay.
Deeply unpopular cuts to public spending, a condition of the bailout, are set to continue.
But for the first time in eight years, Greece can borrow at market rates.
The economy has grown slowly in recent years and is still 25% smaller than when the crisis began.
"From today, Greece will be treated like any other Europe area country," the EU's Commissioner on Economic and Financial Affairs, Pierre Moscovici, said on Monday.
Its reforms had, he said, "laid the foundation for a sustainable recovery" but he also cautioned that its recovery was "not an event, it is a process".
According to the International Monetary Fund (IMF), only four countries have shrunk economically more than Greece in the past decade: Yemen, Libya, Venezuela and Equatorial Guinea.
How was Greece bailed out?
The last €61.9bn was provided by the European Stability Mechanism (ESM) in support of the Greek government's efforts to reform the economy and recapitalise banks.
The bailout - the term given to emergency loans aimed at saving the sinking Greek economy - began in 2010, when eurozone states and the IMF came together to provide a first tranche of €20bn.
The European single currency had fallen to its lowest level against the dollar since 2006 and there were fears the debt crisis in Greece would undermine Europe's recovery from the 2008 global financial crisis.
[Image: Former Prime Minister George Papandreou saw his government crumble after agreeing to the bailout]
At the worst moments of the crisis, there were doubts about whether the eurozone would survive at all. There seemed to be a real possibility that Greece and perhaps others might have to give up the euro.
The response included bailout loans, for a total of five countries, and a promise from the European Central Bank that it would, if necessary, buy the government debts of countries in danger of being forced out of the eurozone.
Set up by eurozone states, the ESM had been prepared to provide a further €27bn to Greece but said the country had not needed to call on it.
"Greece can stand on its own feet," said ESM chairman Mario Centeno.
'I can't buy my little grandchildren a present'
By Mark Lowen, BBC News, Athens
Tassos Smetopoulos and his team of volunteers run a food handout in central Athens.
"The numbers are actually rising," he says, chopping up vegetables for a huge pot to serve to those who wait. "The bailout might be ending on paper - but not in reality."
Fifty-four-year-old Fotini, who was laid off three years ago, is one of the few who will speak openly. This proud nation has struggled to accept its loss of dignity.
"I don't see the crisis coming to an end," she says. "We are stressed and angry because we don't have jobs. I'm embarrassed that I can't buy my little grandchildren a present. We just want to live comfortably in our own homes so we can look our children in the eyes."
How are Greeks coping?
At the height of the crisis, unemployment soared to 28% but today it is 19.5%.
[Image: Chemistry graduate Panagiota Kalliakmani has gone from the lab to the kitchen]
Those employed often have jobs for which they are overqualified, such as chemistry graduate Panagiota Kalliakmani, 34.
Seeing career prospects in her home city of Thessaloniki shattered, she is now finding work as a chef.
"The crisis was a slap in the face," she told AFP news agency. "We had grown up accustomed to the benefits of living in a European country and suddenly everything came crashing down."
"Nothing is certain," she added. "The crisis taught us not to make long-term plans."
Some 300,000 Greeks have emigrated in search of work since the crisis began while those depending on state benefits have seen their income whittled away.
[Image: Yorgos Vagelakos and his wife live in a suburb of Athens]
Yorgos Vagelakos, an 81-year-old retired factory worker, took home a pension and benefits amounting to €1,250 before the debt crisis.
Today he gets €685 and his debts are growing, he told Reuters news agency. He can no longer support the families of his two sons and can barely cover his and his wife's needs.
"I wake up in the morning to a nightmare," he said. "How will I manage my finances and my responsibilities? This is what I wake up to every morning."
How will the loans be repaid?
While Greece's economy has stabilised, its accumulated debt pile stands at about 180% of GDP.
Under a deal hammered out with other eurozone states in June, it must keep strict control over its public spending, running a budget surplus, before interest payments, of at least 2.2% of GDP until the year 2060.
With serious questions over its ability to manage, some analysts predict it will still be paying off its current debt after 2060.
Greece's freedom to control its own economic affairs will be tempered by enhanced surveillance from the European Union's executive, the European Commission.
Professor Costas Meghir, an economist with Yale University based in the Greek capital, Athens, told the BBC: "The Greek government has to be even more disciplined now because it has to rely on foreign markets at reasonable interest rates to be able to borrow."
What does the Greek milestone mean for Europe?
Professor Kevin Featherstone, director of the Hellenic Observatory at the London School of Economics, said Greece had helped to safeguard the future of the eurozone by agreeing to the terms of the bailout programme.
"For a political system to have gone through these years of austerity, this depth of economic hardship, and maintained a functioning society, a functioning democracy, is testament to the robustness of Greece as a modern state," he said. "Greece has saved the euro." | "I wake up in the morning to a nightmare. How will I manage my finances and my responsibilities? This is what I wake up to every morning." That reality is not set to change for Yorgos Vagelakos, an 81-year-old retiree living in Athens—even as Greece's reality changes in a major way. The country's eight-year bailout is officially over, with the country's third and final bailout program ending Monday, but the cuts to public spending it has had to make over the past eight years will live on. Reuters reports that Vagelakos, who had originally received a monthly pension of about €1,250 (about $1,430), now gets €685, or about $780, with further cuts likely next year. More: While the BBC notes Greece will now finally be able to borrow at market rates again, the AP notes "there'll be no dancing in the moonlit streets of Athens" over the end of the bailout, during which Greece received roughly $330 billion in loans. The Guardian reminds us that represents the "world's biggest-ever financial rescue." |
(M) **
Director: Oliver Stone.
Cast: Michael Douglas, Shia LaBeouf, Carey Mulligan, Josh Brolin, Frank Langella.
THE idea of a belated sequel is rarely appealing, but there was some promise in this timely reminder that greed still ain't necessarily good.
With the Global Financial Crisis fresh in people's minds, a follow-up to Stone's ode to '80s excess seemed perfect given that the recent excesses of US banks and financial institutions made Gordon Gekko's old paychecks look like pocket money.
But Wall Street 2 struggles under the weight of its own wealth of material, wanting to be so many different films in one that it ends up not properly being any.
Opening with Michael Douglas' iconic Gekko being released from jail for his crimes in the first movie, it quickly rolls on to 2008 where the former white-collar crim has published a tell-all book and is emerging as a doomsayer predicting dark financial days ahead.
Meanwhile, his estranged daughter Winnie (Mulligan) is dating financial whiz kid Jacob Moore (LaBeouf), who has a pet project in financing a fusion energy research facility but is about to take a serious hit when the crisis claims his company and his boss.
Stone has a lot to work with here and the script tries unsuccessfully to cram it all in together. At any given moment, the film is a family drama, a redemption story, a commentary on the GFC, a call for renewable energy, a revenge tale, a skewering of capitalism, a general social warning, and a friendly retread of the original film.
All these things could have worked together with a bit more focus, but Wall Street 2 wanders and ends up committing that most heinous of movie crimes - it feels painfully long. Judicious editing of some of the subplots or a neater combination of its many themes would have helped.
The film also struggles to work out what it wants from Gekko. Douglas is as good as ever in a role he clearly relishes, but here he is meant to be the sympathetic underdog seeking a second chance as well as the duplicitous lizard that made him so reviled and remembered in the first film. It's an unwieldy combination that doesn't quite work and brings to mind that other fictional tycoon Mr Burns and his "trademark changes of heart" (as well as Homer Simpson's quote that "...some people never change. Or, they quickly change, then quickly change back.").
There are highlights. Charlie Sheen's cameo is a pleasant surprise, Douglas is always great to watch as Gekko, and LaBeouf and Mulligan provide good support, with the film working best when it's focusing on its core trio of Douglas, LaBeouf and Mulligan. Brolin is also a welcome addition as Gekko's old enemy, Bretton James, who represents the new breed of Gekko-like greed.
But for the most part, this return to the financial world's most famous address is a bit like talking to a bad financial adviser - mostly boring, overly complicated, and in the end you feel a little ripped off. |||||
Bubbles and Baubles
Stone made his writer-director rep with amped-up screeds on Important Subjects: assassinations (JFK), wars in Central America (Salvador) and Southeast Asia (Platoon, Born on the Fourth of July, Heaven and Earth) and the media's fascination with serial murderers (Natural Born Killers) and right-wing demagoguery (Talk Radio). In the past decade, with Alexander, World Trade Center and W., he calmed down, and his films slumped into a long lull.
Money Never Sleeps slaps his oeuvre back to life. Rodrigo Prieto's cinematography and Kristi Zea's production design give the movie visual allure in spades. The ribbon of Dow tickers crawls across the actors' faces as if Wall Street were the Matrix. When Stone is not flashing Lou Zabel's ghost on a men's-room wall to haunt Jake's conscience like Christmas Past, he's packing the film with objective correlatives: Zabel speaks of stock bubbles, and we see a kid's soap bubbles rising blithely, precariously over Central Park. (More bubbles can be seen in the movie's recently added epilogue.)
Costing about $70 million but looking as if it had been made on a budget only Lloyd Blankfein could pony up, the picture scampers across Manhattan to drop into the Metropolitan Museum, the World Financial Center atrium and Shun Lee Dynasty and to sport guest spots by Vanity Fair editor Graydon Carter, über-publicist Peggy Siegal, Scaramucci's fellow CNBC shaman Jim Cramer and Stone himself. Wretched excess rarely had such a swank face: a Metropolitan Museum charity dinner where all the swindlers gather, and every concubine is accessorized with gaudy earrings; Jake's engagement gift to Winnie of a Bulgari diamond Liz Taylor would envy; rich men's toys like motorbikes and crash helmets equipped with Bluetooth cell phones. (The ringtone on Jake's phone is Ennio Morricone's coyote-wail theme from The Good, the Bad and the Ugly.)
LaBeouf, who always seemed too seedy and smart to play action heroes in Indiana Jones and Transformers movies, is terrific here as a man who wants to make it big without breaking too many commandments. Most other members of the large cast invest themselves fully in the energy and piranha-like appetites of their roles. Only Mulligan, so charming as the precocious teen in An Education, is distressingly wan and weak as the token saint; she's much better in the current Never Let Me Go, to which she brings gravitas, not just waterworks.
Douglas, looking more Kirkian than ever, struts through most of the movie having almost too much fun; if he was worried that Gekko would be too appealing, it doesn't show in his born salesman's smile. But then he has a big scene, in which Gordon confesses to Winnie his despair over the suicide of her drug-addled brother. As he sobbingly takes responsibility for "how many mistakes I made as a father," Douglas boldly merges his character with his personal life. When the actor's son Cameron was recently sentenced to five years in prison for drug-dealing, Douglas owned up to "being a bad father" and added that without going to jail, Cameron "was going to be dead or somebody was going to kill him."
As social commentary, the script is best when it's bitter. The first two acts are a splendid vaudeville of fast talking and dirty dealing. At the climax, though, the picture becomes Wall Street Weak. It starts flailing toward an Old Hollywood happy ending of revenge and redemption, forcing Gekko to commit a lapse repellent to his nature, a good deed, in his stab at reconstituting his family. A bleaker worldview, truer to the Street's carnivore ethics, would have demanded the abortion of one character's fetus as a final sting and judgment. Instead, the conclusion leaves the main players in place for a Wall Street 3, which Stone has said he's contemplating. So at the end, he assembles most of the cast for a gala birthday party, as if America's Gekkos deserved cake, not prison time, for their deeds. Unless Stone, Loeb and Schiff believe that the stock market really is just a game, with no calamitous consequences, they have seriously breached the satirist's code. Satire is supposed to leave bite marks, not lipstick traces.
Unlike the first movie, made before the 1987 crash, this Wall Street decries financial chicanery from the ethical altitude afforded by hindsight. Set in 2008, it allows Gekko to speak prescient lines written in 2009 for audiences in 2010. (The film does have one serendipitous subplot, the peddling of offshore-oil-drilling leases, suggesting an awareness of the BP scandal. In fact, the movie was finished early this year and had its world premiere at the Cannes Film Festival in May.) No deep thoughts here; this is a product of shiny surfaces and glittering patter, the cinematic equivalent of a derivatives offering. Instead of whacking Wall Street, Stone gives it a poke that ends up a tickle. The movie's cunning strategy, that financiers can bring down their own villains from the inside, should please the mass movie audience. And Anthony Scaramucci too.
|||||
In the book that "Captain Phillips" was based on, the merchant mariner Richard Phillips notes Mark Twain's remark that going to sea is like going to jail with a chance of drowning. For the man whose container ship was hijacked by Somali pirates in 2009, going to sea was like going to prison with a strong chance of dying. His story is extraordinary, and it's told by two perfectly matched collaborators—Tom Hanks, the Hollywood embodiment of American decency and self-effacing courage, and Paul Greengrass, the virtuoso director of action adventures with a documentary feel and a fidelity to facts. The film succeeds on its own terms—an exciting entertainment that makes us feel good about the outcome, and about the reach of American power, rather than its limits. Yet the narrative container is far from full. There isn't enough incident or complexity to sustain the entire length of this elaborately produced star vehicle.
The story starts with its instantly appealing hero leaving home in Vermont—Catherine Keener plays his wife—to go to work off the coast of Somalia. Rich Phillips is appealing not only because he's played by Mr. Hanks, but because he's a high-tech, globalized version of a venerable archetype, the quietly competent Yankee seafarer. It's fascinating to follow him aboard his vessel, the Maersk Alabama, as he takes command of the huge thing, but his comfortingly familiar routines don't last for long. A collision lies ahead—not with a ship, but with a third-world culture represented by new versions of the corsair archetype: four young desperados with deadly weapons, at least one of them tech-savvy and all of them determined to make a killing that will, in the best case, be purely monetary.
The leader of the Somalis, who board the Alabama from a scruffy skiff, comes to be called Skinny—Skeletal wouldn't have had the same ring. He's played with such ferocious charisma that it's almost impossible to believe that Somalia-born Barkhad Abdi, who lives in Minnesota, is making his acting debut in the role. (Mr. Greengrass also excels at directing actors.) Skinny insists that the raid is "just business—we want money; when everybody paid we go home." He also insists that he and his cohorts are fishermen who've been driven to crime by the predatory practices of Western fishing fleets. It may be more complicated than that, even if Billy Ray's script chooses not to explore the complications. But whoever Skinny may be, he's a captain, too, and the film's dramatic core—as distinct from its much hotter action core—concerns the two men's struggle for dominance, and for control of a ship being held for ransom.
Phillips tests, prods and observes his rival shrewdly. He is, by turns, affable and tough, fearless while appropriately fearful. The Somali is fearless too, but with dangerous displays of bravado. Having taken Phillips hostage, he's unfazed by the prospect of taking on America as well. All of this is suspenseful, as far as it goes, but it can't go further because, in an odd way, the film has fallen hostage to its own integrity.
Apart from the inevitable telescoping and heightening for dramatic purposes, plus some minor—and in a couple of cases foolish—spasms of invented heroics, "Captain Phillips" hews fairly closely to the events set forth in the book. The movie is essentially what it claims to be, based on a true story that happens to culminate in a display of U.S. naval power and Navy SEAL expertise. (Barry Ackroyd did the spectacular cinematography. The film was edited by Christopher Rouse.)
That sets it in sharp contrast to a recent film on the same subject, one that is, not coincidentally, coming out next week on DVD and Blu-ray. Tobias Lindholm's "A Hijacking," from Denmark in Danish, English and Somali, is a fiction film that barely registered as a blip on the scope of the international movie market. Yet "A Hijacking" is wonderfully dramatic in its interweaving of disparate elements—enthralling characters; intricate negotiations with startlingly sophisticated pirates; explorations of geopolitics and corporate imperatives; and, through it all, a question of life or death that's answered from the start in "Captain Phillips," a Hollywood action adventure with a beloved star in the title role.
I mention the Danish film not to suggest what "Captain Phillips" should have been, but to suggest why it is what it is. The script was based on a book that doesn't lack for personal heroism, or a thrilling climax. But the book recounts a string of factual events that, by their repetitive nature, lack the density of first-rate fiction. That's why the film plods along so perplexingly while the pirates search the Alabama from top to bottom for its apparently vanished crew; why its portraits of the pirates are as shallow as they are striking, and why so much time is given over to the pirates yelling and screaming at Rich Phillips, or at each other. It also sheds light on why Mr. Hanks's most powerful scene—you'll know it when you see it—is in fact a piece of fiction. That's not a bad thing, mind you, but a good thing. It's based on true feelings.
'A River Changes Course'
Ever so calmly, sometimes languorously, Kalyanee Mam's documentary feature reveals the anguishing sense of loss behind a profusion of ravishingly beautiful images. Ms. Mam was born in Cambodia during the Khmer Rouge regime; she and her family fled that shattered nation's refugee camps and emigrated to the U.S., where she was educated, became a lawyer and a gifted cinematographer. In recent years she has returned to her homeland several times to film "A River Changes Course." Beautiful images can be a distraction in a serious documentary, but that's hardly the case here. They draw us in so we can better understand the hurtling changes that endanger the future of Cambodia and, by extension, much of the developing world.
The story is told through three families connected only by pressures they understand imperfectly, if at all. Khieu Mok and her mother cultivate rice in a small village outside the capital, Phnom Penh. Mired in debt, they're forced to borrow more money to buy a buffalo and additional land that they need to survive. Sari Math and his family live in a floating village on central Cambodia's Tonlé Sap River. He's forced to quit school and go to work when the family's livelihood is threatened by dwindling catches that result from large fish traps they can't afford, and by the proliferation of illegal fishing. Sav Samourn and her family live high in the remote mountains of northeast Cambodia in what could pass for a jungle Eden—butterflies, orchids, cashew orchards, rice paddies, dense forest—except that the forests are being devoured by corporate loggers.
Ms. Mam follows these families and their distinctive ways of life with her eyes wide open. What's so unusual about her approach is that she sees just as clearly as an artist, a journalist and a de facto anthropologist. (Her film is enhanced by David Mendez's haunting music.)
[Image: Khieu Mok in 'A River Changes Course']
The pastoral and littoral scenes are lovely, but the loveliness masks a degrading environment. The children's faces are photographed luminously, but the kids' well-being is shadowed by pollution and the lure of urbanization. Khieu, who goes to work in a Phnom Penh garment factory, thinks of herself as "divided in half." She fantasizes that someone will bring employment to her village by building a factory, but that the village will somehow remain unspoiled, an idyllic idealization of Phnom Penh. Sav has an undivided fear of what has already happened, and what's to come. "Before," she says of her cherished surroundings, "we wouldn't dare walk through here. There were tigers, bears and elephants. Now, all the wild animals are gone. We're no longer afraid of wild animals, and ghosts. Now we're afraid of people. The elders say they're afraid of people cutting down the forests."
Rewind
DVD // Streaming // Download
'Bloody Sunday' (2002)
An air of fatefulness suffuses "Bloody Sunday," Paul Greengrass's stunning re-creation of the day in January 1972, when 13 unarmed civilians were killed by British soldiers during a peace march in Northern Ireland. Thirty years later, the bloodbath is still surrounded by controversy, so the film can't be taken as a definitive factual account. Yet it makes a powerful case for the contention that 3,000 British paratroopers, presumably sent to keep the peace, were looking for a fight when they arrived, and that their contingency plan was nothing more than a sequence of ghastly blunders waiting to happen.
'The Bourne Supremacy' (2004)
For his Hollywood debut, Paul Greengrass directed this impressive sequel in which Matt Damon's haunted assassin, Jason Bourne, is still trying to figure out who he is. He starts out in Goa, a photogenic state on the western coast of India, and he remains on the go from start to finish. The movie keeps you in a state of reasonably contented attentiveness. Still, the nonstop action reminded me of George Carlin's alarmed response to the ticket agent who tells him he's on a nonstop flight. At some point you yearn for solid ground.
'United 93' (2006)
"Captain Phillips" isn't Paul Greengrass's first fact-inspired film about a hijacking. Five years after the fact, he directed this literally spellbinding vision of what happened on the ground as the twin towers of the World Trade Center were struck, and what may have happened in the cockpit and cabin of the hijacked airliner that was diverted, by a passenger revolt, from its flight path to the U.S. Capitol. The filmmaker's mosaic—and lapidary—technique is similar to the one he employed in "Bloody Sunday." It worked superbly then, and just as superbly here.
Write to Joe Morgenstern at joe.morgenstern@wsj.com | "Greed is good," Gordon Gekko famously said in Oliver Stone's 1987 Wall Street, but maybe not in film directors. Critics say Stone crams too many story lines and sermons into the sequel, Money Never Sleeps, though Michael Douglas is still deliciously slimy as the ex-con stock trader. Douglas manages to "retain a sense of nasty fun," writes Joe Morgenstern at the Wall Street Journal, but the rest of the movie is "pumped up to the bursting point with gasbag caricatures, overblown sermons and a semicoherent swirl of events surrounding the economy's recent meltdown." Stone has tried to fit far too many themes and subplots into Money Never Sleeps, complains Matt Neal at the Standard, who found the movie "a bit like talking to a bad financial adviser—mostly boring, overly complicated, and in the end you feel a little ripped off." The movie "has the drive, luxe and sarcastic wit of the snazziest Hollywood movies for most of its two-hours-plus running time ," writes Richard Corliss at Time, although Stone ends up going surprisingly easy on the financial world. "Instead of whacking Wall Street, Stone gives it a poke that ends up as a tickle," he writes. |
Established in 1861, GPO is the principal agent for federal printing. All printing for the Congress, the Executive Office, and the Judiciary—except for the Supreme Court of the United States—and for every executive department, independent office, and establishment of the government is required to be done at or contracted by GPO. An agency located in the legislative branch, GPO produces the Congressional Record, the Federal Register, the Code of Federal Regulations, and other key government documents.
GPO, through its Superintendent of Documents, is also responsible for the acquisition, classification, dissemination, and bibliographic control of tangible and electronic government information products. Accordingly, regardless of the printing source, the law requires that federal agencies make all their publications in all formats available to the Superintendent of Documents for cataloging and distribution. The Superintendent of Documents then distributes this government information in traditional and electronic formats to the public through a system of more than 1,300 depository libraries nationwide—the Federal Depository Library Program (FDLP). The Superintendent of Documents also links the public to about 203,000 online government documents through its Web site, known as GPO Access. In addition, it makes about 9,000 titles available for sale via telephone, mail, fax, e-mail, online orders from publishing agency Web sites, booksellers, the GPO Online Bookstore, and GPO Bookstores.
This longstanding structure for centralized dissemination of government information is facing several challenges. First, government printing has evolved from a primarily in-house GPO operation to a combination of GPO-administered private printing contracts, executive branch agency contracts, and in-house printing. We have reported on this development and questioned the efficiency and effectiveness in this environment of the centralized GPO model for government printing. Second, federal agencies are increasingly disseminating information via agency Web sites, which is decreasing reliance on large-scale printing as the means to produce government documents. Third, the statutory mechanism for GPO printing and document distribution faces constitutional issues.
Congress gave the congressional Joint Committee on Printing (JCP) broad authority to supervise government printing and distribution of government publications. Its regulation of executive branch printing and document distribution, however, has been viewed by the Department of Justice as being unconstitutional under the Supreme Court’s 1983 “separation of power” decision in INS v. Chadha. In this case, 462 U.S. 919 (1983), the Supreme Court invalidated the legislative veto authority of Congress, ruling that Congress can only affect the executive branch through legislation that has been passed by both Houses and signed into law. In the years since Chadha, the Department of Justice and the Office of Management and Budget have several times instructed executive branch agencies that they need not comply with either statutory JCP requirements or the JCP Printing and Binding Regulations.
These challenges have impaired GPO’s ability to acquire and disseminate government documents. In 1996, GPO estimated that about 50 percent of government documents published in that year were not indexed, catalogued, and distributed to the depository libraries.
Documents that should be—but are not—distributed by the Superintendent of Documents to the depository libraries are known as fugitive documents. GPO asserts that these fugitive documents are increasing because of electronic dissemination of information via agency Web sites and decreasing agency compliance with the statutory requirements for printing through GPO.
Many government documents are also disseminated or sold by other components of the national information dissemination infrastructure, including the Department of Defense’s Defense Technical Information Center (a major component of the Department of Defense’s Scientific and Technical Information Program) and the Department of Commerce’s National Technical Information Service. Increasingly, these documents are also available on agencies’ Web sites and through FirstGov—a government Web site providing the public with one-stop access to online federal resources, including government publications.
Increasing use of electronic publishing and dissemination technology is also changing the FDLP itself. The Legislative Branch Appropriations Act of 1996 directed GPO to reassess the program within this context. Noting that the use of these technologies requires careful analysis, planning, and the probable restructuring of the current federal dissemination program, the act directed GPO to examine the functions and services of the FDLP, identify measures that were necessary to ensure a successful transition to a more electronically based FDLP, and prepare a strategic plan for such a transition.
The resulting study and its companion strategic plan recommended a 5-year transition to a more electronic FDLP, identified a core of government publications that should continue to be distributed on paper, such as the Code of Federal Regulations, and established a schedule for the transition. According to this schedule, by the end of fiscal year 1998 FDLP libraries were to receive 50 percent of publications in electronic format. The program reached this objective in fiscal year 2000, when about 57 percent of FDLP titles were made available online via GPO Access.
Although the FDLP has begun to provide online access to selected government publications via GPO Access, it uses electronic information to supplement—and selectively replace—the dissemination of the same information on paper or microfiche. Thus, the number of tangible titles—on paper, microfiche, or CD-ROM—distributed to FDLP libraries since the publication of the study has remained relatively stable: from 29,372 titles distributed in fiscal year 1996 to 28,849 titles in fiscal year 2000. A similarly modest decline was evident in the number of distributed tangible copies, from 13,472,946 copies distributed in fiscal year 1996 to 12,207,064 copies in fiscal year 2000.
A major impetus for accelerating the transition to a more electronic FDLP occurred in fiscal year 2001, when the Congress reduced by $2 million the funding for the programs managed by the Superintendent of Documents. Faced with this funding shortfall, GPO analyzed printing contracts to determine whether the publications distributed on paper or microfiche might also have online versions available. The analysis showed that 40 percent of the distributed tangible titles had online versions available. Acting on this finding, the Superintendent has accelerated the transition of the FDLP to a primarily electronic program and issued a policy for the dissemination of publications to depository libraries.
The policy restated earlier guidance articulated in an August 2000 letter from the Superintendent of Documents to the directors of depository libraries, reaffirmed the commitment to use online dissemination as the primary method of FDLP distribution, and defined the conditions under which the program will continue to distribute paper publications and other tangible products, even if the publications are available online. Specifically, the program will also distribute the paper or tangible version when any one of the following conditions is met:
- There is a legal requirement to distribute the publication in tangible format.
- The paper publication is of significant reference value to most types of FDLP libraries.
- The paper publication serves a special needs population.
- The commonly accepted medium of the user community is tangible format.
- The product is essential to the conduct of government.
With regard to the last category, the Superintendent of Documents has identified a list of “Essential Titles for Public Use in Paper Format.” Based on an initial list developed in 1996, these “essential” titles would be made available to the depository libraries in paper format, regardless of their online availability. These documents, listed in appendix VII, include such titles as the Budget of the United States, the Code of Federal Regulations, the Congressional Record, and the United States Code. We determined that 28 of the 42 titles (about 67 percent) are also available online. According to the Superintendent of Documents, maintaining the availability of these titles for selection in paper format is essential to the purpose of the FDLP.
Advances in information technologies and the Internet continue to shape the FDLP. Over the years, these advances have also triggered numerous initiatives focused on reforming and restructuring the nation’s information dissemination infrastructure in general and the Office of the Superintendent of Documents in particular. These initiatives—including the recent proposal by the National Commission on Libraries and Information Sciences—are discussed in appendix VIII.
Electronic dissemination of government documents offers the opportunity to reduce the costs of dissemination and make government information more usable and accessible. However, to move to an environment in which documents are disseminated solely in electronic format, a number of challenges would need to be overcome. These challenges include ensuring that these documents are (1) authentic, (2) permanently maintained, and (3) equally accessible to all individuals. In addition, cost issues should be addressed, including the effect of shifting printing costs to depository libraries and end users.
One of the advantages of electronic dissemination is that electronic documents cost less to store, maintain, and disseminate. Electronic documents require no warehouse space and incur no shipping charges. If necessary, they may be readily updated with little further production cost. The contrast in costs between electronic and paper dissemination is illustrated by the costs associated with GPO Access in fiscal year 2000. In this period, the Superintendent of Documents distributed almost 12.2 million copies of 28,849 tangible titles to depository libraries and added 32,306 online titles to the 160,726 titles available at the end of fiscal year 1999 through GPO Access.
For the 28,849 tangible titles, the reported fiscal year 2000 printing and reproduction costs were about $13.7 million; for operating and maintaining the 193,032 online titles the reported cost was about $3.3 million.
A second advantage of electronic dissemination is that electronic documents may offer greater functionality than traditional paper documents. They can be searched, can be linked to related information, can be manipulated (allowing users to cut and paste text), and may incorporate not only images, but also audio and video. Further, electronic documents make printing on demand accessible to individuals.
A third advantage of electronic dissemination is that electronic documents make government information far more accessible to citizens, including those with physical impairments. Once posted, they are immediately accessible to thousands of users from multiple locations around the nation. Because the Web is location independent, it reduces geographic differentiation and may eliminate the need for visits to a distant depository library or GPO bookstore. Moreover, unlike their paper counterparts stored in the nation’s libraries and bookstores, electronic documents are generally available 24 hours a day, 7 days a week.
While the Web-based dissemination of electronic government publications provides an attractive alternative to the traditional ink-on-paper approach, a number of challenges would need to be overcome if the government were to disseminate documents solely in electronic format. These challenges include addressing (1) authentication, (2) permanence, and (3) equity of access. In addition, cost issues would need to be addressed.
Authentication provides the assurance that the electronic document is official and complete: i.e., that the document was not surreptitiously or accidentally modified. When citizens access and retrieve government documents from federal Web sites, they should have assurance that the accessed documents are authentic. Although document authentication may be achieved through electronic signatures or seals, government documents currently available on the Web often lack authentication. Once downloaded from government Web sites, documents lacking electronic signatures or seals may be modified without detection. The FDLP is not currently using electronic signatures or other electronic means to authenticate government documents, but GPO is in the process of procuring public key infrastructure (PKI) technology to provide authentication of government publications disseminated online via GPO Access. Program officials told us that they guarantee the authenticity of the electronic documents available on the GPO Access Web site, although no such guarantee is posted on the Web site itself.
Permanence refers to the retention of an online document by a Web site, making it possible for users to repeatedly find or link to the document. In practice, this means that the document must be maintained online in perpetuity. Since GPO provides FDLP libraries with access to electronic documents through GPO Access, it has assumed responsibility for their permanence. To do this, GPO uses the Online Computer Library Center’s Persistent Uniform Resource Locators, which provide an effective means to update Internet addresses by redirecting users from old Internet addresses to the most recent address associated with an electronic publication.
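To make the electronic-signature idea above concrete, the following is a minimal sketch in Python using the third-party cryptography package. It illustrates the general public-key technique only; the document text and keys are hypothetical, and this is not a description of GPO’s planned PKI system.

# Sketch: a publisher signs a document; any reader can verify integrity.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

document = b"Full text of a hypothetical government publication"

private_key = Ed25519PrivateKey.generate()   # held only by the publisher
signature = private_key.sign(document)       # distributed with the document

public_key = private_key.public_key()        # published for all readers
public_key.verify(signature, document)       # authentic copy: no exception
try:
    public_key.verify(signature, document + b" tampered")
except InvalidSignature:
    print("altered copy detected")           # any modification is caught

The persistent-address mechanism can be sketched the same way: a resolver keeps a stable path for each publication and redirects each request to the document’s current location, so saved links survive moves. The paths and target addresses below are made up for illustration, not real GPO Access locations.

# Sketch: a toy persistent-URL resolver built on HTTP redirects.
from http.server import BaseHTTPRequestHandler, HTTPServer

CURRENT_LOCATION = {
    # The persistent path on the left never changes; only the
    # target on the right is updated when a publication moves.
    "/purl/budget-fy2000": "https://example.gov/docs/2000/budget.pdf",
}

class PurlHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        target = CURRENT_LOCATION.get(self.path)
        if target:
            self.send_response(302)              # redirect to current address
            self.send_header("Location", target)
            self.end_headers()
        else:
            self.send_error(404, "unknown persistent identifier")

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), PurlHandler).serve_forever()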
Although the Government Printing Office Electronic Information Access Enhancement Act of 1993 directs the Superintendent of Documents to maintain an electronic directory of government information and provide online access to government documents, there is no explicit legal requirement for the Superintendent of Documents or any other federal entity to permanently maintain online electronic versions of government documents. However, GPO has advised us that it believes that it has a legal basis to maintain permanent access to electronic information. In addition, portions of government electronic information products managed by the Superintendent of Documents in what is known as the FDLP Electronic Collection are maintained by partner institutions, including other federal agencies and depository libraries and consortia. Recognizing the risk of losing documents controlled by other agencies, FDLP is acquiring and archiving electronic documents managed by these agencies.
Equity of access recognizes that some individuals may have difficulty accessing and using electronic information. Many individuals have no access to the Internet, lack computer skills, or are unable to navigate the increasingly complex Web environment. A recent National Telecommunications and Information Administration report on Americans’ access to the Internet notes that while the Web is becoming more and more widespread, about 60 percent of U.S. households have no access to the Internet, whether by choice, poverty, or disability. Moreover, people with disabilities are only half as likely to have access to the Internet compared to those without disabilities. Although individuals in households without Internet access may use access provided by public schools or libraries, the lack of direct access poses a barrier.
Similarly, even frequent Internet users may find it difficult to search for and locate specific government documents on the Web. According to the National Commission on Libraries and Information Sciences, government Web sites are often difficult to search, with users confronted with such a massive volume of disorganized materials that they cannot easily find what they are seeking, even if they are highly computer and information literate.
In addition, while electronic documents are less costly to disseminate, GPO may face near-term cost increases associated with creating electronic documents when none exist. In 2000, the Superintendent of Documents explored the cost of a fully electronic depository library program. The study pointed out that there are no electronic versions available to the depository library program for many of the documents printed by GPO, and that the library program would have to digitize (that is, scan electronically) thousands of documents to make them suitable for online dissemination via GPO Access. The Superintendent of Documents estimated, based on 25,063 titles distributed to depository libraries between April 1999 and March 2000, that approximately 40 percent (10,000 titles) had an online counterpart, and that the remaining 60 percent (15,000 titles) would have to be converted. According to GPO, an additional $7.7 million would have been required in fiscal year 2000 to convert to a totally electronic FDLP, bringing total program costs to $38 million.
The depository libraries may also expect to bear additional costs if the depository library program begins to disseminate government documents solely in electronic format.
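For scale, the conversion estimate above implies a unit cost of about $510 per digitized title ($7.7 million divided by the roughly 15,000 titles lacking an online counterpart). This per-title figure is our arithmetic from the estimates above, not a number reported by GPO.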
These include the costs of (1) printing shelf copies of electronic documents, (2) purchasing selected printed documents no longer provided by the library program, (3) training librarians in the use of the Internet and electronic reference services, and (4) helping library patrons to use the Internet to search and locate government publications.
This shift in costs to the end user or to libraries was also recognized by the Depository Library Council. The Council noted that although this shift is an unintended consequence of technology, libraries will struggle with different issues such as printing, formatting, and providing users with instructions on how to use online technology. This point was echoed by the National Research Council in its argument that libraries must adopt a new model for library acquisitions, one that considers not only the cost of access to information, but also the cost of maintaining the technical environment—including hardware, software, and personnel—required to make these resources available to readers.
Both advantages and disadvantages are associated with transferring the depository library program to the Library. In studies conducted in 1993 and 1994, the Library concluded that the depository library program is not inconsistent with the mission and functions of the Library and that it might be appropriate for the Library to have responsibility for a program established to acquire government documents and distribute them to depository libraries. Further, a transfer could allow the depository library program and the Library to develop governmentwide solutions to the common issues they face in addressing the acquisition, maintenance, and dissemination of electronic information.
In addition, three other GPO programs—closely related to the depository library program or the Library and consistent with the Library’s mission—could also be considered for transfer. However, the Public Printer stated that the Library is not an appropriate home for the depository library program because the Library’s mission is inconsistent with a large-scale information dissemination program. In addition, the Library studies, as well as organizations representing librarians, cited disadvantages associated with such a transfer. These disadvantages included potential negative effects on public access to information and concern about the availability of funds to maintain the current program. In addition, unions representing GPO employees expressed concern about the effect of a transfer on employee rights.
According to studies conducted by the Library, the mission of the Library—to make its resources available and useful to the Congress and the American people and to sustain and preserve a universal collection of knowledge and creativity for future generations—and that of the depository library program—to ensure that the American public has access to government information by disseminating information products to libraries nationwide—are not inconsistent. The Library noted that its National Library Service for the Blind and Physically Handicapped has long experience with distributing library materials through its network of libraries for the blind and physically handicapped. Therefore, a transfer could expand the Library’s current involvement in the dissemination of government information and enable consolidation of dissemination functions.
A transfer of selected programs of the Superintendent of Documents might also facilitate the depository library program and the Library working together to address the broad issues of acquiring, managing, and disseminating digital information—issues critical to both organizations. In December 2000, the Library received a special appropriation of $100 million to lead a national strategic planning effort to develop a National Information Infrastructure and Preservation Program. In making this appropriation, the Congress directed the Librarian to develop a plan jointly with federal entities having expertise in telecommunications technology and electronic commerce, and with the participation of representatives of other federal, research, and private libraries and institutions having expertise in the collection and maintenance of archives of digital materials. The Library has developed a planning framework and is assembling advisory groups and forming partnerships to guide its planning for the national digital information strategy.
In addition to the depository library program, three of the six other programs managed by the Superintendent of Documents could be considered for transfer. These programs are (1) Cataloging and Indexing, (2) GPO Access, including the FDLP Electronic Collection, and (3) the International Exchange Service. Two of the programs, the Cataloging and Indexing program and GPO Access, are closely related to the depository library program. Under the third program, the International Exchange Service, the Superintendent of Documents disseminates documents to foreign libraries on behalf of the Library of Congress.
The Cataloging and Indexing program is responsible for identifying and organizing government publications by title, subject, and agency, and providing this information to users through indexes, catalogs, and online retrieval systems. The Cataloging and Indexing program ensures that government publications entering FDLP dissemination are under bibliographic control.
According to Library studies, the Cataloging and Indexing program is consistent with the functions of the Library. The program would complement the Library’s cataloging operations and centralize the cataloging of government documents in a single entity. The Library considers that information gathered through the consolidated cataloging and indexing operation would help to ensure that all government publications are reported to a central source and thus help to reduce the number of fugitive documents that escape bibliographic control.
GPO Access provides the public with access to electronic documents under the bibliographic control of the Superintendent of Documents. This program is essential to the Superintendent’s commitment to provide online access to government documents.
The International Exchange Service program provides for the distribution of U.S. government publications to foreign libraries in exchange for publications produced by their governments. Foreign publications are then sent to the Library, which administers the program. As such, the program is already the responsibility of the Library and consistent with the Library’s mission. In addition, integrating the international exchange program into the Library’s ongoing domestic and international information exchange programs would allow the Library to consolidate responsibilities for the administration and distribution of all federal documents.
The Library’s Office of the Director of Acquisitions, which currently administers the Library’s portion of the program, could assume complete responsibility for the international exchange program.
Both the Library studies and organizations representing the interests of libraries and librarians—including the American Library Association and its Government Documents Roundtable—identified disadvantages associated with a potential transfer. The following are some of the more significant disadvantages identified:
- The library community does not believe that a transfer would enhance and promote the public’s no-fee access to government information.
- The Library expressed concern about the availability of funds to maintain the products and services available to depository libraries, much less support new products and services.
- According to the library community, a transfer would signal to executive agencies, which contribute thousands of tangible information products to the FDLP, that their publications are no longer needed. This would lead to the loss of thousands of publications that would no longer be accessible to the public. Efforts to retrieve and convert these documents after the fact would be logistically and financially prohibitive.
- The library community stated that disengaging the dissemination functions of the depository library program from the printing functions of GPO may destroy the only gateway available to minimize the number of fugitive documents. Similarly, the Library noted that this separation would have a major impact on both the depository library program and the international exchange service by making it more difficult and costly to ensure that materials Congress intended for these programs were actually distributed.
- The library community noted that a transfer of GPO’s Cataloging and Indexing program to the Library would likely result in a loss of quality, speed, and detail in cataloging.
- The transfer would add a significant level of bureaucracy and would result in increased costs in providing the transferred services, a loss of productivity, and disassembly of a cohesive program, according to the library community.
With regard to the missions of the depository library program and the Library, Library studies indicated that the missions of the depository library program and the Library are “not inconsistent,” but also cautioned that administering the depository library program would considerably expand the Library’s mission. On the other hand, the Public Printer stated that the mission and operations of the Library are inconsistent with a large-scale information dissemination program such as the depository library program. He further stated that transferring the depository library program to the Library would increase costs, impose additional burdens on the Library, and not result in any improvement in the public’s ability to access government information. In addition, the library community took the view that the missions of the Library and the federal depository library program vary so significantly that the appropriateness of a transfer is questionable. The library community also noted that the mission of the Library would have to be expanded to accommodate the needs of government information users.
The unions representing employees—the American Federation of Government Employees and the Graphics Communication International Union—expressed concern that a transfer might negatively affect annuities and seniority for all transferred employees and pay for blue-collar workers.
Further, without specific enabling legislation, they maintain, unionized employees transferred to the Library would lose bargaining rights over wages.
As previously noted, we were also asked to assess other issues affecting a potential transfer of the depository library program. If a decision is made to transfer the program, transition planning would be critical to determining how and when such a transfer should occur. One option would be to form a joint GPO/Library transition team which could include representation from the depository libraries. This team could further study and resolve critical issues cited by the library community and unions. This team could develop a detailed transition plan and schedule, cost estimates, and performance measures and identify data and system migration requirements.
Regarding measures that could help ensure the success of a transfer, we identified the following:
- Ensure that issues raised by the library community are appropriately addressed, including the effects of a transfer on the number of fugitive documents, and the effect on the cataloging and indexing function.
- Address seniority, salary rates, and bargaining rights of union members transferred to the Library.
- Consider limiting the physical movement of staff and equipment during the transition period. Staff and equipment of the Superintendent of Documents units responsible for managing the programs transferred to the Library of Congress could continue to occupy their current office space at the GPO site. The Library could pay GPO for support services, including rent, building maintenance, and communications services.
- During the transition period, consider continuing to rely on the information systems and computer support provided by GPO’s Office of Information Resources Management to the Superintendent of Documents offices and programs transferred to the Library.
The transition could take place after the detailed transition planning mentioned previously is completed. In addition, it may be advisable to begin the transition at the start of the fiscal year to coincide with budgetary actions.
Appendixes IV, V, and VI provide requested information on the functions, services, and programs of the Superintendent of Documents and on the administrative and infrastructure support that is provided to the Superintendent by GPO.
With regard to the potential cost of transferring the depository library program, we identified the program infrastructure costs for GPO and the Library as the most likely to be affected. Infrastructure services include administrative services such as payroll processing; rents, communications, and utilities; and information system support. The Superintendent of Documents currently pays GPO for these infrastructure costs. It appears that the program costs may remain stable, unless the Library or GPO makes changes in their scope or work processes. Of the seven programs managed by the Superintendent of Documents, four could be transferred to the Library and might be expected to continue to operate using existing processes, equipment, and staff. Three of the seven programs—By-Law Distribution, Sales of Publications, and Agency Distribution Services—are expected to remain at GPO.
The costs for administrative services may also remain stable. As the Library assumed the management of the transferred units, it would also begin to provide various administrative services, such as personnel and payroll, previously provided by GPO.
We assume that the Library would continue to pay for these services from the Superintendent’s Salaries and Expenses appropriations, and that the Library’s overhead rate for these services is comparable to GPO’s. In fiscal year 2000, the Superintendent of Documents expenses for administrative services and support provided by GPO were about $4.7 million. Similarly, we expect that the cost for rents, communications, and utilities would remain stable if there were no physical movement of staff or equipment. Staff and equipment of the Superintendent of Documents units responsible for managing the programs transferred to the Library of Congress could continue to occupy their current office space at GPO. The Library could then pay GPO at the current rate for rent, building maintenance, and communications services. In fiscal year 2000, the Superintendent of Documents expenses for the rents, communications, and utilities provided by GPO were about $7.3 million. The cost of information system support might increase during the transition period, largely because the transferred units and programs would continue to pay GPO for continuing information system support while migrating to the Library’s systems. In fiscal year 2000, the Superintendent of Documents expenses for information systems support provided by GPO were about $3.5 million. We provided a draft of this report for review and comment to the Public Printer and the Librarian of Congress. Their comments are reprinted in appendixes IX and X, respectively. In a letter dated March 21, 2001, the Public Printer raised numerous issues concerning the contents of the draft report. First, he stated that the report does not provide a comprehensive study of the impact of providing documents solely in electronic format. He stated that the report only briefly mentions such major issues as authenticity, permanent public access, security, equity of access, and cost considerations, and does not address how these issues will be resolved in an all-electronic information environment. Second, he stated that the Library is not an appropriate home for the federal depository library program, as the Library’s mission is inconsistent with a large-scale dissemination program. Finally, he stated that the draft report lacks balance in the presentation of information from prior GAO audits because we did not include a discussion of a more recent study of GPO management. With regard to electronic dissemination, the conference report required us to study the impact of providing government information solely in electronic format, among other issues, and provide a report by March 30, 2001. Recognizing this limited time frame, our report appropriately raises the major issues and challenges that would need to be addressed to move to an environment in which government documents are disseminated solely in electronic format. Resolving these challenges, however, is well beyond the scope of what we were asked to do and is instead the responsibility of federal agencies that disseminate information, including GPO. With regard to the transfer of the depository library program, we have revised the report to reflect GPO’s position that the mission of the depository library program is inconsistent with that of the Library. With regard to the comment on balance, our 1990 report is mentioned briefly in the report’s background because of its discussion of issues relating to the structure of GPO.
We did not mention the Booz-Allen & Hamilton report cited by GPO because it dealt primarily with the management of GPO—an issue not addressed by this report. In a letter dated March 21, 2001, the Librarian of Congress also raised numerous issues concerning the draft report. First, the Librarian stated that sufficient analysis has not been done to support our recommendation that an Interagency Transition Team be established to plan the transfer of the depository library program, particularly in light of the recent mandate from the Congress that designated the Library the lead agency in developing the National Digital Information Infrastructure and Preservation Program. The Librarian also stated that the report focuses exclusively on shifting functions from one legislative branch agency to another and not on the larger policy issues involved in providing citizens useful and persistent access to government information. With regard to the Librarian’s comment concerning the recommendation in the draft report, we would like to clarify that the report contains neither conclusions nor recommendations. Our discussion of a transition team is offered only as one possible option for facilitating a transition, if the Congress decides to direct such a transfer. With regard to the focus of the report, we again note that the conference report specifically required us to study issues related to the feasibility of transferring the depository library program to the Library. Therefore, our report properly identifies the major issues that would need to be addressed if the Congress directs such a transfer. However, we recognize that the larger policy issues raised by the Librarian are valid. Therefore, we look forward to seeing the results of the Library’s efforts to study the dissemination of electronic government information as well as its strategy for dealing with the life cycle of digital information. The Public Printer also stated in his comments that the draft report contained numerous factual inaccuracies and misinterpretations. We subsequently received specific technical comments, which we have incorporated into the report as appropriate. Appendix IX also includes more specific responses to the Public Printer’s comments. The Librarian also provided technical comments that we have incorporated as appropriate. We are sending copies of this letter to Senator Ted Stevens, Senator Robert C. Byrd, Senator Robert F. Bennett, Senator Richard J. Durbin, Senator Fred Thompson, Senator Joseph I. Lieberman, Senator Pete V. Domenici, Senator Kent Conrad, Representative C. W. (Bill) Young, Representative David R. Obey, Representative Charles H. Taylor, Representative James P. Moran, Representative Dan Burton, Representative Henry A. Waxman, Representative Jim Nussle, and Representative John M. Spratt, in their capacities as Chair, Ranking Member, or Ranking Minority Member of Senate and House Committees and Subcommittees, and to other interested parties. Copies will also be available on our Web site at www.gao.gov. Please contact me at (202) 512-6240 if you or your staff have any questions. I can also be reached by e-mail at koontzl@gao.gov. Key contributors to this report were Timothy E. Case, Barbara S. Collier, Mirko J. Dolak, Jackson W. Hufnagle, William N. Isrin, and George L. Jones. 
This report responds to the requirements in the conference report for the legislative branch appropriations for 2001 that we study the impact of providing documents to the public solely in electronic format and assess the feasibility of transferring the depository library program to the Library of Congress. As part of our assessment of the depository library program, we were asked to (1) identify how such a transfer might be accomplished; (2) identify when such a transfer might optimally occur; (3) examine the functions, services, and programs of the Superintendent of Documents; (4) examine and identify administrative and infrastructure support that is provided to the Superintendent by GPO, with a view to the implications for such a transfer; (5) examine and identify the costs, for both GPO and the Library of Congress, of such a transfer; and (6) identify measures that are necessary to ensure the success of such a transfer. In addition, the conference report required that we provide (1) a current inventory of publications and documents that are provided to the public and (2) the frequency with which each type of publication or document is requested for deposit at nonregional depository libraries. To assess the impact of providing government documents solely in electronic format, we reviewed technical literature and studies, analyzed the Superintendent’s workload and publication dissemination data, and reviewed the workload and performance of the GPO Access Web site. In addition, we interviewed the Superintendent’s personnel, depository librarians, members of the American Library Association and of the Association of Research Libraries, and a representative of the American Association of Law Libraries to obtain their views on the feasibility and impact of distributing government documents solely in electronic format. To develop a current inventory of publications and documents provided by the Superintendent of Documents to the public, we focused on the sales inventory because GPO does not maintain an inventory of publications distributed to the depository libraries. We analyzed GPO’s fiscal year 2000 publication inventory and sales data. We also interviewed the Superintendent’s personnel responsible for preparing cyclical and annual inventories to obtain information on warehousing locations and on the disposal of excess publications. To obtain information on the frequency with which each type of publication or document is requested for deposit at nonregional depository libraries, we obtained and analyzed data from the 1999 Biennial Survey of Federal Depository Libraries. In assessing the feasibility, cost, and timing of a transfer of the depository library program to the Library of Congress, we reviewed GPO and Library of Congress program and budget documents, legislative proposals, and studies that examined the feasibility of the transfer of the Superintendent of Documents functions and staff to the Library of Congress. In addition, we interviewed GPO and Library of Congress managers and staff, members of selected professional library associations, the representatives of the two unions representing unionized employees of the Superintendent of Documents, and the executive director of the U.S. National Commission on Libraries and Information Science to obtain their views on this issue. We performed our work at GPO and Library of Congress headquarters in Washington, D.C., from October 2000 through March 2001 in accordance with generally accepted government auditing standards.
While we did not assess the reliability of GPO-supplied data, we worked with GPO officials to verify fiscal year 2000 budget, personnel, and production data. GPO makes publications available to the public through the FDLP and the sales program. Under the FDLP, GPO disseminates thousands of publications—about 28,000 in fiscal year 2000—to over 1,300 depository libraries each year. The depository libraries maintain these publications as part of their collections and make them available, free of charge, to the public. GPO does not, however, maintain centralized information on publications that have been distributed to the depository libraries, and as a result, we are unable to provide a current inventory of these publications. (See appendix V for more detailed information on the FDLP.) Under its sales program, GPO sells documents to the public by mail and telephone order, through its bookstores located around the country, and through its Online Bookstore on GPO Access. GPO maintains information on its inventory of publications offered for sale. GPO’s program to sell government publications is managed by the Superintendent of Documents’ Documents Sales Service. In fiscal year 2000, the Service made approximately 9,000 titles of government documents (figure 1) available to the public for sale. The Service maintains a large inventory of these documents—about 5.2 million copies in fiscal year 2000, with a retail value of $74 million (figures 2 and 3). The number of titles available for sale and the inventory maintained by the Service have dropped substantially over the last decade. From fiscal years 1991 to 2000, the number of titles dropped by 7,759, a reduction of 46 percent, and the number of copies in the inventory fell by about 4.8 million copies, a drop of 48 percent. The retail value of the inventory—which in fiscal year 2000 totaled $74.1 million—has had periods of both stability and fluctuation since 1991. Although the value of the inventory was relatively stable from 1991 to 1995, it lost almost 30 percent of its value, or $25 million, from 1995 to 1996. This dramatic drop in the value of the inventory, according to GPO officials, is closely associated with the large number of unsold documents destroyed by the Service in fiscal year 1996. Every year, the Service destroys, as required by law, a substantial number of unsold or superseded documents. In fiscal year 2000, it destroyed 1.9 million documents (figure 4) with a retail value of $22.3 million (figure 5). According to GPO, the actual cost of the destroyed publications was only $2.8 million—a fraction of their stated retail value of $22.3 million. The Federal Depository Library Program (FDLP) is a national network of depository libraries designed to ensure free public access to government information. In fiscal year 2000, the program included 1,328 libraries. Participating libraries must maintain government publications as part of their existing collections and are responsible for ensuring that the public has free access to the deposited documents. Fifty-three libraries are designated regional depository libraries, with a goal of each state having at least one regional library. Regional libraries receive all materials distributed through the FDLP and are required to retain all received government documents in perpetuity, although they may withdraw superseded publications from their collections.
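This split between regional and nonregional libraries drives the distribution mechanics discussed throughout this report: regional libraries receive everything, while nonregional libraries (described in the next paragraph) receive only the items they select. The sketch below is a minimal illustration of that rule, not GPO’s actual distribution logic; the library names and item numbers are hypothetical.

```python
# Minimal illustration of depository distribution rules: regional libraries
# receive every item; nonregional libraries receive only items in their
# selection profiles. Names and item numbers below are hypothetical.

from dataclasses import dataclass, field

@dataclass
class DepositoryLibrary:
    name: str
    regional: bool = False
    selection_profile: set = field(default_factory=set)  # FDLP item numbers

    def selects(self, item_number: str) -> bool:
        return self.regional or item_number in self.selection_profile

def distribution_list(libraries, item_number):
    """Names of the libraries that should receive copies of an item."""
    return [lib.name for lib in libraries if lib.selects(item_number)]

libraries = [
    DepositoryLibrary("State Regional Library", regional=True),
    DepositoryLibrary("Campus Library", selection_profile={"0556-C", "0831-B"}),
    DepositoryLibrary("County Law Library", selection_profile={"0717-A"}),
]

print(distribution_list(libraries, "0831-B"))
# ['State Regional Library', 'Campus Library']
```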
The remaining 1,275 nonregional libraries may select what materials to receive and have limited retention responsibilities. We relied on information on library document selection rates reported in the 1999 Biennial Survey of Depository Libraries conducted by the Library Programs Service. As figure 6 shows, 971 (77 percent) of the 1,286 nonregional depository libraries participating in the survey reported that they were selecting less than 40 percent of the items offered by the FDLP, and only 135 (10 percent) reported selection rates over 60 percent. The Office of the Superintendent of Documents is organized into four units (figure 7): (1) the Library Programs Service, (2) Electronic Information Dissemination Services, (3) the Documents Sales Service, and (4) the Documents Technical Support Group. The four units manage seven major programs (see appendix V). In fiscal year 2000, Superintendent of Documents operations were supported by 597 full-time equivalent (FTE) positions, with 450 employees—about 75 percent—working in the Documents Sales Service. The Service is responsible for publication sales and for the Agency and By-Law Distribution Programs. It operates phone, fax, and electronic order services in Washington, D.C., and Pueblo, Colorado, and retails publications through consignment agents and GPO bookstores. The next largest Superintendent of Documents unit—the Library Programs Service—has 106 employees, or about 18 percent of the total of 597 (table 1). The Service manages and operates the Cataloging and Indexing program, the Federal Depository Library Program, and the International Exchange Service. The remaining two units—Electronic Information Dissemination Services and the Documents Technical Support Group—employ 22 and 17 people, respectively, together less than 7 percent of the 597 positions. The Office of Electronic Information Dissemination Services assists federal agencies in electronic dissemination, supports the sales program and the sale of electronic products, and provides strategic planning and customer support for GPO Access. It relies largely on GPO’s Production Department for the development, operation, and maintenance of GPO Access. Finally, the Documents Technical Support Group provides management support and services to the Superintendent of Documents. GPO recognizes 16 labor unions, and two of these represent Superintendent of Documents employees: the American Federation of Government Employees (AFGE) and the Graphic Communications International Union (GCIU). Superintendent of Documents employees represented by AFGE belong to one of two locals: AFGE-PCJC or Local 3392. Under the provisions of the Federal Service Labor-Management Relations statute, GPO deals with unions on all personnel policies, practices, and matters affecting working conditions other than wages. Wages are negotiated under the provisions of section 305 of Title 44, known as the Kiess Act. The three locals represent 402 positions at the Superintendent of Documents, or about three-fourths of all Superintendent of Documents employees (table 1). AFGE was founded in 1932 and is the largest federal employee union, representing over 600,000 federal and District of Columbia government workers nationwide and overseas. GCIU, a product of a series of mergers among predecessor craft unions, was established in 1983.
AFGE-represented staff employed by the Superintendent of Documents include librarians, program analysts, writer-editors, office automation specialists, publication management specialists, marketing research analysts, supply technicians, customer accounts technicians, and inventory management specialists. GCIU-represented staff employed by the Superintendent of Documents include warehouse workers, materials handlers, stock handlers, motor vehicle operators, and printing plant workers. Among the three locals, AFGE-PCJC represents about three-fourths of all represented positions at the Superintendent of Documents (294). AFGE-3392 and GCIU represent 9 percent (35 positions) and 18 percent (73 positions), respectively, of total union representation at the Superintendent of Documents. According to GPO, in February 2001 the Superintendent of Documents employed 569 people in various locations. As shown in table 2, about 44 percent of the 569 employees are eligible for retirement (109 employees) or early retirement (139). The largest group of employees (156) eligible for retirement or early retirement is found in Washington, D.C., followed by 56 employees in Laurel, Maryland, 18 employees in Pueblo, Colorado, and 18 employees in bookstores around the country. Table 3 shows GPO-reported employee retirement eligibility by grade. Table 4 shows expenditures for the major organizational units of the Superintendent of Documents in fiscal year 2000. According to GPO, the Superintendent of Documents spent $92.8 million in fiscal year 2000. The Documents Sales Service (DSS) accounted for 62 percent of the Superintendent of Documents fiscal year 2000 expenditures; DSS and the Library Programs Service (LPS) combined accounted for 90 percent of expenditures. Tables 5 and 6 show the fiscal year 2000 expenditures for the seven major programs conducted by the Office of the Superintendent of Documents. According to GPO, the Salaries and Expenses (S&E) appropriation provides funds for labor and related expenses for five of the Office’s seven programs (table 5): (1) Cataloging and Indexing, (2) the Federal Depository Library Program, (3) GPO Access, (4) By-Law Distribution, and (5) the International Exchange Service. The remaining two programs—Agency Distribution Services and Sales of Publications—are funded from the GPO Revolving Fund (table 6). In addition to these two programs, the Revolving Fund finances the printing and binding operations managed by the Public Printer. As shown in table 5, most of the Superintendent of Documents S&E appropriations were allocated to the Federal Depository Library Program, with expenditures of almost $26 million, or about 87 percent of the fiscal year 2000 S&E appropriations. The Cataloging and Indexing program expended about $2.7 million, or about 9 percent of the S&E appropriations, while the By-Law Distribution Program and the International Exchange Service spent about $529,000 and $687,000, respectively. According to GPO, Sales of Publications and the Agency Distribution Services Program are funded through GPO’s Revolving Fund (table 6). Expenses as they occur are drawn from the Revolving Fund, and revenues from sales and agency transfers replenish it. In fiscal year 2000, sales expenses were about $57.7 million and sales revenues were $45.5 million, resulting in a net deficit to the Revolving Fund of about $12.2 million.
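The Revolving Fund mechanics just described, in which expenses are drawn as they occur and revenues replenish the fund, can be modeled in a few lines. The sketch below is illustrative only (it is not GPO’s accounting system) and uses the fiscal year 2000 sales figures cited above; the Agency Distribution Services surplus reported in the next paragraph would enter the same way, as a small net replenishment.

```python
# Minimal model of the Revolving Fund mechanics described above: expenses
# are drawn from the fund as they occur, and program revenues replenish it.
# Figures are the fiscal year 2000 sales numbers cited in this report; the
# class itself is an illustration, not GPO's accounting system.

class RevolvingFund:
    def __init__(self, balance: float = 0.0):
        self.balance = balance  # $ millions

    def draw(self, expenses: float) -> None:
        self.balance -= expenses

    def replenish(self, revenues: float) -> None:
        self.balance += revenues

fund = RevolvingFund()
fund.draw(57.7)       # fiscal year 2000 sales expenses
fund.replenish(45.5)  # fiscal year 2000 sales revenues
print(f"net effect on fund: {fund.balance:+.1f} million")  # -12.2, the sales deficit
```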
The Agency Distribution Services Program generated a surplus in fiscal year 2000 of about $85,000 on expenses of approximately $5.1 million and revenues of approximately $5.2 million. The Library Programs Service, with 106 employees, is the second largest of the Superintendent of Documents’ major units. As shown in figure 8, the Library Programs Service is made up of two major units, the Library Division and the Depository Distribution Division, which are further subdivided. The Library Programs Service administers the Federal Depository Library Program and the Cataloging and Indexing program, and manages the distribution component of the International Exchange Service for the Library of Congress. Historically, these programs have been carried out through seven basic functions: distribution/dissemination, bibliographic control of government information products in all formats, acquisition, classification, format conversion, the inspection of depository libraries, and the continuing education and training of depository library personnel. The electronic information environment has created an eighth function, that of preservation of electronic government information for permanent public access. This is done through the management of the FDLP Electronic Collection. According to GPO, the Library Programs Service expended about $26 million in fiscal year 2000 (table 7). Printing and reproduction costs were over $13 million, which accounted for over 50 percent of its fiscal year 2000 budget. Compensation and benefits accounted for about $8 million (30 percent) of the unit’s fiscal year 2000 budget. The Library Division is responsible for managing Depository Services, the Cataloging Branch, and the Depository Administration Branch. Depository Services. Depository Services designates new depository libraries, inspects participating libraries, administers self-studies, provides continuing education and training, and produces administrative publications. Its staff is responsible for relations with the federal depository libraries. Individual depository libraries are inspected for compliance with the requirements of 44 U.S.C. Chapter 19 by means of on-site visits. The staff also manages the designation of new depository libraries and terminates libraries upon their request or for cause, and reviews the depository libraries’ self-studies to determine if on-site visits are warranted. It also investigates and acts to resolve complaints about services in participating libraries. The Depository Services staff also develops and administers the Biennial Survey of Depository Libraries and manages the continuing education components of the FDLP. This function consists primarily of arranging the annual Federal Depository Library Conference. The Services staff oversees the Library Programs Service’s administrative publishing effort, including the publication of the Administrative Notes newsletter, the conference proceedings, and the depository library Manual and Guidelines. Cataloging Branch. The Cataloging Branch provides descriptive and subject cataloging (i.e., library-standard descriptions of content and other attributes) for a wide range of government publications in all formats and media. These bibliographic records contain numerous data elements, including author, title, publishing agency, date, and Superintendent of Documents class number. The cataloging records are entered into the OCLC (Online Computer Library Center, Inc.) system via the Internet.
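To make the preceding description concrete, the sketch below shows the kinds of data elements such a bibliographic record carries. The field names and sample values are hypothetical; actual records follow the MARC standard and are entered into the OCLC system, as described above.

```python
# Illustrative bibliographic record carrying the data elements named above
# (author, title, publishing agency, date, SuDoc class number). The field
# names and sample values are hypothetical; real records use the MARC
# format rather than this simplified structure.

from dataclasses import dataclass

@dataclass(frozen=True)
class CatalogRecord:
    title: str
    author: str
    publishing_agency: str
    date: str
    sudoc_class: str          # Superintendent of Documents class number
    formats: tuple = ("paper",)

record = CatalogRecord(
    title="Annual Report of Widget Statistics",  # hypothetical title
    author="Office of Examples",                 # hypothetical corporate author
    publishing_agency="Department of Examples",
    date="2000",
    sudoc_class="EX 1.1:2000",                   # made-up SuDoc-style number
)
print(record.sudoc_class)
```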
The principal outputs of the Branch are the cataloging records themselves, which are distributed in machine-readable format via the Library of Congress’ Cataloging Distribution Service, the online Catalog of U.S. Government Publications on GPO Access, and the printed Monthly Catalog. The Branch also searches the Web to discover electronic documents that federal agencies have not provided to the FDLP. Depository Administration Branch. The Depository Administration Branch manages document acquisition, maintains shipping lists, tracks fugitive publications, develops item selection lists, acts as the documents distribution agent to the foreign libraries in the International Exchange Service program, and maintains the GPO Classification Manual and List of Classes. The Depository Distribution Division manages both the Federal Depository Library and the International Exchange Service distribution programs through the physical receipt, storage, handling, and mailing of tangible government publications. The Division uses the Automated Depository Distribution System (ADDS) to guide the physical distribution of publications to depository libraries. The Division also coordinates, negotiates, and manages interagency agreements—primarily for products such as maps—as well as standby contracting and delivery carrier contracts to support distribution programs. In fiscal year 2000, the Division distributed an average of 102,500 copies weekly through the depository library program and about 6,900 copies weekly through the International Exchange Service program. Depository Processing Branch. The Depository Processing Branch uses ADDS to perform the physical distribution of publications to depository libraries. Depository Mailing Branch. The Depository Mailing Branch is responsible for packaging, weighing, and shipping materials sent through the Federal Depository Library and International Exchange Service programs. This Branch also processes claims for nonreceipt of publications from the participating depository libraries. The Office of Electronic Information Dissemination Services (EIDS) employs 22 workers and is made up of three units (figure 9): Product Services, the GPO Access User Support Team, and the Electronic Product Development Team. EIDS is responsible for assisting federal agencies in disseminating their electronic government publications through the Superintendent of Documents. The Office works with the GPO Production Department in all stages of development, from the creation, implementation, and maintenance of electronic products to their final sale through GPO Access, including customer support and training. This function includes advising agencies of alternative dissemination platforms and helping them define their requirements for electronic dissemination. The Office also performs strategic planning for the GPO Access program and establishes the program’s policies, procedures, and budget. The GPO Access Internet servers and Web sites are operated and maintained by staff of the GPO Production Department. According to GPO, EIDS expended about $3.3 million in fiscal year 2000 (table 8). About $1.3 million, or 39 percent, of the EIDS budget went for compensation and benefits. Over $1.1 million of fiscal year 2000 expenditures were for information technology—including the operations and maintenance of GPO Access—provided on a reimbursable basis by GPO’s Production Department.
The Product Services group manages the Web content of GPO Access and develops GPO Access participation and training programs, promotional materials, training guides, and operational manuals. The group also manages the Federal Bulletin Board and administers the U.S. Fax Watch System. Fax Watch is a free fax-on-demand service that provides information on the products and services available from the Superintendent of Documents. The GPO Access User Support Team provides support to GPO Access users. The Team also supports electronic product sales and users of GPO’s U.S. Government Online Bookstore. The Electronic Product Development Team develops new electronic products for federal agencies and develops proposals to meet agency electronic publishing needs. The Documents Sales Service includes four major units (figure 10): the Sales Management Division, the Order Division, the Field Operations Division, and Laurel Operations, which includes the Warehouse Division and Retail Distribution. The Documents Sales Service (DSS) is the Superintendent of Documents’ largest entity, with 450 (about 75 percent) of its 597 employees. The Documents Sales Service purchases, warehouses, announces, and distributes government documents in accordance with Titles 1 and 44 of the U.S. Code. Functions performed by this group include providing subscription services for government publications and selling single publications. The Service operates phone, mail, fax, and electronic order services at the central office in Washington, D.C., and in Pueblo. It also sells publications through government consigned sales agents and GPO bookstores, and provides By-Law and Agency Distribution Services for Congress, the General Services Administration, and other federal agencies. Products may be ordered in person at GPO bookstores, by e-mail or mail, or through the GPO Online Bookstore. The Documents Sales Service is funded through GPO’s Revolving Fund. According to GPO, in fiscal year 2000, DSS expended about $58 million (table 9). About $23 million—41 percent—of DSS expenditures in the same year went for personnel compensation and benefits. Almost $13 million of DSS’ expenses were for the cost of publications sold. The Sales Management Division is responsible for product acquisition, pricing, inventory management, outreach to federal agency publishers, market research, customer surveys, and product promotion and advertising. The Division also maintains and updates bibliographical information and catalogs documents. The Division is made up of three units (see figure 10): the Documents Control Branch, the Bibliographic Systems Branch, and the Promotion and Advertising Branch. It employs 56 workers. Documents Control Branch. The Documents Control Branch selects and purchases information products for sale and provides inventory management of sales products. Bibliographic Systems Branch. The Bibliographic Systems Branch creates, updates, and maintains bibliographic information for the Sales of Publications Program. This information is used to create the sales product catalog and other products to identify materials for sale. Promotion and Advertising Branch. The Promotion and Advertising Branch provides promotional and advertising services in support of all Superintendent of Documents programs.
The Order Division manages the clerical processes in the fulfillment of requests for sales publications and subscriptions, deals with congressional sales correspondence and inquiries, maintains sales information, processes customer complaints, and establishes book dealer and reseller accounts. The Division is made up of three units (see figure 10): the Publication Order Branch, the Mail List Branch, and the Receipts and Processing Branch. The Order Division had 145 employees in fiscal year 2000. Publication Order Branch. The Publication Order Branch receives telephone inquiries and orders via the Order Desk and handles mail inquiries and publication customer complaints. Mail List Branch. The Mail List Branch processes requests for subscription services, maintains various mailing lists (including reimbursable mailing lists, standing order customer lists, and marketing lists), and responds to subscription customer complaints. Receipts and Processing Branch. The Receipts and Processing Branch opens and organizes mail and manages various accounts, including deposit accounts, government accounts, bookstore transaction accounts, and consigned sales agent accounts. The Branch also handles customer refunds. The Laurel Operations unit employs 105 personnel and is made up of two divisions (see figure 10): the Retail Distribution Division and the Warehouse Division. The Retail Distribution Division supports processing of sales and federal agency reimbursable customer orders, while the Warehouse Division is responsible for receiving and shipping. The Laurel Operations unit is housed in two buildings, but in a recent GPO cost-savings reorganization, it relinquished 23,563 square feet for the storage of paper. The Laurel Operations unit provides oversight of the Retail Distribution and Warehouse Divisions, including the operation of the Receiving, Storage, and Shipping Branches. Specifically, Laurel Operations:
- ensures that all stock received in the Laurel complex conforms to contract specifications;
- manages bulk orders, the addition or removal of items from bulk storage, and bulk quantities of all sales, By-Law Distribution, and reimbursable stock;
- deals with procedural issues regarding the shipping control and shipping operation sections, including shipment of outgoing orders;
- prepares and maintains government bills of lading;
- manages the pickup and delivery of materials in the Laurel complex by GPO truck drivers; and
- processes subscriptions and mails subscription and nonsubscription items.
The Retail Distribution Division is responsible for the processing of sales and federal agency reimbursable customer orders. The Division employs 72 people. The functions of its branches are given in table 10. The Warehouse Division is responsible for receiving, storing, and shipping publications purchased by the Superintendent of Documents for its Sales of Publications program. Its branches perform receiving, storage, and shipping activities for both the Sales and Consigned Publications Programs (see table 11). It employs 30 staff. The Field Operations Division oversees 23 bookstores nationwide, providing local access to government information. The Division runs the Sales Agent Program, which allows federal agencies to sell publications on behalf of GPO, and operates the Public Documents Distribution Center at Pueblo, Colorado. The Division employs 139 personnel, 46 in the Pueblo Branch and 93 in the Bookstore Branch. Table 12 gives details of the branch functions.
The Documents Technical Support Group has two branches (figure 11): the Operations Branch and the Planning and Development Branch. The Group, with 17 employees, is the smallest of the four Superintendent of Documents operating units. The Documents Technical Support Group provides management and support services essential to the successful operation of Superintendent of Documents programs. Functions performed by this group include defining requirements for automated systems and working with other Superintendent of Documents units on implementation, as well as conducting annual, cyclical, and special inventory projects. The Group is also responsible for the quality assurance, policy and procedures, and forms management programs, and serves as a liaison with Personnel Services and Financial Management Services on personnel programs, payroll, and budgetary matters. Documents Technical Support Group expenditures (table 13) were the smallest of the Superintendent of Documents’ operating units. According to GPO, in fiscal year 2000, the Group expended about $1.3 million. About 85 percent of its expenditures were for personnel compensation and benefits; most of the remaining expenditures were for rents, communications, and utilities. The Planning and Development Branch serves as the central coordination point for Superintendent of Documents-wide plans and programs, manages automated data processing (ADP) systems development for the Superintendent of Documents organization, serves as the coordination point with the GPO Office of Information Resources Management and outside information system contractors, and reviews all information system requests to assure compatibility and consistency with overall data automation goals and objectives. The Operations Branch serves as the central coordination and control point for Superintendent of Documents activities that cross organizational and functional lines; manages the Documents Reports, Forms, Policies and Procedures, Directives, Property, Space Utilization, and Quality Assurance programs; and is responsible for Superintendent of Documents automated annual and cyclical inventory programs. The Superintendent of Documents manages seven major programs: (1) Cataloging and Indexing of federal documents, (2) the Federal Depository Library Program (FDLP), which is responsible for distributing federal documents to over 1,300 participating libraries, (3) GPO Access, (4) the International Exchange Service (IES), which provides for the distribution of government publications to foreign libraries in exchange for publications produced by their governments, (5) the By-Law Distribution Program, focused on the distribution of government printed materials as directed by law, (6) the Sales of Publications program, with 23 bookstores as well as telephone and mail-order sales operations, and (7) the Agency Distribution Services Program, which mails, at the request of agencies and members of Congress, certain publications specified by law. Cataloging and indexing are the tools used by the Superintendent of Documents to identify and organize government publications by title, subject, and agency, and provide this information to users through indexes, catalogs, and online retrieval systems.
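The indexing function named in that paragraph, organizing publications so users can find them by title, subject, or agency, can be illustrated with a toy retrieval structure. The sketch below is a plain dictionary-based index over hypothetical records; it is not MOCAT or the Catalog of U.S. Government Publications, both of which are described in the next section.

```python
# Toy illustration of the indexing idea described above: organize records
# so they can be retrieved by title, subject, or agency. The records are
# hypothetical, and this simple structure stands in for the real catalogs.

from collections import defaultdict

records = [
    {"title": "Widget Statistics", "agency": "Department of Examples",
     "subjects": ["widgets", "statistics"]},
    {"title": "Widget Safety Handbook", "agency": "Office of Examples",
     "subjects": ["widgets", "safety"]},
]

index = defaultdict(list)
for rec in records:
    for key in [rec["title"], rec["agency"], *rec["subjects"]]:
        index[key.lower()].append(rec["title"])

print(index["widgets"])  # ['Widget Statistics', 'Widget Safety Handbook']
```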
The Cataloging Branch of the Library Programs Service manages the Cataloging and Indexing program with assistance from the staff of the Depository Administration Branch, who assign the Superintendent of Documents classification number to government publications. The Cataloging Branch publishes—under the legal authority of 44 U.S.C. 1710 and 1711—the Monthly Catalog of United States Government Publications (MOCAT). MOCAT is made up of bibliographical records of government publications in tangible and electronic media published by all three branches of the government. The program maintains an online version of MOCAT called the Catalog of Government Publications (CGP). Records are added to the CGP daily, with about 22,000 records added annually. The CGP provides links from bibliographic citations to electronic documents. The Cataloging Branch also produces the U.S. Congressional Serial Set Catalog, establishes and maintains cataloging guidelines and policies, corrects catalog records, and prepares machine-readable cataloging records for the Online Computer Library Center, Inc. (OCLC) system. In fiscal year 2000, 30,124 titles were classified by the Depository Administration Branch, and 18,552 titles were cataloged by the Cataloging Branch (figure 12). This work was performed by 23 staff members at a cost of $2.7 million. As shown in figure 12, the cataloging and indexing workload has steadily decreased from fiscal year 1991, when the Branch classified 62,000 titles and cataloged 29,000 titles, to fiscal year 2000, with 30,000 titles classified and 19,000 cataloged. The program is dependent on the Internet and the OCLC systems for the production of bibliographic records. Once created, the records are provided to GPO mainframe systems supporting various programs managed by the Superintendent of Documents. The depository library program is a national network of depository libraries designed to ensure free public access to government information. One of the largest Superintendent of Documents programs, it focuses on the acquisition and distribution of depository materials and the coordination of over 1,300 federal depository libraries in the 50 states, the District of Columbia, and U.S. territories. Libraries that have been designated federal depositories maintain these information products as part of their existing collections and are responsible for ensuring that the public has free access to the material provided by the FDLP. Fifty-three of these libraries are designated regional libraries. Regional libraries receive all materials distributed through the FDLP and are required to retain all received government documents in perpetuity, although they may withdraw superseded publications from their collections. An advisory group representing the library and information community—the Depository Library Council—assists GPO in identifying and evaluating alternatives for improving public access to government information through the program. Established in 1972, the 15-member Council is structured to provide the Public Printer with a diverse range of opinion and expertise. The Library Division is responsible for managing the Cataloging Branch, the Depository Administration Branch, and Depository Services.
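As a quick check on the workload figures reported above, the decade-long decline works out as follows, using the rounded fiscal year 1991 and 2000 counts from the text:

```python
# Percent decline in the cataloging and indexing workload, fiscal years
# 1991 to 2000, computed from the rounded counts cited above.

def pct_decline(start: int, end: int) -> float:
    return (start - end) / start * 100

print(f"titles classified: {pct_decline(62_000, 30_000):.0f}% decline")  # about 52%
print(f"titles cataloged:  {pct_decline(29_000, 19_000):.0f}% decline")  # about 34%
```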
The Depository Distribution Division manages the distribution programs of both the Federal Depository Library and the International Exchange Service through the physical receipt, storage, handling, and mailing of tangible government publications. The Division also coordinates, negotiates, and manages interagency agreements, as well as standby contracting and delivery carrier contracts to support distribution programs. In fiscal year 2000, it distributed 27,761 tangible titles, including 12,422 paper titles, 14,572 microfiche titles, and 617 electronic titles. As shown in figure 13, the volume of titles in paper and microfiche formats distributed to the participating libraries declined significantly during the last decade. The distribution of paper titles shrank by 8,714 titles, from 21,186 titles in fiscal year 1991 to 12,472 titles in fiscal year 2000. During the same period, the distribution of microfiche titles shrank by 20,679 titles, from 35,251 in fiscal year 1991 to 14,572 titles in fiscal year 2000. The decline in the number of distributed titles was reflected in the decline in the number of distributed copies. As shown in figure 14, the number of paper copies distributed during the last decade declined by 3,379,000 copies, from 9,303,000 copies in fiscal year 1991 to 5,924,000 copies in fiscal year 2000. An even larger decline was experienced in the number of distributed microfiche copies, which declined during the same period by 11,088,000 copies, from 17,141,000 copies in fiscal year 1991 to 6,053,000 copies in fiscal year 2000. As shown in figures 15 and 16, a similar decline was experienced in the distribution of electronic titles and copies—including CD-ROMs, DVDs, and diskettes. Although the volume of the distributed electronic titles initially increased between fiscal years 1992 and 1998, it declined during the last two years, from 836 titles in fiscal year 1998 to 617 titles in fiscal year 2000. Similarly, there was a decline in the number of distributed copies, from the peak of 341,105 copies distributed in fiscal year 1997 to 240,965 copies distributed in fiscal year 2000. GPO Access is the principal mechanism used by the Superintendent of Documents for the electronic dissemination of government documents to the public. Established by Public Law 103-40, the Government Printing Office Electronic Information Access Enhancement Act of 1993, GPO Access offers free Web-based access to information from all three branches of the federal government. The act requires that the Superintendent of Documents (1) maintain an electronic directory of federal electronic information, (2) provide a system of online access, and (3) operate an electronic storage facility. In addition, Senate Report 103-27 incorporated the Federal Bulletin Board as the fourth component of GPO Access. The Federal Bulletin Board is a free electronic bulletin board service of the Superintendent of Documents, enabling agencies to provide the public with self-service access to federal information in electronic form. The Bulletin Board is a component of GPO Access, providing utility and support files for use with online, searchable databases on GPO Access. GPO Access also provides other Internet-related services on a reimbursable basis to other federal agencies. These services include hosting of 16 federal agency Web sites and 32 agency databases of Government Information Locator Service (GILS) records.
GPO Access fulfilled approximately 17 million document retrievals per month, with a total of more than 203 million retrievals in fiscal year 2000. Three documents—the Code of Federal Regulations, the Federal Register, and the Commerce Business Daily—accounted for over 172 million retrievals (85 percent of all retrievals during this period). A major part of the GPO Access site supporting the Superintendent of Documents programs is the FDLP Electronic Collection. The collection is made up of (1) core legislative and regulatory documents residing permanently on GPO Access, (2) other remotely accessible documents maintained either by GPO or by other institutions with which GPO has established formal agreements, (3) remotely accessible documents that GPO identifies and links to but that remain under the control of the originating agencies, and (4) tangible electronic products, such as CD-ROMs, distributed to depository libraries. In addition, GPO Access provides access to Library Programs Service information pertaining to the FDLP. The Office of Electronic Information Dissemination Services (EIDS), a unit of the Superintendent of Documents, manages strategic aspects of GPO Access, while the GPO Production Department manages and operates GPO Access on a day-to-day basis. The 22 EIDS employees are responsible for GPO Access strategic planning, the Web content management of GPO Access products and services, and the management of the Federal Bulletin Board. In addition, EIDS is responsible for providing telephone, e-mail, and fax support to users of GPO Access, and for electronic product sales and support. The day-to-day operation and maintenance of GPO Access is carried out on a reimbursable basis by the employees of the GPO Production Department. The By-Law Distribution Program distributes—primarily by mail—documents and other tangible information products for executive agencies and members of Congress. Section 1701 of Title 44, U.S.C., prohibits the use of appropriated funds by federal departments and agencies to address, wrap, mail, or otherwise dispatch a publication for public distribution, except for certain specified materials. Such distribution work is required to be performed at GPO with the use of agency mailing lists. Nine categories of publications are specified for By-Law Distribution, including the Daily Congressional Record, Legations, the Official Report of the Supreme Court, and Presidential Documents. In fiscal year 2000, the program was supported by about three FTEs, mostly in the Laurel Operations Center, and $529,000 was expended. The International Exchange Service (IES) provides for the distribution of U.S. government publications to about 70 libraries in 48 nations in exchange for publications produced by their governments. Foreign publications are sent to the Library of Congress, which administers the program. The Library Programs Service manages the distribution component of the Library of Congress’ IES program. The Depository Administration Branch is responsible for acquiring publications, and the Depository Mailing Branch is responsible for packaging, weighing, and shipping materials to IES foreign recipients. In fiscal year 2000, the Service distributed more than 360,000 paper, CD-ROM, and microfiche copies to 66 IES libraries. The IES program cost $687,000 in fiscal year 2000 and employed four people.
On behalf of certain federal agencies, the Consigned Sales Branch of the Documents Sales Service distributes publications to recipients designated by agencies and charges for the cost of services performed. This is a Revolving Fund activity. The program expended $5 million and employed 50 staff. GPO sells government publications to the public on behalf of other government agencies. Although most sales are made through mail and telephone orders, the program operates 23 bookstores throughout the country and a distribution center in Pueblo, Colorado. Unlike most of the Superintendent of Documents programs, which are funded through appropriations, the sales program is funded through the GPO Revolving Fund. In fiscal year 2000, the program generated about $45 million in revenues, with expenditures of $58 million—a loss of about $12.2 million. The program employed 450 people, with most of the employees supporting order fulfillment and bookstore sales. Almost 74 percent of the 563,000 sales orders in fiscal year 2000 were mail orders, and there were 87,000 telephone and 58,000 walk-in orders. As shown in figure 17, the number of sales orders declined from 1,597,000 in fiscal year 1991 to 686,000 in fiscal year 2000. GPO officials believe that the shift from a print to an electronic format, with titles now available free over the Internet, has substantially reduced the demand for printed publications from GPO, especially for relatively expensive publications, such as the Commerce Business Daily and the Federal Register. Paper subscriptions to the Federal Register, for example, dropped off more than 50 percent from fiscal year 1994 to fiscal year 2000. The decline in orders is reflected in the decline in revenues. Sales orders, sales revenue, and profits have been declining in recent years. As shown in figure 18, net sales revenues declined from $81.5 million in fiscal year 1991 to $45.9 million in fiscal year 2000. An additional factor contributing to the mounting losses is the destruction of unsold documents. As noted earlier, the sales program, as required by law, destroys a substantial number of unsold or superseded documents every year. In fiscal year 2000, it destroyed 1.9 million documents with a retail value of $22.3 million and an actual cost of $2.8 million. As shown in figure 19, the sales program operates at a deficit, with a cumulative loss of over $32 million between fiscal year 1996 and fiscal year 2000. The program lost over $12 million in fiscal year 2000. There are currently 23 GPO bookstores located in cities throughout the United States. The GPO bookstore with the greatest sales is the main bookstore in Washington, D.C., which accounted for $785,789 in fiscal year 2000 sales. The Cleveland bookstore has the least sales, with $179,390 in fiscal year 2000 sales. Overall, however, GPO bookstores experienced a net loss of just over $700,000 in fiscal year 2000, with the Chicago bookstore losing over $250,000. GPO’s draft Sales Program 5-Year Plan (2001–2006) identifies numerous goals for achieving a revitalized and healthy sales program. These goals include reducing inventory to only 15 percent of the amount originally purchased, reducing the number of bookstores, enhancing the U.S. Government Online Bookstore, improving services to customers, and establishing on-demand printing production for some types of printed items.
The Office of the Superintendent of Documents and its four units—the Library Programs Service, Electronic Information Dissemination Services (EIDS), the Documents Sales Service, and the Documents Technical Support Group—are provided space and other services on a reimbursable basis by GPO. These services include administrative services, such as payroll and personnel; rents, communications, and utilities; and information technology support. The cost allocation for these components is shown in table 14. GPO provides to the Superintendent of Documents administrative services and support, including payroll, other employee services, and services provided by the Office of Inspector General, the Office of Information Resources Management (OIRM), and others. In fiscal year 2000, the Superintendent of Documents expenses for administrative services and support were about $4.7 million, according to GPO. The largest share of the costs for administrative services and support—$3.4 million—was allocated to the Documents Sales Service. In fiscal year 2000, the Superintendent of Documents expenses for the rents, communications, and utilities provided by GPO were about $7.3 million. The largest share of the costs for rents, communications, and utilities—$5.9 million—was allocated to the Documents Sales Service. The costs to the remaining units were far smaller—$645,000 for the Library Programs Service, $334,000 for the Office of the Superintendent of Documents, $224,000 for Electronic Information Dissemination Services, and $166,000 for the Documents Technical Support Group. Table 15 shows space allocation by organizational units. GPO has approximately 240 staff providing IT support, with the majority located in OIRM. The Superintendent of Documents also receives support from the Production Department. OIRM maintains and operates a mainframe computer and several mainframe software systems used by the Library Programs Service to manage the FDLP. The Production Department operates and maintains GPO Access for the Superintendent of Documents. All headquarters units, including the Superintendent of Documents, use GPO’s local area network. In fiscal year 2000, the Superintendent of Documents expenses for the information technology provided by GPO were about $3.5 million. The largest share of the information technology support cost was allocated to the Documents Sales Service ($1.6 million), followed by Electronic Information Dissemination Services ($1.1 million) and the Library Programs Service ($700,000). IT support is supplied to four programs: the FDLP, Cataloging and Indexing, Sales of Publications, and GPO Access. The FDLP is supported by four systems: (1) the Depository Distribution Information System (DDIS), (2) the Acquisition, Classification and Shipment Information System (ACSIS), (3) the Automated Depository Distribution System (ADDS), and (4) the System for the Automated Management of Text from a Hierarchical Arrangement (SAMANTHA). The four systems are operated and maintained by the GPO OIRM. One component of ADDS is maintained under contract by its developer. These systems are described as follows: 1. DDIS maintains depository library address information, depository item number selection profiles, item number and Superintendent of Documents classification information, and an interface to ADDS. The DDIS databases contain about 2.7 million records. 2. ACSIS is a large system with almost 1 million records.
Its database contains bibliographic information for all titles in the depository library system that are ordered, received, processed, and distributed to depository libraries. The system has an interface to DDIS for Superintendent of Documents classification number, piece number, and shipping count information. 3. ADDS (formerly known as the lighted bin system) was procured from Engineered Systems, Inc., of Omaha. This system aids in the distribution of government materials to over 1,300 depository libraries. ADDS guides the physical distribution of publications to depository libraries by providing visual cues—via lighted bins—to personnel, showing which libraries should receive copies of the publications being distributed. ADDS operates with a daily batch interface with DDIS for item number, shipping count, depository address, and shipping list information. The ADDS database contains about 20,000 records. The system handling the actual lighting of the bins is maintained by Engineered Systems. Follow-up audit trails and report generation are performed on the OIRM mainframe. 4. SAMANTHA prepares OCLC bibliographic records in MARC (machine-readable cataloging) format for publication of the printed version of the Monthly Catalog, the Catalog of U.S. Government Publications, and the creation of cataloging tapes sold by the Library of Congress. The Library Programs Service is planning to replace these systems with an off-the-shelf Integrated Library System (ILS) in fiscal year 2002. However, if the depository library program were transferred to the Library of Congress, much of the information system support currently provided by the mainframe systems could be provided to the program by the Library’s ILS. The GPO Production Department supports the FDLP on a reimbursable basis. The Department maintains the servers with the FDLP Electronic Collection, develops interactive Web-based services for depository libraries—including the Library Directory and the Biennial Survey—and maintains the servers where the FDLP Desktop and Locator Services reside. GPO Access operates on more than 40 Compaq Alpha servers running the UNIX operating system and a dedicated Fiber Distributed Data Interface ring. Public access is provided through a broadband connection to GPO’s Internet service provider. Data are formatted and searched through the Wide Area Information Server (WAIS). In support of the Sales of Publications program, GPO has spent about $11 million since 1988 to develop and implement its Integrated Processing System. The system is to replace 15 stand-alone legacy systems currently supporting the Sales of Publications program, which officials stated are over 20 years old and do not meet GPO’s current and future needs. Currently, GPO’s goal is to complete acceptance testing of the contractor-developed system in June 2001 and begin implementing it shortly thereafter. Based on an initial list developed in 1996, the Superintendent of Documents has identified a set of “Essential Titles for Public Use in Paper Format.” [The table of essential titles, which indicates whether each title is also available online, is not reproduced here.] Advances in information technologies—and most notably in electronic publishing and the Internet—have triggered numerous proposals to reform and restructure the nation’s information dissemination infrastructure in general and the Office of the Superintendent of Documents in particular.
These reforms range from a 1988 proposal by the Office of Technology Assessment (OTA) for the creation of a Government Information Corporation to the most recent proposal by the National Commission on Libraries and Information Science (NCLIS) to consolidate federal printing and dissemination programs in a newly created Public Information Resources Administration. In 1988, an OTA report addressed the impact of electronic technology on federal information dissemination, examined the infrastructure for federal information dissemination, and offered several institutional alternatives for the Superintendent of Documents. These alternatives included centralizing all or most government dissemination functions in one office or agency; privatizing the Superintendent of Documents; reorganizing the Superintendent of Documents as part of a legislative printing office; and consolidating the Superintendent of Documents with the National Technical Information Service and creating a Government Information Office or Corporation. In a 1993 report to the Superintendent of Documents, the Depository Library Council recommended restructuring the FDLP. The Council noted that, as structured, the depository library program was foundering so badly that its very existence was threatened, and it suggested restructuring the program to ensure its future viability. The Council proposed ten alternative FDLP models, ranging from the creation of a National Collection of Last Resort to the development of a network of super-regional depository libraries. No action was taken on the Council’s proposals. The next effort to reorganize the Office of the Superintendent of Documents came in 1993, when the House of Representatives passed the Government Information Dissemination and Printing Improvement Act of 1993 (H.R. 3400). Title XIV of the bill transferred the position, functions, and staff of the Office of the Superintendent of Documents to the Library of Congress. The bill stated that the Superintendent of Documents shall be appointed by, and serve at the pleasure of, the Librarian of Congress. Concerned about the potential impact of the Internet on federal information dissemination, the Congress passed the Government Printing Office Electronic Information Access Enhancement Act. The act directed the Superintendent of Documents to provide a system of online access to the Congressional Record and the Federal Register by June 1994. In response, GPO created GPO Access, an Internet service that initially provided free online access to FDLP libraries. In December 1995, GPO made GPO Access available free of charge to all users over the Internet. In 1998, a government printing reform bill was introduced by Senator John Warner as S. 2288—the Wendell H. Ford Government Publications Reform Act of 1998. The focus of the legislation was to restructure the federal printing laws to eliminate reliance on the constitutionally suspect authorities of the Joint Committee on Printing and to strengthen the Government Printing Office’s control of federal agency printing and other dissemination activities and its responsibility for providing permanent public access to government information. The bill would have abolished the Joint Committee on Printing and transferred many of its responsibilities to the Committee on House Oversight and the Senate Committee on Rules and Administration. The bill also would have renamed GPO the Government Publications Office, to be administered by the Public Printer.
The production of all government publications, regardless of form or format, was to be centralized in the new Government Publications Office, except such production required by the Supreme Court and certain national security entities. A presidentially appointed Superintendent of Government Publications Access Programs would have assumed the duties of the current Superintendent of Documents, administering the GPO sales program, the federal depository library program, and the GPO electronic documents access programs. The most recent proposal to reorganize the federal information infrastructure was made in January 2001 by NCLIS. The Commission proposes the creation of a new agency whose primary mission is to serve as the federal government’s focal point for providing timely dissemination and permanent public availability of its public information resources. This agency, provisionally called the Public Information Resources Administration (PIRA), would be in the executive branch and would bring together under one management the National Technical Information Service, the programs currently under the Superintendent of Documents at GPO (including the FDLP), and other information sales and dissemination programs from all three branches of government. In the Commission’s sweeping proposal, the Superintendent of Documents would be renamed the Superintendent of Public Information Resources, reporting directly to the head of the PIRA. The FDLP would be renamed the Public Information Resources Access Program (PIRAP), and the Federal Depository Libraries would be renamed Public Information Resources Access Libraries (PIRA Libraries). Neither the basic structure of the FDLP nor the congressional designation and other criteria for becoming a federal depository library would be changed. 1. Our report refers to security issues within the context of authenticity. Specifically, we note that authentication provides assurance that a document has not been surreptitiously or accidentally modified; these are problems that can be avoided with appropriate security controls. 2. The report has been revised to reflect that electronic documents may offer greater functionality than paper documents. However, we disagree with the Public Printer’s statement that the only barriers to the use of printed documents are basic literacy and physical disability. In fact, other barriers exist, including geographic distance from the library where the printed information is maintained. In addition, the costs associated with maintaining paper documents may limit what is readily available. 3. Our report states that GPO would face major challenges—including ensuring equity of access—in moving to an environment in which government information is disseminated solely in electronic format. GPO raises valid concerns about how to achieve equity of access in an electronic environment and about whether this would be more achievable by the Library. Resolving such concerns is beyond the scope of what we were mandated to study. 4. We agree that total program costs may increase if the number of documents eligible for inclusion in the depository library program also increases because of the availability of documents on the Web or other factors. We also recognize in our report that electronic dissemination may result in costs shifting to end users and depository libraries.
Finally, an economic analysis of dissemination costs is beyond the scope of this study, but we do include the results of GPO’s own study of the costs of converting to an all-electronic depository library system. 5. The policy decision of whether to transfer the library program is one that rests with the Congress. Our role is to provide factual information—including advantages and disadvantages—that can be used to inform the decision-making process. The question of how public access to government information might be improved by transferring the depository library program to the Library is a valid one but clearly beyond our charge. 6. Our report points out that if a decision is made to transfer the depository library program, policymakers may want to consider limiting the physical movement of staff and equipment. This is offered as a measure for ensuring the success of a potential transfer, in response to our mandate. 7. The Public Printer is correct in stating that the depository library program is much larger in scope than the Library’s National Library Service for the Blind and Physically Handicapped. In recognition of this, our report notes that the Library cautioned in its studies that administering the depository library program would considerably expand the Library’s mission. 8. A transfer of the depository library program would clearly require changes to legislation. However, a detailed discussion of the specific changes needed would be difficult, if not impossible, without knowing the details of a policy decision that, to date, has not been made. The Public Printer’s question concerning the effect of a transfer on fugitive documents is valid, and our report highlights the concerns of the library community related to this issue. 9. The Public Printer is correct in stating that if a transfer is directed, spreading GPO’s current overhead costs over the programs remaining at GPO would result in increased costs for those programs. He is also correct that some reduction in overhead costs may, as a result, be warranted. We would expect that this issue would be addressed during transition planning. Further, we would expect that the transition team would identify and consider all options for dealing with this issue, including transferring the positions supporting the depository library program to the Library. 10. If a decision is made to transfer the depository library program to the Library, the Librarian would, of course, have to address the priority of the transferred program within the context of the mission of the Library. 11. The Public Printer also points out many detailed implementation questions that our report does not address. We believe such questions can be addressed if and when a decision is made to transfer the depository library program and the details of the functions to be transferred are known. 12. In regard to the impact on transferred employees, the report identifies the concerns of unions representing GPO employees and states that issues such as seniority, salary rates, and bargaining rights should be addressed during transition planning. 13. In regard to a potential transfer of GPO Access, our report does not suggest that the Library duplicate GPO’s preparation of the databases that are derived from the printing processes currently managed by the Production Department or that dissemination of the databases be moved to another location.
If a decision is made to transfer the program, we are confident that detailed transition planning—conducted by the Library and GPO—would result in an appropriate and cost-effective division of responsibilities between the two organizations. 14. The report has been revised to reflect that GPO Access is a program. 15. The report has been revised to reflect that GPO Access also includes GPO’s U.S. Government Online Bookstore. A discussion of the remaining components is contained in appendix V. | Electronic dissemination of government documents can reduce distribution costs and make government information more usable and accessible. However, the transition to a paperless environment will require that several challenges be overcome. Transferring the depository library program to the Library of Congress entails both advantages and disadvantages. In studies done in 1993 and 1994, the Library concluded that the depository library program was not inconsistent with the mission and functions of the Library and that it might be appropriate for the Library to oversee this program. However, the Government Printing Office (GPO) believes that the Library is not an appropriate home for the depository library program because the Library's mission and operations are inconsistent with a large-scale information dissemination program. In addition, the studies and librarian organizations raised concerns about the potential negative effects of the transfer on public access to information and the availability of funds to maintain the current program. If a decision is made to transfer the depository library program, the concerns raised by library organizations and employee unions should be addressed. One option for addressing these issues is to form a GPO/Library transition team to develop appropriate strategies.
The death toll from Monday's shooting at a Cleveland-area high school rose to three, making it the deadliest such incident in the U.S. in seven years, as details emerged about the 17-year-old suspect's troubled upbringing.
A student wounded in a school shooting in northeast Ohio has been declared brain dead, the second fatality following an attack by a teen gunman, Jack Nicas reports on Lunch Break. Photo: AP.
Photo (Reuters): T.J. Lane, suspected in a school shooting Monday, leaves court Tuesday.
Photo (Associated Press): Parents and children are reunited at Chardon Middle School in Northeast Ohio, following a shooting at Chardon High School.
In a hearing Tuesday, a juvenile-court judge ordered suspect T.J. Lane held for 15 days while prosecutors prepare charges. Geauga County prosecutor David Joyce said his office would seek to try Mr. Lane as an adult, and that charges likely would include three counts of aggravated murder.
The incident, in which two students also were wounded, took place in Chardon, Ohio, about 30 miles from Cleveland, and authorities said the motive remained unclear. "He chose his victims at random," Mr. Joyce told reporters after the hearing Tuesday. "This is not about bullying. This is not about drugs. This is someone who is not well and I'm sure in our court case we'll prove that."
Attempts to reach Mr. Lane's family were unsuccessful. A woman who answered the phone at his maternal grandparents' residence declined to comment. Mr. Lane's attorney, Robert Farinacci, didn't return calls and an email seeking comment.
Authorities said Mr. Lane fired 10 shots at a group of students in the Chardon High School cafeteria as school began Monday, then shot an additional student elsewhere in the cafeteria, then proceeded down a hallway, where he shot one more student.
Photo (Mark Duncan/Associated Press): A group of students and parents prayed Tuesday for victims of Monday's school shooting in Chardon, Ohio.
He then fled and was arrested nearby, authorities said. Police said Mr. Lane told them he didn't know the victims.
Demetrius Hewlin died Tuesday morning at MetroHealth Medical Center in Cleveland. His death followed that of Russell King Jr., 17 years old, who died late Monday, and 16-year-old Danny Parmertor, who succumbed to his wounds hours after the attack.
Another male victim remains hospitalized, while a female victim has been released to her family, officials said.
The Chardon incident is now the deadliest high-school shooting in the U.S. since 2005, when a teenager killed seven people at Red Lake Senior High School in Minnesota, according to the Brady Center to Prevent Gun Violence, which advocates for tougher controls on firearms. The assailant in that case also killed his grandfather, his grandfather's companion and himself.
Peers and neighbors described Mr. Lane as a quiet boy with a troubled family. Court records show that in 2002 his father, Thomas Lane Jr., was charged with the attempted murder of his ex-wife and pleaded guilty to felonious assault.
The high school's cafeteria serves as an early-morning gathering spot, where some students wait for buses that take them to other programs.
Mr. Lane ordinarily took the early bus to Lake Academy, an alternative school whose website says it serves "at-risk" students "experiencing serious challenges in meeting expectations within traditional school settings."
Three of his alleged victims routinely took the same bus, but to a vocational school, the Auburn Career Center, said Auburn superintendent Margaret Lynch.
Chardon High senior Garrett Szalay said his girlfriend was close with Mr. Lane in junior high school, when "he was a great kid. He just had a wall up and she had to break it down." Mr. Lane had several friends in junior high and wasn't bullied "all that much," Mr. Szalay said. When he began attending Lake Academy in ninth grade, Mr. Lane lost touch with Mr. Szalay's girlfriend and "it all started falling apart for him," Mr. Szalay said.
Mr. Lane began living off and on with his paternal grandparents in Chardon several years ago, said Carl Henderson, a former Chardon police officer and Geauga County Sheriff, who lived near the family.
Mr. Henderson, 74 years old, said he came to know Mr. Lane because he would often jog by his home. "He was a nice young man," he said.
Mr. Henderson said he spent Monday evening with the family, and was told that the .22 revolver of Mr. Lane's grandfather, Thomas Lane Sr., was missing from the home on the day of the shooting.
"A .22 revolver—the same gun the sheriff's office confiscated. Just a regular little revolver, a target revolver," Mr. Henderson said.
Prosecutors said Mr. Lane confessed to taking a .22 revolver and a knife to school.
Mr. Henderson said he was a frequent hunting partner of Thomas Lane Sr. and that he knew T.J. Lane also hunted. "We all have guns. Everybody in the community has guns," he said. "I'm sure (T.J. Lane) knows how to use guns from being around the family."
Under federal law, there is no penalty for gun owners who store guns where they are accessible to children, according to Daniel Vice, senior attorney at the Brady Center.
Although some states have child-access prevention laws that penalize gun owners who don't keep guns away from children, Ohio has no such law.
Write to Jack Nicas at jack.nicas@wsj.com
A version of this article appeared February 29, 2012, on page A2 in some U.S. editions of The Wall Street Journal, with the headline: Third Teen Dead In School Attack. ||||| Story highlights "She asked if we could pray and I'm like 'Yes, please'," says 10th-grader
Suspect T.J. Lane has admitted to school shootings, prosecutor says
He is likely to be tried as an adult, Ohio's attorney general says
Three students died from the Monday attack in Chardon, Ohio
Prosecutor David Joyce said Tuesday that 17-year-old T.J. Lane has admitted taking a .22-caliber gun and a knife into Chardon High School on Monday morning and firing 10 rounds, choosing his victims randomly.
Asked by Judge Timothy J. Grendell during a preliminary hearing if he understood his rights, Lane said softly, "Yes, sir, yes, I do."
Lane will continue to be held in detention, and charges must be filed by 4:45 p.m. March 1, the judge ordered.
Joyce predicted Lane will be tried as an adult. "Absolutely," he said. "It's a matter of law in the state of Ohio. At 17 years old, committing an act like this." He predicted the high school sophomore will be charged with three counts of aggravated murder "as well as other counts."
"I guarantee that this was an aberration, this does not represent our community," Joyce told reporters. "He chose his victims at random. This is not about bullying. This is not about drugs. This is someone who is not well and I'm sure, in our court case, we will prove that to all of your desires and we will make sure that justice is done in our county."
Grendell said the court had tentatively scheduled a hearing for March 19 "should there be a filing of a motion for transfer to the adult court."
Earlier Tuesday, a third student died of wounds suffered in the shooting, hospital officials said.
Demetrius Hewlin died Tuesday morning, MetroHealth Medical Center said in a statement.
Russell King Jr., 17, was declared brain dead early Tuesday, according to the Cuyahoga County medical examiner's office.
Student Daniel Parmertor died Monday.
"We are very saddened by the loss of our son and others in our Chardon community," Hewlin's family said in a statement released by the hospital. "Demetrius was a happy young man who loved life and his family and friends. We will miss him very much, but we are proud that he will be able to help others through organ donation."
Police Chief Tim McKenna said the motive remained unclear. Students have described Lane as a withdrawn boy.
Lane told authorities that he stole the gun used in the shootings from his uncle, a source told CNN on Tuesday.
A law enforcement source said the weapon had been purchased legally.
Police found the gun inside the school, apparently dropped by the suspect as he fled, the source said.
One other student wounded in the shooting remained hospitalized Tuesday. A fifth victim was released, officials said.
Geauga County Sheriff Daniel McClelland said the community has a long way to go before it can put the shooting behind it.
"Now we move to another important phase," he said. "And while the investigation continues and we still look for the why and what and who, we now deal with a community looking to heal."
A prayer service at the Church of St. Mary in Chardon sought to speed that healing. Hundreds of people spilled outside the front of the church. Inside, those assembled applauded as school and police officials were introduced.
"These are great people and out of a very, very, very terrible tragedy, they'll rise again and they'll make this an even greater town," Gov. John Kasich told reporters outside the church.
Heather Weinrich, a 2004 graduate of the school, said she drove an hour with her elementary school-age son to attend the event because she wanted him to know what happened and she wanted to support her school.
Zack Barry, an 18-year-old senior at the school, said he was overwhelmed by the turnout of support. "It made me feel very good," he said.
Classes in the tightly knit community of 5,100, about 30 miles east of Cleveland near Lake Erie, are to resume Friday. But staff, students and parents will be encouraged to return to district schools for visits and counseling on Wednesday and Thursday, Superintendent Joe Bergant said.
Some of the victims were students who were in the cafeteria waiting for a bus to take them to Auburn Career Center, a nearby vocational school that they attended, said Maggie Lynch, the school's superintendent.
Lane is a student at Lake Academy Alternative School, a school for at-risk children, said the school's interim director, Don Ehas.
In a statement Monday, Parmertor's family said they were "torn by the loss."
"Danny was a bright young boy who had a bright future ahead of him," the family said.
Lawyer Bob Farinacci, speaking for Lane's family, said Monday night that the suspect was "extremely remorseful."
"Very, very scared and extremely remorseful," he told CNN affiliate WKYC
"He is a very confused young man right now," Farinacci said. "He's very confused. He is very upset. He's very distraught."
Like others in Chardon, Lane's family also has been left groping for an explanation.
"This is something that could never have been predicted," Farinacci said. "T.J.'s family has asked for some privacy while they try to understand how such a tragedy could have occurred and while they mourn this terrible loss for their community."
With little to go on to help make sense of the violence, many turned to cryptic Facebook postings by the alleged shooter for a glimpse into Lane's mindset -- especially a long, dark poetic rant from December 30.
The post refers to "a quaint lonely town, (where there) sits a man with a frown (who) longed for only one thing, the world to bow at his feet."
"He was better than the rest, all those ones he detests, within their castles, so vain," he wrote.
Lane then wrote about going through "the castle ... like an ominous breeze through the trees," past guards -- all leading up to the post's dramatic conclusion.
"Feel death, not just mocking you. Not just stalking you but inside of you," it says. "Wriggle and writhe. Feel smaller beneath my might. Seizure in the Pestilence that is my scythe."
The post concludes with: "Die, all of you."
Farinacci said Lane was a "fairly quiet and good kid" with good grades who was doubling up on classes to graduate in May.
"He pretty much sticks to himself but does have some friends and has never been in trouble over anything that we know about," he said.
But just before class started Monday, witnesses say, Lane silently walked up to a table of students, holding a gun. As he opened fire, the shooter was expressionless, a student recalled.
"He was silent the entire time," said student Nate Mueller, who said his ear was grazed by a bullet. "There was no warning or anything. He just opened fire."
Monday's death toll might have been higher had it not been for the actions of assistant football coach and study hall teacher Frank Hall. Students said Hall chased the gunman out of the school, and police arrested the suspect nearby a short time later.
"Coach Hall, he always talks about how much he cares about us students, his team and everyone," said student Neil Thomas. "And I think today he really went out and he proved how much he cared about us. He would take a bullet for us."
Similar praise was given to math teacher Joseph Ricci, whom 10th-grader Kaylee O'Donnell said made sure his students were safe before donning a bulletproof vest and entering a hallway, where he pulled a wounded student inside. "You're pretty brave in risking your life for students," she said.
"I actually was sitting with a girl, and she asked if we could pray and I'm like 'Yes, please.' So me and her quietly did that and a couple of my friends."
Asked how she would feel when she returns to school, she said, "It's not going to be the same, but I still feel safe."
The shooting has had national repercussions. "Violence like this should not be tolerated in our society," said House Speaker John Boehner. "But let's be honest -- there are about 250 million guns in America. They are out there but people should use them responsibly." ||||| Gallery: Shooting at Chardon High School
• 6:43 a.m. update: Second Chardon High School student, Russell King Jr., dies of gunshot wound
CHARDON, Ohio -- When the unthinkable happened at Chardon High School, this town was prepared.
Faced with a classmate-turned-gunman in the cafeteria, nearby students knew what to do from emergency drills: They fled to the teachers' lounge, barricading the door with a piano.
Teachers knew what to do, too, locking down students in safe rooms while an assistant football coach chased the gunman from the school.
Police, also trained through repeated drills and summoned by 9-1-1 calls from students, responded immediately. And parents, alerted nearly instantly about the shooting by their cellphone-equipped students, arrived in droves, leaving their cars wherever they could, to walk to the school and collect their shocked, shivering kids.
Chaos struck here Monday morning, leaving a 16-year-old boy dead and four classmates seriously wounded, but Chardon responded with calm.
"We should be proud of our officials for how they reacted and our children," Geauga County Commissioner Mary E. Samide said. "It could have been a lot worse."
Three students were flown by helicopter to MetroHealth Medical Center, where one -- Daniel Parmertor -- died and two were in critical condition late Monday, according to police. The other students were in stable and serious conditions at Hillcrest Hospital in Mayfield Heights.
The shooting rocked Chardon, an insular town of 5,000 about 35 miles east of Cleveland, usually notable for maple syrup and lake-effect snow. Police offered no motive.
"I was getting out my homework, and then I heard a pop, like someone popping a big bag of chips," said Brad Courtney, a 15-year-old freshman in study hall in the cafeteria. "Mostly I was thinking, 'Is this happening in Chardon?' It's a little place in the middle of nowhere."
School was beginning at 7:30 a.m. Monday when T.J. Lane, identified as a sophomore in last year's high school yearbook, opened fire in the cafeteria, said junior Nate Mueller, who said his right ear was grazed by a bullet.
A surveillance video shows that Lane, 17, sat down at an empty table, reached into his bag and pulled out a .22-caliber handgun, according to a source who saw the video. He walked up to a group of students and, one by one, shot at least three in the backs of their heads.
At least three friends -- Russell King, Demetrius Hewlin and Nick Walczak -- were sitting at the long rectangular table where they regularly waited for their bus to the Auburn Career Center, a vocational school, Nate said. It is not clear where Danny was shot, though Maggie Lynch, the career center's superintendent, said he was also in the cafeteria.
As Lane ran out of the cafeteria, Frank Hall, an offensive coordinator for the football team, chased him down a side hallway, according to the source. An 18-year-old girl, whom friends identified as Joy Rickers, was shot in the buttocks in the hall as the gunman fled.
Police used tracking dogs to follow the shooter's footprints and found him about 45 minutes later, a mile away, said sheriff's Lt. John Hiscox.
Students had flooded police with 9-1-1 calls, including one from Nate, who hid behind a car after running out of the cafeteria. Dozens of study hall students, meanwhile, dove under the tables before moving to the adjacent teachers' lounge. They were following the protocol they'd practiced.
"We've done lockdown drills before, so we all knew," Brad said. "It all seemed pretty quick."
Chardon schools several years ago began working with law enforcement agencies to train for a possible school shooting, Geauga County Sheriff Dan McClelland said.
On Monday, an announcement over the public-address system instructed students and teachers to turn off the lights and hide, said a student who thought it was another drill until she received text messages from friends.
Students texted each other and their parents as they waited in the lounge, where they pushed a piano in front of the door, Brad said.
"I think the kids reacted exceptionally well and need to be proud of themselves. If they barricaded the door and were calling for help, that is a great survival response," said Larry Banaszak, the police director at Otterbein College, who developed a training plan after the Virginia Tech shootings in 2007.
Banaszak instructs students to run, hide, barricade and, as a last resort, fight a shooter.
"I think that school shootings are another crisis that we need to prepare for," he said. "We know what to do when there's a fire drill. We know what to do when there's a tornado siren goes off. . . . It's all about planning for another crisis."
How to cope: Here are some ways Chardon High School parents can help their children grieve and understand the shooting incident.
• Watch for anxiety. Listen and talk to them. Find out what they are concerned about.
• Defuse their fears. School shootings are extremely rare, and school safety improves after such tragedies because administrators re-examine safety procedures.
• Watch for troubling behavior. See if their child is angry or aggressive, isolated or detached. Watch to see if the child is disengaged in friendships, activities or academic life.
• Encourage them to report alarming behavior of others. Students or friends concerned about a classmate can anonymously call 1-866-773-2587.
• School counselors will be available today from 10 a.m. to 3 p.m. at Chardon Middle School, 424 North St., and 4 to 9 p.m. at St. Mary's School, which is across the street at 401 North St.
-- Pat Galbincea
Police officers from Chardon, the Geauga County Sheriff's Office, the State Highway Patrol and other agencies, as well as emergency medical technicians, responded quickly, said Chardon Chief Tim McKenna.
School officials shepherded high school students across the campus to Maple Elementary School, where their parents could pick them up. The FBI and police interviewed students who witnessed the shootings before letting them go home.
Parents arrived as quickly as they could, leaving their cars on the side of the road and rushing to hug their children. No one honked. There was no chaos. Moms and dads lined up on sidewalks. When they were reunited with their kids, some of whom left school in only their gym shorts, they threw their coats around them to keep them warm.
People in Chardon are close, said Darlene Judd, who was relieved to find her two sons unharmed. "Everybody knows everybody; everybody cares for everybody. Even though it's not your kid down, someone else's is."
The high school of about 1,100 students from Chardon and surrounding areas is rated excellent by the Ohio Department of Education. Chardon is the kind of place where families stay for generations.
"That community will pull together. They are a tight-knit community. It is very, very difficult," Gov. John Kasich said Monday. "I've asked the state to offer any and all resources that can help that community get through this terrible time."
They started to pull together Monday night, when more than 100 Chardon students gathered around the bandstand in Chardon Square. Six bouquets of flowers, one for each of the five victims and one for the shooter's family, were propped against the bandstand, along with votive candles and a hand-lettered sign on white cardboard: "2/27/12 Never forget." Students sang songs, including "Lean on Me," and hugged each other.
Nate said his group of friends had once been friends with Lane, but they had gone separate ways in high school, after Lane went through a Goth phase, usually known for black clothes and nonconformist behavior.
Russell had recently started dating Lane's ex-girlfriend, who is home-schooled, Nate said. Court records show Lane had a traffic case in November and a juvenile delinquency case in 2009.
Nate said Lane regularly took the bus to Lake Academy, an alternative school in Willoughby for students with emotional problems, academic deficiencies, family discord, drug and alcohol abuse and other problems. Lane's family has a long history of problems, records show.
Anna Mullet, the daughter of the pastor of the Chardon Assembly of God on the square, where a vigil was held late Monday, said Lane and his older brother, Adam, sometimes attended a joint youth group operated by six small churches in Chardon.
"He seemed like a nice boy," said Mullet.
The shooting is the second in five years in Northeast Ohio. At SuccessTech Academy in Cleveland in October 2007, 14-year-old Asa Coon wounded two teachers and two students, then killed himself as police swarmed the building.
Even with the drills, Chardon students said they were scared.
"This is real. This is real," said Nicole Weaver, 17, who hid in a classroom near the cafeteria. "I thought someone was going to come in here and shoot us."
With reporting by Brian Albrecht, Jo Ellen Corrigan, Rachel Dissell, Stan Donaldson, Karen Farkas, Pat Galbincea, John Horton, Peter Krouse, Patrick O'Donnell, Michael O'Malley, Tonya Sams and Michael Scott | A second student has been declared brain dead after being shot yesterday at Ohio's Chardon High School, the Wall Street Journal reports. Russell King Jr., 17, had recently started dating the suspected shooter's ex-girlfriend. The first victim, 16-year-old Daniel Parmertor, died yesterday, and three others were wounded when 17-year-old TJ Lane allegedly began shooting in the cafeteria—"directly aiming" at a table where four boys sat, from "two feet away," one witness says. The Cleveland Plain Dealer reports that Lane allegedly walked up to the group and shot at least three of the victims in the backs of their heads. Lane is "very distraught" and "extremely remorseful," his lawyer says. "He's very confused. … This is a very scary circumstance that I don't think he could have possibly even foreseen himself in the middle of." Lane was a student at a nearby school for at-risk children; his male victims were students at a nearby vocational school, and were waiting at Chardon to be bused there. School was canceled today as the community mourns, and many are looking for clues in a dark Facebook post Lane made on December 30, CNN reports. It refers to a man who "was better than the rest, all those ones he detests, within their castles, so vain," and ends with the line, "Die, all of you." Lane's father also has a violent past, police say. |
The federal government has long played an important role in promoting the economic vitality of rural America—from supporting agriculture to building rural infrastructure, such as the electrification of rural America in the 1930s. More recently, since 1983, the federal government has funneled over $15.5 billion to rural areas for such activities as small business assistance, industrial development, and economic planning. In addition, rural areas receive federal funds that are not specifically targeted to economic development but that nevertheless influence rural economic development, such as agricultural payments, infrastructure assistance, and job training. The U.S. Department of Agriculture (USDA) has primary federal responsibility for rural development and provides leadership within the executive branch for coordinating federal programs, services, and actions affecting rural areas. Other federal agencies, such as the Department of Commerce’s Economic Development Administration (EDA) and the Department of Housing and Urban Development, also provide assistance for economic and other types of development to rural communities. Finally, independent federal agencies—such as the Appalachian Regional Commission, Small Business Administration, and the Tennessee Valley Authority—provide assistance in rural areas. To facilitate the delivery of assistance through the programs that these agencies administer, USDA has promoted the development of the National Rural Development Partnership (NRDP), whose objective is to promote collaboration, innovation, and strategic approaches among federal and state agencies involved in rural development. NRDP’s members include the National Rural Development Council and State Rural Development Councils. The national council is composed of senior program managers from over 40 federal agencies and representatives of public interest, community, and private organizations. State councils, which have been established in 39 states, are composed of representatives from federal, state, and local governments, tribal councils, and the private sector. Despite the range of federal assistance, many rural areas continue to face distinct barriers to social and economic development. One of these barriers, remoteness from population centers, means that rural areas may find it difficult to attract many services—such as access to advanced medical care and higher education—that are available in or near population centers and may offer fewer job opportunities than urban areas. Increasingly, telecommunications technologies are seen as a way to overcome the problems posed by distance, according to rural development experts. For example, some communities are using interactive videoconferencing to provide medical consultations. Some colleges and schools are offering classes, and even degree programs, to students on-line in remote locations. Large businesses have found it cost-effective to establish or maintain branch offices in rural areas by using videoconferencing or on-line access to hold meetings and conduct business. In February 1996, the Congress enacted the Telecommunications Act of 1996 (P.L. 104-104, Feb. 8, 1996), the first major overhaul of telecommunications law in over 60 years. The new law, which includes important provisions promoting the use of advanced telecommunications in rural America, seeks to preserve and advance the concept of universal service, defined generally as an evolving level of telecommunications service. 
The preservation and advancement of universal service is to be based on seven principles, including the availability of advanced services in all regions of the nation and access to services in rural and high-cost areas. The act also establishes the Telecommunications Development Fund, which, among other things, is to support universal service and promote the delivery of telecommunications services to underserved rural and urban areas. At least 28 federal programs in 15 agencies provide funding for telecommunications programs. Of the 28 programs, 13 are specifically designed to support telecommunications projects, although not necessarily for rural areas. The remaining 15 programs have more general economic development purposes but can be used for telecommunications efforts. In fiscal year 1995, the 13 telecommunications programs provided about $715.8 million for about 540 projects. Programs ranged from the Rural Utilities Service’s rural telephone loan programs ($585 million combined), which are designed to ensure that rural areas have telephone service comparable with urban areas’, to the Department of Health and Human Services’ (HHS) Health Care Financing Administration’s Research, Demonstration, and Evaluation Program ($0.5 million), which funds, among other things, innovative projects that use telecommunications technologies to improve medical access and care. Table 1 lists the 13 telecommunications-related programs, their funding levels, and the other types of activities they support. The other 15 programs we identified that can be used for telecommunications projects are intended to support a range of community assistance projects. For example, the Department of Housing and Urban Development provides Community Development Block Grants to communities for development purposes, while the EDA provides grants to communities for public works and infrastructure development. In addition, HHS’ Office of Rural Health Policy supports the Rural Health Services Outreach Program, which, among other things, can be used to provide better access to health care through telecommunications technology. (See app. I for more detailed information on all 28 programs.) Officials in five rural communities that have obtained federal funds for telecommunications projects identified three key actions for putting telecommunications projects into place: (1) developing a basic understanding of the potential benefits of telecommunications technologies; (2) engaging in long-term planning to determine the need for, and ensure the technical and financial feasibility of, their project; and (3) building partnerships among the key players who would be needed to support and/or benefit from the project. The representatives of the State Rural Development Councils and representatives of rural associations, such as the National Association of Development Organizations (NADO) and National Association of Regional Councils, confirmed the importance of these actions. In examining options to address a particular problem in their rural communities, officials at all of the projects we visited identified telecommunications as a possible solution. They all agreed, however, that they had to develop a basic understanding of telecommunications technologies before they could evaluate their usefulness in solving their problem.
For example, a consortium of mental health officials in eastern Oregon were seeking ways to reduce the risk, expense, and time involved in transporting individuals who might be committed to mental health facilities to and from various types of court hearings and psychiatric evaluations. Once they learned about various telecommunications technologies and the ways in which the technologies could help them deliver mental health services, these officials identified video teleconferencing as an alternative to repeatedly transporting patients across long distances. They developed the RODEONET project, which has 14 sites in eastern and southern Oregon. Similarly, in Kentucky, the Chief Executive Officer and Chairman of the Greater Paducah Economic Development Council told us that he first became interested in the potential of an information age park to bring economic opportunities to his community when he attended a telecommunications conference in 1989 that was sponsored by a telephone company. An information age park is an office park that, by concentrating state-of-the-art telecommunications—such as videoconferencing, high-speed data transfer, and computer networking—could attract a host of new industries, such as credit card centers and telemarketers. After studying the technology, he and the Greater Paducah Economic Development Council asked for assistance from the local carrier to determine the feasibility of a project. Most representatives of the 15 State Rural Development Councils, NADO, and many other experts on rural development underscored the importance of gaining a basic understanding of telecommunications technologies as a first step in using them. Furthermore, NADO, as well as others, reported that rural communities need reliable, centralized information on the use of telecommunications. Officials for all of the projects we visited developed long-term plans to ensure the technical and financial feasibility of their projects. For example, the director of the Paducah Information Age Park told us that, in 1990, the Greater Paducah Economic Development Council formally requested a carrier’s assistance to identify and quantify the potential economic benefits of developing such a park for use as a resource in recruiting information-intensive, high-technology industries. In March 1991, project officials and their partners conducted a study to determine whether information age business parks might be economically feasible in nonmetropolitan areas, such as Paducah, Kentucky. The study concluded that the proposed site would be a suitable location to develop a “micropolitan” information park. Planning for the project’s funding involved multiple participants. Total funds of $21 million were secured through investments from individuals and private businesses as well as state and federal loans and grants from the state of Kentucky and the Tennessee Valley Authority. In addition, the city government granted certain zoning concessions. According to officials of all of the projects we visited and the 15 State Rural Development Councils we spoke with, partnership building is critical to the successful creation and continued operations of telecommunications projects. Partnership building involves bringing together the key players, such as telephone companies, anticipated users, and government officials at all levels.
For example, the Spokane, Washington, STEP/Star Network, which develops, produces, and broadcasts education programs for credit, primarily at the high school level, relies on its relationships with the school districts, teachers, students, states, private businesses, and government. Further demonstrating the value of a strong partnership, in fiscal year 1994, the project received about $2 million from users and other local sources. In January 1994, the STEP/Star Network joined forces with other education broadcasters to create a new, much larger network. With the new network, the STEP/Star Network and other providers share programming, which greatly increases the course offerings to their subscribers. In commenting on a draft of this report, USDA officials reemphasized the importance of partnership building in developing telecommunications capabilities. They further explained that USDA actively encourages partnership building by those rural communities seeking the Rural Utilities Service’s assistance, but any rural community interested in using telecommunications as a rural development tool should include its local carrier in its partnership. Rural development experts and public officials we interviewed suggested three ways to improve federal programs providing telecommunications assistance: (1) educating rural communities on the potential benefits of telecommunications technologies, (2) building in requirements for considering telecommunications technologies in long-range planning, and (3) making the multiple federal programs easier to use. Although at least 28 federal programs are available to help communities improve their telecommunications capabilities, these programs offer only limited outreach aimed at educating rural communities about the potential of advanced telecommunications for development, according to most of the program and rural development officials we spoke with. Instead, the programs generally offer technical assistance to communities that have already received approval and funding for a particular project. All of the experts we spoke with and the studies we reviewed pointed out that many rural areas do not have a full understanding of the development opportunities that the new technologies offer. For example, the Executive Director of the Missouri Rural Opportunities Council told us that her experience with residents and business people in rural midwestern communities showed that they have had limited exposure to telecommunications technologies and do not understand their potential benefits. She believes that better education, training, and overall exposure to these technologies are needed by rural areas. Most of the federal telecommunications program officials agreed that all rural areas should receive information and training in the uses of telecommunications technologies. They also agreed that providing this information and training was a valid federal role but that they lacked the staff and resources to provide such outreach. The federal programs that provide telecommunications assistance require plans for the projects they fund, but most of the officials again reported a lack of resources to actively encourage all rural areas to consider telecommunications infrastructure as a component in their comprehensive, locally based economic development plans. According to a number of rural development officials we spoke with, many rural areas have not considered telecommunications in their long-term strategic planning. 
Telecommunications technologies should at least be considered in communities’ long-range plans, according to the federal officials and rural development experts we spoke to. In some instances, requiring such consideration is being contemplated. For example, the Director of the EDA’s Planning Division confirmed that although the agency recognizes telecommunications as a high-priority item, the agency’s current guidelines for producing an economic development plan do not require including telecommunications. As we pointed out to the Director, these guidelines were last updated in 1992, and he agreed that it is time for them to be updated again and that telecommunications issues should be included. If such a change were implemented, 315 economic development districts across the nation, each encompassing multiple counties, would be encouraged to consider telecommunications technologies in their long-term strategic planning. The National Association of Regional Councils informed us that communities have economic development plans that do not include consideration of telecommunications technologies because the plans were developed before these technologies were fully recognized as a potentially important tool for rural areas. The Federal Agriculture Improvement and Reform Act of 1996 may also encourage rural communities to consider telecommunications technologies, depending on how the act is implemented. The act requires the Secretary of Agriculture to direct all of the Directors of Rural Economic and Community Development State Offices to prepare a 5-year strategic plan for their states. They are to work closely with state, local, private, and public persons, State Rural Development Councils, Indian tribes, and community-based organizations in preparing the plan. Once the plan is established, financial assistance for rural development is to be provided only for orderly community development that is consistent with the state’s strategic plan. The Deputy Under Secretary for Rural Development told us that USDA will encourage all rural areas to consider including telecommunications projects in their long-term strategic plan, which will be included in the state plan. He also stressed that others involved in the plan development process, including the State Rural Development Councils, are very strong advocates of using telecommunications technologies as a rural development tool and will encourage rural areas to consider these technologies in their plan.
For example, the Executive Director of the Colorado Rural Development Council told us that rural communities would benefit if the plethora of federal telecommunications programs could be coordinated because currently a full-time grant writer must spend much of his time tracking all the programs. She also said that given the extremely limited capacity of most small rural communities to access this type of technical assistance, most are effectively eliminated from applying for any of the grant programs. However, these are the same communities that would benefit the most from such assistance. Similarly, the National Association of Regional Councils told us that the federal government needs to pull these programs together to ensure consistent, readily understandable, and accessible assistance. The Federal Agriculture Improvement and Reform Act of 1996 emphasizes the need to better coordinate federal programs, requiring the Secretary of Agriculture to provide leadership within the executive branch and establish an interagency working group to be chaired by the Secretary. The working group is to establish policy for, coordinate with, make recommendations with respect to, and evaluate the performance of all federal rural development efforts. The conference report for the act noted that the NRDP should continue its role in monitoring and reporting on policies and programs that address the needs of rural America. The State Rural Development Councils, which are members of the NRDP, are to continue to act as the conduit of information to the partnership. We provided USDA a draft copy of this report for its review and comment because USDA is responsible for the federal involvement in rural development. For all other agencies and organizations that provided input to this report, we provided relevant sections of the draft report that either dealt with information they had provided to us or that we synthesized from data obtained both from them and other respondents. We met with USDA officials to obtain their comments, both on the programs discussed in this report and on policies relating to rural development. These officials included the Deputy Administrator of the Rural Utilities Service and representatives of the Office of the Under Secretary for Rural Development. The officials agreed with the report and provided several additional clarifying comments, which we have incorporated into this report as appropriate. In commenting on the draft report, the USDA officials also said that it was important to recognize the recent changes to rural telecommunications programs made by the Federal Agriculture Improvement and Reform Act. Specifically, they noted that the Act authorized $100 million for loans under the Distance Learning and Telemedicine Loan Program. They said this will result in a real cost to the government of $1 million, representing interest-rate subsidies, some general and administrative expenses, and allowance for bad debt. The officials also stressed that many rural areas lack the basic infrastructure needed for advanced telecommunications and that the Rural Utilities Service will continue its mission of meeting the needs of rural America. Officials from the other agencies and organizations that responded to our request for comments agreed with the facts presented in the report and, in some cases, provided clarifying information that we considered and incorporated as appropriate in preparing our final report. 
In developing information for this report, we identified the federal agencies and programs offering telecommunications assistance to rural areas by searching the June 1995 Catalog of Federal Domestic Assistance. Our search covered all programs that offer grants, loans, or technical assistance to rural areas for planning, constructing, expanding, demonstrating, and/or operating advanced telecommunications projects for rural development. We reviewed documents describing these programs and met with program officials at their headquarters offices in Washington, D.C., and in Knoxville, Tennessee, to learn about the programs’ operations. We obtained fiscal year 1995 funding amounts from agency officials. We did not independently verify this information.

We judgmentally selected for site visits five telecommunications projects that received federal funds. We met with project officials and reviewed documents to learn how these projects were developed and are currently operating, and what lessons officials had learned from them. For each project selected, we developed a description and identified the source of funds. These projects are the Ringgold, Georgia, Telephone Company; the Mayfield, Kentucky, Rural Telecommunications Resource Center; the Eastern Oregon RODEONET Project; the Paducah, Kentucky, Information Age Park; and the Spokane, Washington, STEP/Star Network. These projects are discussed in greater detail in appendix II.

To gain further insight into the lessons learned by other rural areas using the federal programs and to identify any changes needed, we reviewed relevant studies by the Aspen Institute, the National Governors Association, the National Association of Development Organizations (NADO), the Organization for the Protection and Advancement of Small Telephone Companies, the National Association of Regional Councils, the American Academy of Political and Social Science, USDA’s Economic Research Service, the Rural Policy Research Institute, and the Office of Technology Assessment. To obtain the state perspective on telecommunications technologies in rural communities, we spoke with a group of officials from 15 State Rural Development Councils through a conference call arranged at our request by the National Rural Development Partnership Office. These officials were in Alaska, Colorado, Florida, Idaho, Iowa, Massachusetts, Minnesota, Missouri, Montana, Nebraska, Ohio, Texas, Washington, Wisconsin, and Wyoming. To obtain a grassroots perspective, we asked NADO, the National Association of Regional Councils, and the Organization for the Protection and Advancement of Small Telephone Companies to solicit the views of their members on the same issues discussed with state officials. (See app. III for a brief description of these organizations.)

We conducted our review from August 1995 through May 1996 in accordance with generally accepted government auditing standards.

We are sending copies of this report to the House Committee on Agriculture; other appropriate congressional committees; the Secretary of Agriculture; and federal and state agencies with responsibility for telecommunications technologies in rural areas. If you or your staff have any questions about this report, I can be reached at (202) 512-5138. Major contributors to this report are listed in appendix IV.

This appendix presents detailed information on the 28 federal programs we identified that are either designed to support telecommunications projects or that can be used for that purpose.
This information was principally obtained from the June 1995 Catalog of Federal Domestic Assistance. We confirmed the budget information with appropriate program officials but did not independently verify it.

Thirteen of the programs we identified are designed to provide funding for telecommunications projects. The Department of Agriculture has four such programs.

The Agricultural Telecommunications Program, supported by the Cooperative State Research, Education, and Extension Service, awards grants to eligible institutions to assist in the development and utilization of an agricultural communications network to facilitate and strengthen agricultural extension, resident education, and research, and the domestic and international marketing of U.S. commodities and products, through a partnership between eligible institutions and the Department. The network employs satellite and other telecommunications technologies to disseminate and share academic instruction, cooperative extension programming, agricultural research, and marketing information.
Types of assistance. Project grants.
Funding levels. This program was initially funded in fiscal year 1992. Funding remained constant at $1.22 million through fiscal year 1995.
Eligibility criteria. Applicants must demonstrate that they will (1) make optimal use of available resources for agricultural extension, resident education, and research by sharing resources between participating institutions; (2) improve the competitive position of U.S. agriculture in international markets by disseminating information to producers, processors, and researchers; (3) train students for careers in agriculture and food industries; (4) facilitate interaction among leading agricultural scientists; (5) enhance the ability of U.S. agriculture to respond to environmental and food safety concerns; and (6) identify new uses for farm commodities and increase the demand for U.S. agricultural products in both domestic and foreign markets. Proposals are invited from accredited institutions of higher education.
Intended beneficiaries. Institutions of higher education, state and local governments, private organizations or corporations, and individuals.
Examples of funded projects. One project is to develop and deliver a model program for staff and faculty training in agricultural distance learning at 13 land grant universities. In another project, six land grant universities will develop a network training concept to improve the dissemination and sharing of academic instruction, extension programming, and research activities.

Rural Telephone Loans and Loan Guarantees, in the Rural Utilities Service (RUS), has as its objective ensuring that people in eligible rural areas have access to telecommunications services comparable in reliability and quality to those available in the rest of the nation.
Types of assistance. Direct loans.
Funding levels. Cost-of-money loans totaled $186.4 million in fiscal year 1991, rose to $311.03 million in fiscal year 1993, and fell to $242.35 million in fiscal year 1995. Total loans guaranteed remained fairly constant at $120 million from fiscal year 1991 to fiscal year 1995. Hardship loans became a distinct funding category in fiscal year 1994; funding was $70.34 million in fiscal year 1994 and $69.5 million in fiscal year 1995. While the funding levels varied from year to year, they reflect the amount of funding (budget authority) provided by the Congress, not the number of applications received; in each year, more applications were received than could be funded.
Eligibility criteria. Telephone companies or cooperatives, nonprofit associations, limited dividend associations, mutual associations, or public bodies, including those located in the U.S. territories, are eligible for this program.
Intended beneficiaries. Residents of rural areas and others who may also receive telephone service as a result of service provided to a rural area.
Examples of funded projects. Since 1992, loans have been made to RUS borrowers to finance over $368 million in projects for fiber optic cable, over $350 million for digital switching equipment, $70 million for advanced telecommunications features, and $14 million for distance learning.

Rural Telephone Bank Loans (Rural Telephone Bank), under RUS, is designed to provide supplemental financing to extend and improve telecommunications services in rural areas.
Types of assistance. Direct loans.
Funding levels. The program made loans totaling $177.0 million in fiscal year 1991, $199.85 million in fiscal year 1994, and $175 million in fiscal year 1995.
Eligibility criteria. Eligible recipients are borrowers, including those located in U.S. territories or possessions, that have received a loan or loan commitment under section 201 of the Rural Electrification Act or that have been certified by the Administrator as qualified to receive such a loan.
Intended beneficiaries. Residents of rural areas and others who receive telecommunications service resulting from service provided to rural areas.
Examples of funded projects. Since 1992, loans have been made to RUS telephone borrowers to finance over $368 million for fiber optic cable, over $350 million for digital switching, $70 million for advanced telecommunications features, and $14 million for distance learning equipment.

Distance Learning and Medical Link Grants, provided by RUS, are intended to encourage and improve the use of telecommunications, computer networks, and related advanced technologies to provide educational and medical benefits to people living in rural areas.
Types of assistance. Project grants.
Funding levels. Funding for fiscal years 1993 and 1994, the first 2 years that grants were awarded, was $10.0 million each year. The program’s funding was reduced to $7.50 million in fiscal year 1995.
Eligibility criteria. Eligible recipients include organizations such as schools, libraries, hospitals, medical centers, or similar organizations that will use a telecommunications, computer network, or related advanced technology system to provide educational and/or medical benefits to rural residents. The applicant must not be delinquent on any federal debt.
Intended beneficiaries. Rural communities will benefit, particularly in the areas of health care and education.
Examples of funded projects. The program has supported a network to link rural hospitals and health care clinics with urban tertiary care centers to provide rural residents with continuous access to trauma and emergency care. It has also sponsored a system to provide 37,000 rural residents, including students and patients, with access to the Iowa Communications Network for educational and medical services.

The Department of Education has three programs that directly support telecommunications projects.

The Star Schools Program encourages improved instruction in mathematics, science, and foreign languages, as well as other subjects, such as literacy skills and vocational education. Grants are made to eligible telecommunications partnerships to enable them to (1) develop, construct, acquire, maintain, and operate telecommunications audio and video facilities and equipment; (2) develop and acquire educational and instructional programming; and (3) obtain technical assistance for the use of such facilities and instructional programming.
Types of assistance. Project grants.
Funding levels. The program’s funding has increased from $14.4 million in fiscal year 1991 to $25 million in fiscal year 1995.
Eligibility criteria. Eligible telecommunications partnerships must be organized on a statewide or multistate basis. Two types of partnerships are eligible. One type is a public agency or corporation established to develop and operate telecommunications networks to enhance educational opportunities provided by educational institutions, teacher training centers, and other entities; the agency or corporation must represent the interests of elementary and secondary schools eligible to participate under title I of the Elementary and Secondary Education Act of 1965, as amended. The second type is a partnership of three or more agencies, such as a state educational agency (SEA), a local educational agency (LEA) that serves certain types of students, an institution of higher education or a state higher education agency, a teacher training center, an adult or family education program, a public or private elementary or secondary school, a telecommunications entity, or a public broadcasting entity. At least one of the partners must be an eligible LEA or SEA.
Intended beneficiaries. The program serves underserved populations, including those who are disadvantaged or illiterate, as well as those who have disabilities or limited proficiency in English.
Examples of funded projects. In fiscal years 1994 and 1995, Star Schools funded 13 projects totaling approximately $50.9 million. For example, the College of Eastern Utah received grants totaling $4.4 million to develop state-of-the-art studios and linked classrooms to improve the delivery of education services to the Four Corners area of the Southwest; the project is aimed at rural and Native American populations. The Pacific Mountain Network received $741,000 to develop eight 30-minute video modules focusing on distance learning and education reform, to screen distance learning resources, and to provide background information on technology’s role in education.

Challenge Grants for Technology in Education support consortia that are using new applications of technology to strengthen school reform efforts, improve the content of the curriculum, increase student achievement, and provide sustained professional development for teachers and others who are employing new applications of technology to improve education.
Types of assistance. Project grants (discretionary).
Funding levels. The program was originally appropriated $27 million for fiscal year 1995, its first year of operation; however, the Congress later reduced the appropriation to $9.5 million for the year. Challenge grants are for 5-year projects, and each project will receive an initial 15-month budget from combined fiscal year 1995-96 appropriations.
Eligibility criteria. Consortia must include at least one LEA with a high percentage or number of children living below the poverty line and may include other LEAs, SEAs, institutions of higher education, businesses, academic content experts, software designers, museums, libraries, or other appropriate entities.
Intended beneficiaries. Elementary and secondary students, teachers, administrators, and school library media personnel benefit from the program.
Examples of funded projects. The program funded 19 projects in its first year. For example, it provided funds to Westside Community Schools and the Nebraska Consortium for Discipline-Based Art Education to use telecommunications and digital technology to link urban and rural schools to the art collections of five major museums across the country. The program also funded the state of Utah Resource Web to use telecommunications to provide quality educational opportunities in low-income, rural, and culturally disenfranchised communities.

The Telecommunications Demonstration Project for Mathematics carries out a national telecommunications-based demonstration project to improve the teaching of mathematics.
Types of assistance. Project grants.
Funding levels. In fiscal year 1995, the first year of the program, $1.1 million was appropriated.
Eligibility criteria. SEAs, LEAs, nonprofit telecommunications entities, or partnerships of these entities may apply.
Intended beneficiaries. Those benefiting from the program include elementary and secondary school mathematics teachers and the schools of LEAs having a high percentage of children who are counted for the purposes of part A, title I, of the Elementary and Secondary Education Act of 1965, as amended.
Examples of funded projects. One grant was awarded in fiscal year 1995, to the Public Broadcasting Service for an elementary component of PBS Mathline. The project will provide video modules and on-line resources for teachers of mathematics in more than 30 states.

The Department of Health and Human Services has two programs that support telecommunications projects.

The Health Care Financing Administration’s (HCFA) Research, Demonstration, and Evaluation projects are designed to support analyses, experiments, demonstrations, and pilot projects aimed at resolving major health care financing issues or developing innovative methods for administering Medicare and Medicaid. In 1994, HCFA identified a number of areas in which specific information or experience was needed to improve programs’ effectiveness or guide decisions. These priority areas for discretionary cooperative agreements and/or grants, which were to guide HCFA’s project selection in fiscal years 1994, 1995, and 1996, included (1) access and quality of care; (2) managed care systems; (3) provider payments; (4) health care systems reform and financing; (5) program evaluation and analyses; (6) service delivery systems; and (7) subacute and long-term care. However, substantial cutbacks in discretionary funding for HCFA in fiscal years 1995 and 1996 resulted in only a few new awards in these areas in 1995 and none in 1996.
Types of assistance. Project grants or cooperative agreements.
Eligibility criteria. Grants or cooperative agreements may be made to private or public agencies or organizations, including state agencies that administer the Medicaid program. Private for-profit organizations may apply, but awards cannot be made directly to individuals.
Intended beneficiaries. Contributing retirees and specially entitled beneficiaries, including people with disabilities or end-stage renal disease and families receiving Medicaid benefits.
Funding levels. HCFA funded five telemedicine demonstration projects totaling $858,000 in fiscal year 1993, $4 million in fiscal year 1994, and $524,000 in fiscal year 1995. New telemedicine projects have not been funded since fiscal year 1994; the fiscal year 1995 funding was to begin a comprehensive evaluation of HCFA’s previously awarded telemedicine demonstration projects.
Examples of funded projects. The program funded five telemedicine demonstration projects in 1993 and 1994. For example, in fiscal year 1993, the program provided about $700,000 to the Iowa Methodist Health System for its telemedicine services in cardiology and pathology consultations. In fiscal year 1994, the program provided about $272,000 to East Carolina University to test a system of Medicare payments for telemedicine services involving two rural hospitals and a medical school affiliate.

Rural Telemedicine Grants support projects to demonstrate and collect information on the feasibility, costs, appropriateness, and acceptability of telemedicine for improving access to health services for rural residents and reducing the isolation of rural practitioners.
Types of assistance. Project grants.
Eligibility criteria. The grant recipient can be a public (nonfederal) or private nonprofit or for-profit entity, located in either a rural or urban area. The entity must be a health care provider that is a member of a telemedicine network or a consortium of providers that are members of a telemedicine network.
Intended beneficiaries. Rural health care providers, patients, and rural communities benefit from this grant program.
Funding levels. The program received $4.6 million in fiscal year 1994, its first year, and $5 million in fiscal year 1995.
Examples of funded projects. For fiscal year 1994, the program made 11 new awards; no new grants were made in fiscal year 1995 because of budget constraints. One project is the High Plains Rural Health Network in Fort Morgan, Colorado, a consortium of hospitals, clinics, and physician practices in Colorado, Nebraska, and Kansas. Its telemedicine network will have two hub facilities serving two rural hospitals, two community health centers, and a long-term care facility, with a videoconferencing system and an electronic bulletin board for ongoing communications among all network practitioners. Another project is the University of Kentucky Medical Center’s plan to provide specialty consultations to Berea Hospital (with 42 acute-care beds) and several clinics in rural Kentucky. The university hospital also will be linked with the Saint Claire Medical Center in Morehead, Kentucky, which will serve as a second hub site.

The Department of Commerce sponsors two telecommunications programs under its National Telecommunications and Information Administration.

Public Telecommunications Facilities Planning and Construction Grants can be used to assist in the planning, acquisition, installation, and modernization of public telecommunications facilities, through planning grants and matching construction grants, in order to (1) extend the delivery of public telecommunications services to as many citizens of the United States and its territories as possible by the most efficient and economical means, including the use of broadcast and nonbroadcast technologies; (2) increase the public telecommunications services and facilities available to, operated by, and owned by minorities and women; and (3) strengthen the capability of existing public television and radio stations to provide public telecommunications services to the public.
Types of assistance. Project grants.
Funding levels. The program’s funding increased from $19.7 million in fiscal year 1991 to $27.7 million in fiscal year 1995.
Eligibility criteria. Several types of entities are eligible for these grants: (1) public or noncommercial educational broadcast stations; (2) noncommercial telecommunications entities; (3) systems of public telecommunications entities; (4) public or private nonprofit foundations, corporations, institutions, or associations organized primarily for educational or cultural purposes; and (5) state or local governments or agencies, including U.S. territories, federally recognized Indian tribal governments, or political or special purpose subdivisions of a state.
Intended beneficiaries. The general public and students benefit from the program.
Examples of funded projects. One project funded under this program is the construction of a new noncommercial radio station in Ada, Oklahoma, to provide the first public radio signal to 40,000 residents in southeastern Oklahoma. Another is the replacement of the transmission system, the remote control, and associated dissemination equipment for a public television station in Austin, Texas.

The Telecommunications and Information Infrastructure Assistance Program promotes the widespread use of advanced telecommunications and information technologies in the public and nonprofit sectors.
Types of assistance. Project grants.
Funding levels. This program was initially funded in 1994. It funded projects totaling $24.4 million in fiscal year 1994 and $36.0 million in fiscal year 1995.
Eligibility criteria. State and local governments, nonprofit health care providers, school districts, libraries, universities and colleges, public safety services, and other nonprofit entities.
Intended beneficiaries. The general public benefits from the program.
Examples of funded projects. One project involves a rural educational system in Washington State, serving a predominantly Native American population, that will build a systemwide voice, data, and video instructional network connected to statewide educational and national information services. Another project is the Kansas State Corporation Commission’s effort to develop a comprehensive, statewide telecommunications infrastructure plan that addresses the needs of business, health care, education, government, and the public.

The National Science Foundation (NSF) has two programs that support telecommunications projects.

Connections to the Internet is intended to encourage U.S. research and educational institutions to connect to the Internet. In March 1996, this program extended the Connections to the NSFNET program, which had been in place since 1990.
Types of assistance. Project grants.
Funding levels. The program’s funding was $1.6 million in fiscal year 1991, rose to $8.3 million in fiscal year 1993, and fell to $5.8 million in fiscal year 1995.
Eligibility criteria. Proposals may be submitted by any U.S. research or educational institution, or consortium of such institutions, as appropriate for the connections categories: (1) connections utilizing innovative technologies for Internet access; (2) connections for institutions of higher education; and (3) connections for research and education institutions and facilities that have meritorious applications requiring high network bandwidth or other novel network attributes not readily available from commodity network service providers.
Intended beneficiaries. Students, faculty, and researchers at the connected schools.
Examples of funded projects. One example is an NSF-funded connection of five community colleges in eastern New Mexico.

Networking Infrastructure for Education has as its goal building synergy among technology and education researchers, developers, and implementers so that they can explore networking costs and benefits, test self-sustaining strategies, and develop a flexible educational networking infrastructure that will be instrumental in disseminating, integrating, and applying technologies to speed the pace of educational innovation and reform.
Types of assistance. Project grants or cooperative agreements.
Funding levels. NSF allocated $8.7 million to the program in fiscal year 1994, its first year, and $11.7 million in fiscal year 1995.
Eligibility criteria. Individual institutions or groups of institutions within the United States, including alliances of 2- and 4-year degree-granting academic institutions, school districts, professional societies, state agencies, public libraries, museums, and others concerned with educational reform. Business and industry participation, with cost-sharing consistent with their role, is required for demonstration, model site, testbed, and infrastructure projects and is encouraged for policy studies and research and development projects.
Intended beneficiaries. Elementary, secondary, and undergraduate science, mathematics, and engineering teachers and faculty; secondary and undergraduate students; public and private colleges (2-year and 4-year) and universities; state and local educational agencies; nonprofit and private organizations; professional societies; science academies and centers; science museums and zoological parks; research laboratories; and other institutions with an educational mission.
Examples of funded projects. One project funded under this program is a Montana statewide coalition featuring partners from all public and private stakeholders, including the Statewide Systemic Initiative, to plan for the development of a lasting infrastructure that will support a variety of educational telecommunications services, paying particular attention to the special conditions in this largely rural state. Another project is a regional data network to connect schools, libraries, and community centers to individual households, the network itself, and the Internet.

We also identified 15 multipurpose programs that do not have telecommunications projects as a specific objective but can fund such projects. While these programs have similar objectives—such as economic development, education, and health outreach—they do not specifically cite telecommunications as the means to accomplish their objectives.
Table I.2 lists these programs. Although these programs may fund many different kinds of projects, some have emphasized telecommunications technologies. For example, the Appalachian Regional Commission (ARC) views telecommunications as crucial to Appalachia’s economic development. Telecommunications is one of three initiatives ARC has targeted for the region, along with civic development and preparing Appalachia for the global economy. The ARC Co-Chairman has pledged to ensure that “the ’information superhighway’ not bypass Appalachia as the national highway system did some four decades ago.”

The Department of Agriculture has one program, in its Rural Business and Cooperative Development Service, that, while not specifically designed for telecommunications technologies, can be used for them.

Rural Economic Development Loans and Grants are designed to promote rural economic development and help create jobs, including funding for project feasibility studies, startup costs, incubator projects, and other reasonable expenses for the purpose of fostering rural development.
Types of assistance. Direct loans; project grants.
Funding levels. The program received $13.5 million in fiscal year 1994.
Eligibility criteria. Electric and telephone utilities that have current Rural Electrification Administration or Rural Telephone Bank loans or guarantees outstanding and are not delinquent on any federal debt or in bankruptcy proceedings may apply.
Intended beneficiaries. Rural communities and the general public benefit from this program.
Examples of funded projects. Program officials say the program has not funded telecommunications projects.

ARC currently offers two programs that can be used for telecommunications projects. ARC receives only one appropriation each year for its assistance activities, and all projects are funded from this “Area Development” allocation. Area Development funding has ranged from $39.5 million in fiscal year 1991 to $102.0 million in fiscal year 1995; only a small amount of this funding is used for telecommunications under the two programs discussed below.

Special ARC Initiatives have been funded during fiscal years 1995 and 1996 and are planned to be funded again in fiscal year 1997. The three special initiatives provide assistance for telecommunications, internationalization of the Appalachian region’s economy, and local leadership and civic development. (Approximately $5 million to $6 million was set aside from the Area Development allocation for these three initiatives in fiscal years 1995 and 1996, and a similar amount is anticipated for fiscal year 1997.)
Types of assistance. Project grants.
Funding levels. See the description of Area Development funding, above.
Eligibility criteria. Multicounty organizations, state universities, community colleges, high schools, nonprofit organizations, and school boards.
Intended beneficiaries. Residents of the Appalachian region.
Examples of funded projects. Assisted with the strategic plan for the Multiregional Telecommunications Improvement Project in New York and with the Western Maryland WMDNet Equipment Project, which connected universities, junior colleges, libraries, county governments, and health facilities.

Appalachian Area Development provides assistance for a variety of needs, including telecommunications projects. (See Special ARC Initiatives, described above.) On average, across the region, about $2.5 million to $3 million is provided annually for telecommunications-related projects from the overall Area Development funding allocation.
Types of assistance. Project grants.
Funding levels. See the description of Area Development funding, above.
Eligibility criteria. Multicounty organizations, state universities, community colleges, high schools, nonprofit organizations, and school boards.
Intended beneficiaries. Residents of the Appalachian region.
Examples of funded projects. Assisted with the Elmore County (Alabama) Telecommunications Network Project, which connected high schools, junior colleges, businesses, and government offices, and with the Greenville (South Carolina) Hospital Home Health Project.

The Department of Commerce has four programs that can be used to support the development of telecommunications projects.

Economic Development Grants for Public Works and Infrastructure Development, administered by the Economic Development Administration, are used to promote long-term economic development and assist in the construction of public works and development facilities needed to initiate and encourage the creation or retention of permanent jobs in the private sector in areas experiencing severe economic distress.
Types of assistance. Project grants.
Funding levels. The program’s funding has ranged from $140.8 million in fiscal year 1991 to $195.0 million in fiscal year 1995.
Eligibility criteria. States, cities, counties, other political subdivisions, Indian tribes, the Federated States of Micronesia, the Republic of the Marshall Islands, U.S. commonwealths and territories, and private or public nonprofit organizations or associations representing a redevelopment area or a designated Economic Development Center are eligible to receive grants. Corporations and associations organized for profit are not eligible.
Intended beneficiaries. Local economies, unemployed and underemployed persons, and/or members of low-income families benefit from the program.
Examples of funded projects. These grants have supported infrastructure necessary for economic development (e.g., water/sewer facilities), the construction of incubator facilities, and port development and expansion. With respect to telecommunications, two rural community colleges in North Carolina received grant assistance to install two-way interactive telecommunications equipment that is used to provide training for underemployed and unemployed youths and adults.

Economic Development Technical Assistance, administered by the Economic Development Administration, provides funding to promote economic development and alleviate underemployment and unemployment in distressed areas. The program provides funds to enlist the resources of designated university centers in promoting economic development, support demonstration projects, disseminate information and studies of economic development issues of national significance, and finance feasibility studies and other projects leading to local economic development.
Types of assistance. Project grants.
Funding levels. The program’s funding increased from $6.6 million in fiscal year 1991 to $10.9 million in fiscal year 1995.
Eligibility criteria. Private or public nonprofit organizations, educational institutions, federally recognized Indian tribal governments, municipal, county, or state governments, and U.S. territories or entities thereof.
Intended beneficiaries. Projects are intended to assist in solving economic development problems, respond to economic development opportunities, and expand organizational capacity for economic development.
Examples of funded projects. Management and technical assistance services to communities, counties, districts, and nonprofit development groups; technology transfer assistance to firms; and studies to determine the economic feasibility of various local development projects. One recent telecommunications-related project provided grant assistance to rural communities in Colorado to improve the competitive stance of existing, emerging, and prospective businesses through Internet-based services.

The Planning Program for States and Urban Areas, administered by the Economic Development Administration, is designed to assist economically distressed states, substate planning regions, cities, and urban counties to undertake significant new economic development planning, policymaking, and implementation efforts. (Rural areas are included in this program.)
Types of assistance. Project grants.
Funding levels. The program’s funding has remained fairly stable over the past 5 years, ranging from $4.7 million in fiscal year 1991 to $4.5 million in fiscal year 1995.
Eligibility criteria. Eligible applicants include states, substate planning units, cities, urban counties within metropolitan statistical areas, and combinations of these entities.
Intended beneficiaries. Residents of eligible areas.
Examples of funded projects. The state of Alabama received a grant in 1994 that drew on computer technology to assist high school students in rural areas, as well as the unemployed and underemployed, in getting job training that would enhance their ability to obtain employment. The New River Valley Planning Development Council, in Radford, Virginia, received a grant in 1994 that uses telecommunications technology to link Southwest Virginia to areas that are more industrially developed.

The Advanced Technology Program, administered by the National Institute of Standards and Technology, is designed to promote “commercializing new scientific discoveries and technologies rapidly” and “refining manufacturing practices” by supporting high-risk civilian technologies that are in the nation’s economic interest.
Types of assistance. Project grants (cooperative agreements).
Funding levels. The program’s funding increased steadily between fiscal years 1991 and 1995, from $35.9 million to $341.0 million.
Eligibility criteria. Recipients must be U.S. businesses or joint research and development ventures. Foreign-owned businesses are eligible if they meet the requirements of the American Technology Preeminence Act of 1991 (P.L. 102-245, Feb. 2, 1992).
Intended beneficiaries. U.S. businesses and U.S. joint research and development ventures, as well as foreign-owned businesses that meet the requirements of P.L. 102-245.
Examples of funded projects. Printed wiring board manufacturing technology, flat panel display manufacturing, magnetoresistive random access memories, and ultra-high-density magnetic recording heads.

The Department of Education has two programs that can be used to support telecommunications projects.

The Library Research and Demonstrations Program has as its objective the awarding of grants and contracts for research and/or demonstration projects in areas of specialized services intended to improve library and information science practices. Among other things, the program may fund the use of new technologies to enhance library services.
Types of assistance. Project grants.
Funding levels. Funds increased from $325,000 in fiscal year 1991 to $6.5 million in fiscal year 1995.
Eligibility criteria. Institutions of higher learning or public or private agencies, institutions, or organizations are eligible.
Intended beneficiaries. Institutions of higher learning or public or private agencies, institutions, or organizations are the beneficiaries.
Examples of funded projects. Since fiscal year 1993, funds have been used to establish statewide multitype library networks. For example, in fiscal year 1993, Louisiana State University and Agricultural and Mechanical College was awarded a $2.5 million grant to expand its electronic library network to connect libraries around the state. Other grant recipients for the same purpose are the Colorado Department of Education’s State Library and Adult Education Office (fiscal year 1994, $2.5 million); the State Library of Iowa (fiscal year 1995, $2.5 million); and the West Virginia Library Commission, Department of Education and the Arts (fiscal year 1995, $2.5 million). Each project is making its databases available to all types of libraries throughout the state. In Iowa, the State University Extension Service is also participating in the project to coordinate information resources.

The Eisenhower Professional Development Program is designed to give teachers, administrators, and other school personnel access to high-quality, sustained, and intensive professional development activities in the core academic subjects, aligned to challenging state content and student performance standards.
Types of assistance. Formula grants.
Funding levels. $251.3 million in fiscal year 1995.
Eligibility criteria. Funds are distributed to the states on a formula basis. Of the total state allocation, the SEA receives 84 percent and the state agency for higher education, 16 percent. The SEA distributes, by formula, at least 90 percent of the funds it receives to LEAs within the state. The state agency for higher education distributes at least 95 percent of its allocation in the form of competitive subgrants to institutions of higher education and nonprofit organizations.
Intended beneficiaries. Teachers, administrators, and other school personnel are direct beneficiaries; as a result of these populations’ participation in professional development, students are indirect beneficiaries.
Examples of funded projects. One project supported through this program is the “Geometry Enhancement Models Institute: Meeting the Challenge of Mathematics Education,” funded through the University of Memphis and to be conducted during the summer of 1996. The Institute, planned for 20 in-service middle school teachers, will acquaint participants with the van Hiele theory of geometry through interactive, hands-on participation drawing on a number of instructional methods.

The Department of Health and Human Services has one program that can be used for telecommunications projects.

Rural Health Services Outreach is intended to provide health services to rural populations that are not receiving them and to help rural communities and health care providers coordinate their services and enhance linkages, integration, and cooperation among rural providers of health services.
Types of assistance. Project grants.
Funding levels. Funds for telemedicine projects have increased from $220,000 in fiscal year 1991 to $1.7 million in fiscal year 1995.
Eligibility criteria. Nonprofit public or private entities located in nonmetropolitan statistical areas or in a rural area within a larger metropolitan statistical area may apply.
Intended beneficiaries. Medically underserved populations in rural areas will receive expanded services.
Examples of funded projects. The program funded eight new telemedicine projects in fiscal years 1994 and 1995. For example, the program provided assistance to Douglas County Hospital in Alexandria, Minnesota, to develop an advanced telemedicine network to serve eight rural communities in central Minnesota; the network’s goal is to reduce the isolation of rural health care providers and to enhance access to specialized medical services. For another project, the program provided a total of $306,000 to Big Bend Regional Medical Center of Alpine, Texas, over 3 years to use telemedicine to offer primary care and health education services to the underserved population of Presidio, Texas. A telecommunications system is being set up in the town to link it with Big Bend Regional Medical Center in Alpine and the Texas Tech Health Sciences Center.

The Department of Housing and Urban Development administers one program that can be used to support telecommunications projects.

The Community Development Block Grants/State’s Program has as its primary objective the development of viable communities by providing decent housing and a suitable living environment and by expanding economic opportunities, principally for persons of low and moderate income.
Types of assistance. Formula grants.
Funding levels. Total funding for the program was $1.0 billion in fiscal year 1992, $1.2 billion in fiscal year 1993, $1.3 billion in fiscal year 1994, and $1.3 billion in fiscal year 1995.
Eligibility criteria. State governments receive funding according to a formula; funds are then provided through the state to eligible units of general local government. Eligible units are generally cities with populations of 50,000 or less that are not designated central cities of metropolitan statistical areas, and counties with populations of 200,000 or less. Forty-eight states and Puerto Rico participate in the state Community Development Block Grant program.
Intended beneficiaries. Low- to moderate-income persons.
Examples of funded projects. No telecommunications-related projects have yet been completed under this program, according to program officials. One project—a telemedicine project linking 45 rural clinics to larger hospitals in Oklahoma—is being pursued at this time.

At the Small Business Administration, we identified one program that can be used for telecommunications projects.

Small Business Loans (7(a) Loans) are guaranteed loans to small businesses that are unable to obtain financing in the private credit marketplace but can demonstrate the ability to repay the loans granted. The program can also provide guaranteed loan assistance to low-income business owners, to businesses located in areas of high unemployment, or to specific types of businesses, such as those owned by handicapped individuals.
Types of assistance. Guaranteed/insured loans.
Funding levels. In fiscal year 1992, loans totaling $6.0 billion were guaranteed; in fiscal year 1995, guaranteed loans totaled $8.3 billion.
Eligibility criteria. Small businesses that are independently owned and operated and not dominant in their field are eligible; businesses must also meet specific size criteria, which depend on the industry.
Intended beneficiaries. Small businesses, including those owned by low-income or handicapped individuals or located in high-unemployment areas, benefit from the program.
Examples of funded projects. With respect to telecommunications-related loans, the Small Business Administration has assisted small businesses that provide telecommunications-related services, such as paging and cellular telephone services.

The Tennessee Valley Authority (TVA) has three programs that can support telecommunications.

The Economic Development Loan Fund was established to stimulate industrial development and leverage capital investment in TVA’s power service area. Specifically, the fund is used to promote economic expansion, encourage job creation, and foster the increased sale of electricity by TVA and its power distributors.
Types of assistance. Direct loans.
Funding levels. This revolving loan fund was initially funded in fiscal year 1995 with $20 million from power revenues.
Eligibility criteria. Projects are sponsored by a local government, power distributor, or established economic development organization. Loans are made to TVA power customers, communities, or nonprofit economic development corporations to support approved projects.
Intended beneficiaries. The ultimate beneficiaries are the people of the Tennessee Valley region.
Examples of funded projects. As of November 1995, this program had not funded any telecommunications projects.

The Special Opportunity Counties Revolving Loan Fund is designed to stimulate economic development and private sector job growth in the most economically disadvantaged counties in the Tennessee Valley.
Types of assistance. Direct loans.
Funding levels. This revolving loan fund was funded with a $14 million allocation from TVA’s appropriations for fiscal years 1981 through 1987.
Eligibility criteria. Per capita personal income and the percentage of persons below the poverty level were the two variables used to determine which of the 201 Tennessee Valley counties were eligible for the program. First, the 100 counties with the lowest per capita personal income were chosen; then, of those 100, the 50 counties with the highest percentage of persons below the poverty level were considered eligible for the program.
Intended beneficiaries. The ultimate beneficiaries are the people of the Tennessee Valley region.
Examples of funded projects. One project is a two-way interactive television network in the Upper Cumberland area of Tennessee. The network provides full-motion, multisite, multichannel, simultaneous two-way interactive communication capabilities.

The Technical Assistance Program invests in economic development to increase the production of goods and services and generate a higher standard of living for all citizens of the Tennessee Valley region.
Types of assistance. Advisory services, counseling, architectural and engineering studies, and the dissemination of economic information, as well as investments in the research, development, and implementation of a regional small business incubator network.
Funding levels. Funding for this program includes salaries and expenses. Fiscal year 1991 funding was $21.2 million; funding dropped to $18 million in fiscal year 1994 but rose to $22.5 million in fiscal year 1995.
Eligibility criteria. Within the Tennessee Valley, officers and agencies of state, county, and municipal governments; quasi-public agencies; and private organizations, individuals, and business firms and associations may seek technical advice and assistance in community resource development.
Intended beneficiaries. The ultimate beneficiaries are the people of the Tennessee Valley region.
Examples of funded projects. TVA’s technical services include architectural/engineering, economic research and forecasting, information services support, environmental coordination, and project management.

We visited four telecommunications projects—two economic development projects, one distance learning project, and one medical link project. These projects are funded in part with federal moneys. In addition, we visited a borrower under the Rural Utilities Service’s Rural Telephone Bank loan program. The results of those visits are summarized below.

The Paducah Information Age Park, located in Paducah, Kentucky, includes 650 acres, with 360 acres planned for development. The park is designed for companies that make heavy use of telecommunications and telecommunications-related research and development. Typically, such companies move large volumes of information: data processing companies, reservation businesses, credit card companies, payroll centers, and catalog companies. The park provides a fiber optic system that supports high-quality video conferencing, LAN-to-LAN internetworking, and multimedia communications. The park also includes an on-site digital switching center, which provides a network-based Automatic Call Distributor and Integrated Services Digital Network service as well as other state-of-the-art capabilities. The mission of the park is to create economic growth for the region.

The Chief Executive Officer and Chairman of the Greater Paducah Economic Development Council (GPEDC) said that, by the early 1980s, the Paducah area’s economy had stagnated, and community leaders recognized that new development opportunities were needed. Toward this end, they created GPEDC in 1989. GPEDC first became interested in the feasibility of an information age park in Paducah after an official attended a telecommunications conference in 1989 that was sponsored by a carrier. In 1990, GPEDC formally requested the carrier to identify and quantify the potential economic benefits of “developing an information age park for use as a resource in recruiting information-intensive, high-technology industries.” In March 1991, the carrier contracted for a study to help determine whether information age business parks might be economically feasible in nonmetropolitan areas such as Paducah, Kentucky. The contractor subsequently determined that Paducah/McCracken County would be a suitable location for such a park. The park officially opened in May 1994; according to GPEDC officials, it will be 12 to 15 years before the park is fully developed. The contractor estimated that the park could have an economic impact on the area of $100 million to $300 million, based on adding between 2,500 and 7,500 jobs in two information age parks and the multiplier effects of that employment.
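As a quick check of the contractor’s range, both endpoints imply the same impact per job. The sketch below pairs the low job figure with the low impact figure and the high with the high; that pairing is our assumption, since the underlying study was not available to us.

    # Check of the contractor's estimate for the Paducah Information Age Park:
    # $100 million to $300 million of economic impact from 2,500 to 7,500 jobs,
    # including multiplier effects. The low-with-low/high-with-high pairing is
    # an assumption made for illustration.
    low_impact, high_impact = 100_000_000, 300_000_000
    low_jobs, high_jobs = 2_500, 7_500

    print(low_impact / low_jobs)    # 40000.0 dollars of impact per job
    print(high_impact / high_jobs)  # 40000.0 dollars of impact per job

Both endpoints work out to roughly $40,000 of direct-plus-multiplier impact per job, suggesting that the range was produced by scaling a single per-job estimate across the low and high employment scenarios.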
As of November 1995, four sites had been sold, options to buy were held on three more, and one 12,000- to 15,000-square-foot speculative building was planned. GPEDC officials said that local government entities set aside jurisdictional questions to commit themselves to the park’s ability to provide high-quality services at the lowest possible cost. The city of Paducah has annexed the park because the city can most economically extend public services like water, sewer, and police and fire protection. Officials of McCracken County, in which the project is located, agreed to forgo the tax revenues from the park itself, confident that countywide growth will more than compensate for any short-term revenue losses. Paducah community leaders see the park’s creation as validating their commitment to the “partnering” of various private, public, local, and state organizations. The Chief Executive Officer and Chairman of GPEDC is not aware of any case in which an organization that was asked to be a partner in the project declined to participate. According to GPEDC officials, the park has benefits beyond new jobs for the region: residents have a positive attitude about the economic potential of the community, and new ways of approaching economic development are being considered, such as development involving advanced telecommunications technologies.

GPEDC officials said that the total cost of the project is just over $21 million. According to these officials, a carrier invested $6 million in the project, including building a central office in the park, and TVA provided over $1.8 million in financial and technical assistance toward the development of the park. GPEDC officials said the Commonwealth of Kentucky created a $6.2 million package of combined grants and loans for infrastructure, the conversion of a wetlands area into a lake, and the partial construction of the Resource Center. Public and private local investments total almost $6.7 million, and GPEDC officials said the council itself provided $300,000.

The Purchase Area Development District (PADD), an economic development district serving eight western Kentucky counties, created the Rural Telecommunications Resource Center to serve both public agencies and businesses in its service area in an effort to promote economic development using advanced telecommunications. The resource center is located in Mayfield, Kentucky, in Graves County, and has two conference rooms, encompassing 4,000 square feet of the 15,000-square-foot PADD office complex. The resource center, which officially began operations in October 1995, has advanced teleconferencing capability, including equipment to make live interactive presentations. Additionally, the resource center has a satellite downlink and a geographic information system, a data tool for analyzing and displaying geographically related information. PADD officials said that the resource center has access to other Kentucky networks and that, ultimately, the center will have direct Internet access.

PADD officials expect several benefits, including (1) increased training opportunities for employees of area businesses because of reduced travel costs and time, (2) improved communications between plant managers and their company headquarters and reduced travel costs for these executives, (3) improved business access to customers and suppliers, and (4) improved communication with state regulators and other officials in Frankfort, Kentucky. PADD officials believe that these benefits will enable businesses in their area to become more productive and therefore more competitive in the global economy. Furthermore, PADD officials expect the resource center to be a demonstration project that spawns additional interest in the economic development potential of telecommunications in the PADD area.
PADD officials became interested in telecommunications as a tool for business and economic development in 1989. In October 1990, the Development District held the region’s first seminar on the telecommunications technology available for all types and sizes of businesses. Over the years, PADD has worked with groups such as South Central Bell, the West Kentucky Private Industry Council, and the University of Louisville Telecommunications Research Center to assist area businesses and industries to become better informed about the changing trends in communications and information technology. In 1994, PADD asked over 450 regional businesses and industries about their need for advanced telecommunications capability. To obtain funding for the resource center, PADD officials contacted the Economic Development Administration (EDA) in January 1992. PADD officials said that in February 1992, their agent, the Jackson Purchase Local Officials Organization, in partnership with Murray State, applied for an EDA public works grant. According to PADD officials, that application requested funding for the Rural Telecommunications Resource Center, as well as for linkages between each county and a districtwide economic and information database that PADD maintains. The total cost of the project was estimated at $572,679, with EDA providing a grant of $343,362, Murray State providing “in-kind” equipment valued at $168,907, and the Jackson Purchase Local Officials Organization providing $60,000. However, PADD officials said that EDA turned down the request in the spring of 1992 because it was not the type of project EDA normally funded. According to an EDA representative in Washington, D.C., as well as EDA’s Kentucky state representative, when the application was submitted, EDA generally was unfamiliar with the economic development potential of telecommunications projects. Traditionally, projects funded under the public works program have been for infrastructure items such as water or sewer systems for industrial parks. PADD officials said that EDA subsequently reconsidered the application, and following the visit of an EDA representative from Washington, D.C., in March 1993, PADD officials reworked and resubmitted the application. It was approved in September 1994. The approved project totaled $658,158. EDA supplied a grant of $451,236, Murray State provided $191,922 in “in-kind” equipment, and the Jackson Purchase Local Officials Organization contributed $15,000 in cash. Also, district officials said that they received a $25,000 technical assistance grant from EDA in September 1994 to help fund a full-time PADD position to assist in facilities operation. RODEONET, which began operations in 1992, is a mental health telemedicine project using advanced telecommunications technologies, such as two-way video teleconferencing, to provide selected mental health services and professional development opportunities to consumers and mental health professionals in 13 rural counties in eastern Oregon, an area of about 45,000 square miles. In 1995, the service was expanded to include one site in southern Oregon and three sites on the northwest Oregon coast. RODEONET’s services include consultation/evaluation, preadmission and predischarge interviews, medication management, and staff training and demonstrations. The Eastern Oregon Human Services Consortium, a consortium of community mental health programs, operates RODEONET, using the telecommunications facilities of Oregon’s educational network (ED-NET). 
The size of the eastern Oregon service area, the location of the state’s two public psychiatric hospitals, and Oregon’s laws regarding hearings and admissions to state mental health facilities make a project such as RODEONET a practical way of providing some mental health services to residents and training for mental health professionals in eastern Oregon. The precommitment service, for example, operates in the following manner. If a mental health professional believes that a patient is a danger to himself or others, the mental health professional can have the patient transported to a mental health hospital and held. Oregon has two public psychiatric hospitals—one in northeast Oregon and one in western Oregon. For many rural communities in the extreme eastern and southern parts of Oregon, this often means a trip of hundreds of miles. During periods of inclement weather, the trip can be dangerous. Furthermore, consortium officials said that Oregon law requires that the patient be given a precommitment hearing within 72 hours or released, and the hearing must be presided over by a judge in the county in which the patient lives. If the patient is committed, a total of three long, costly, and frequently hazardous trips to court and to a psychiatric hospital will be made within a few days. When appropriate, two of the three trips can be avoided if video teleconferencing is used in lieu of face-to-face meetings.

Recognizing the potential for reducing training and travel costs and the scarcity of mental health services in many rural communities, the consortium began planning a telemedicine project. In May 1991, it applied for funding from the Department of Health and Human Services’ Office of Rural Health Policy and was subsequently awarded a 3-year Rural Health Outreach demonstration grant. From October 1991 to September 1994, the grant provided over $800,000 of the project’s estimated $1.3 million cost for that 3-year period. RODEONET has been self-sustaining since September 1994, and users are now required to pay their own satellite and access charges. RODEONET member institutions or agencies are now charged $145 per hour for video teleconferencing, with another $20 per hour for each additional site. All of the officials with whom we spoke, including officials from the Office of Rural Health Policy, consider the project a success and believe that there is increased potential for using advanced telecommunications to provide mental health services. RODEONET officials told us that a major factor contributing to the success of the project was that the 13 eastern Oregon counties that are partners in RODEONET had a long history of collaboration on providing mental health services in their nearly 45,000-square-mile service area. Officials stressed that without this long history of collaboration, successful completion of the project would not have been possible.

The Satellite Telecommunications Educational Programming/Pacific Star Schools Partnership (STEP/Star Network) is a satellite-based K-12 distance learning network. Educational Service District (ESD) 101, a state-chartered regional agency located in Spokane, Washington, operates the STEP/Star Network. STEP/Star Network offerings include full-credit traditional courses in subjects such as foreign languages, mathematics, science, and vocational education. The network also offers innovative courses such as Young Astronauts, a course for fourth to sixth graders using space themes to teach math and science.
The courses ESD 101 broadcasts over the STEP/Star Network include those developed by or for the district as well as those developed by other distance education providers. Through the STEP/Star Network, ESD 101 also offers a variety of other services for educators, school administrators, parents, and community leaders, including in-service workshops for college credit, teleconferencing, and parenting classes. ESD 101’s programming serves 31,500 students and 43,000 teachers located in 31 states and six time zones. Nearly 90 percent of the participating schools are in rural areas, and the average Star school is about 80 miles from the nearest university or college. The network’s principal customers are the remote or rural school districts in Alaska, Hawaii, Idaho, Montana, Oregon, and Washington State, and the Colorado and the Central Indiana educational service districts. The network is now expanding into the Pacific territories. The programming is broadcast live from ESD 101’s television studios via satellite uplink. Student and teacher interaction is achieved through a combination of two-way audio, one-way video, and two-way data transmission. Where possible, students’ papers and tests are submitted to instructors electronically. According to district officials, some instructors are developing tests that their students will be able to take on-line.

Participating school districts pay an annual membership fee of $2,950 for basic services for a single site. Each additional site is $150. The membership fee includes the startup equipment needed to interface with the network (e.g., satellite dishes, computers, modems, and scanners). Some courses require the students to use computers. The participating school district is responsible for providing this equipment. The equipment ESD 101 provides for interface with the STEP/Star Network remains the property of ESD 101 and is retrieved from a district that discontinues its participation. Participating school districts are charged varying fees for the K-12 courses they use. For example, Elementary Spanish for grades 1 and 2 costs a flat $500 per site, but Advanced Placement English costs $490 for a maximum of seven students, with each additional student costing $175.

ESD 101 officials said that the original STEP distance education network, which began operating in the district’s service area in 1986, was started because some farsighted school and community officials saw a need to provide educational offerings that the district’s schools would be unable to provide otherwise. They also said that the formation of the first Star program in the five original northwestern states was greatly assisted by the fact that these states had a long history of collaboration and partnerships on regional projects, and that partnership and collaboration has been key to the STEP/Star Network’s subsequent expansion.

Since 1990, ESD 101, on behalf of its STEP/Star partners, has been awarded three successive 2-year Star Schools grants totaling $21.3 million from the Department of Education. These grants have enabled ESD 101 to (1) expand course offerings beyond those initially offered in STEP/Star and (2) expand the area it serves. The first grant totaled about $9.9 million for September 1990 to September 1992. The second grant totaled about $5.2 million for October 1992 to September 1994. The third grant totaled about $6.2 million for October 1994 through September 1996.
As significant as ESD 101’s funding from the Star Schools Program has been, it does not represent all of the District’s funding. According to the District’s superintendent and the District’s annual financial report for the fiscal year ending August 1994, the agency’s total annual operating budget is about $40 million, with about $13.1 million in revenues coming from all sources. Of that amount, only about $3.6 million was from federal sources, and the balance was from local, state, and cooperative programs; payments for other programs; and investment earnings. Amounts from local sources included $1.7 million in tuition and fees and about $331,000 from sales of goods, supplies, and other services. Funds received from the state included an ESD allotment of about $685,000 and $100,700 for traffic safety education. Amounts from federal sources other than STEP/Star Schools included $420,400 for the Job Training Partnership’s payments for a program it operates for the city of Spokane.

The Ringgold Telephone Company began operations in 1912 in Ringgold, Georgia, to serve the citizens of Catoosa County, located in extreme north Georgia. In 1958, Ringgold applied for and received a loan from the Rural Electrification Administration (subsequently organized as a part of RUS). The loan was needed for capital improvements and expansion to keep up with the growth in demand. Today, Ringgold services 11,000 lines, and its equipment includes digital switching gear and 100 miles of fiber-optic lines. According to the company’s executive vice president, if telecommunications is inadequate in any rural area, development of that area will suffer. He said that businesses usually ask about the transmission speed and bandwidth capabilities of the phone system before deciding to locate in the area. He also said that his company works closely with its customers, the Catoosa County Chamber of Commerce, and the Economic Development Commission, which assists rural areas in north Georgia with development planning. He said that forming such partnerships and establishing such plans are an integral part of achieving projects’ success. The executive vice president told us that although he works actively on long-term county planning, he is aware of only two federal programs for telecommunications. He said that the RUS loans have made it possible for the company to provide its customers with advanced technologies. He considers RUS’ requirement that its borrowers maintain a 5-year telecommunications plan a very positive factor.

The National Rural Development Partnership, created in 1991, has as its objective the promotion of (1) innovative and strategic approaches to rural development and (2) collaboration among federal and state agencies involved in rural development. It also helps identify and resolve intergovernmental and interagency impediments. The partnership’s members are drawn from federal agencies involved in rural development, the 39 State Rural Development Councils, and national rural organizations.

The goals of the National Association of Development Organizations are to (1) promote economic development, focusing primarily on rural areas and small towns; (2) serve as a forum for communication and education; and (3) provide technical assistance to its members. Founded in 1967, the organization has more than 300 members drawn primarily from multicounty planning and development agencies.
The Organization for the Protection and Advancement of Small Telephone Companies is a national trade association of nearly 450 small independently owned and operated local exchange carriers serving more than 2 million subscribers in the rural United States. Founded in 1963, the organization represents small independent telephone companies before the Congress and provides a forum for the exchange of ideas and a discussion of mutual problems.

The National Association of Regional Councils has as its members regional planning agencies, councils of government, and development districts. The association was founded in 1967 and has about 230 members. It provides legislative representation in Washington, D.C., and technical assistance to its members through workshops and training programs.

The National Governors Association represents governors at the national level to inform the federal government of the needs and views of the states. The association also provides technical assistance to the governors and serves as a vehicle for sharing information. Founded in 1908, the association has 55 members, including the governors of the 50 states and representatives from Guam, American Samoa, the U.S. Virgin Islands, the Northern Mariana Islands, and the Commonwealth of Puerto Rico.

Robert C. Summers, Assistant Director; John K. Boyle, Project Leader; Sara Bingham; Clifford J. Diehl; Natalie H. Herzog; Carol Herrnstadt Shulman; Frank C. Smith. | Pursuant to a congressional request, GAO reviewed rural communities’ efforts to develop advanced telecommunications technologies, focusing on: (1) federal programs that fund rural telecommunications projects; (2) lessons learned for developing such projects; and (3) whether changes to these programs are needed. 
GAO found that: (1) as of December 1995, there were at least 28 programs that provided discretionary development funds for rural telecommunications projects; (2) 13 designated programs provided about $715.8 million for 540 telecommunications projects; (3) program users and rural development experts believe that rural communities need a basic understanding of telecommunications technologies and their potential benefits, strategic plans to determine the technical and financial feasibility of telecommunications development, and partnerships among key players involved in constructing, financing, and using telecommunications networks; (4) rural development experts and public officials believe that telecommunications programs could be improved by educating rural communities on the potential benefits of telecommunications technologies, building in requirements for considering telecommunications technologies in long-range planning, and making multiple federal programs easier to use; (5) most federal agencies lack the resources required for educational outreach programs; and (6) 1996 legislation emphasizes the need for rural communities to include telecommunications projects in their long-term planning and coordination of multiple federal programs. |
Dr Rosa Sancho, Head of Research at Alzheimer’s Research UK, said: “Being able to detect who is at an increased risk of developing Alzheimer’s could revolutionise the way we evaluate potential new drugs.
“While these genetic risk scores hold promise as valuable research tools, they will need to be thoroughly evaluated, tested and refined before they could ever be used to help doctors diagnose or treat the disease.
“This study does not suggest that having a high polygenic hazard score means you will definitely develop Alzheimer’s, nor does a low score mean you are immune from the disease. Genetics is only part of the story.”
Dr James Pickett, Head of Research, Alzheimer’s Society, added: "Preventing the development of dementia symptoms is the holy grail of Alzheimer’s research but to succeed we first need accurate methods to predict who is most likely to develop the condition.
"This study’s approach was fairly successful at predicting the likelihood of someone developing dementia over the coming year, but needs to be tested further in mixed, non US populations.
"This genetic risk score could help identify people to take part in research studies, but is not opening a door to genetic testing for dementia risk in the clinic.
"For anyone concerned about dementia the first step is to visit your GP. If you’re looking for ways to reduce your risk, remember what’s good for your heart is good for your head, and it may be possible to lower your risk by staying active, eating well, and learning new skills.” ||||| We have developed a PHS for quantifying individual differences in age-specific genetic risk for AD. Within the cohorts studied here, polygenic architecture plays an important role in modifying AD risk beyond APOE. With thorough validation, quantification of inherited genetic variation may prove useful for stratifying AD risk and as an enrichment strategy in therapeutic trials.
Using genotype data from 17,008 AD cases and 37,154 controls from the International Genomics of Alzheimer’s Project (IGAP Stage 1), we identified AD-associated SNPs (at p < 10^-5). We then integrated these AD-associated SNPs into a Cox proportional hazard model using genotype data from a subset of 6,409 AD patients and 9,386 older controls from Phase 1 of the Alzheimer’s Disease Genetics Consortium (ADGC), providing a polygenic hazard score (PHS) for each participant. By combining population-based incidence rates and the genotype-derived PHS for each individual, we derived estimates of instantaneous risk for developing AD, based on genotype and age, and tested replication in multiple independent cohorts (ADGC Phase 2, National Institute on Aging Alzheimer’s Disease Center [NIA ADC], and Alzheimer’s Disease Neuroimaging Initiative [ADNI], total n = 20,680). Within the ADGC Phase 1 cohort, individuals in the highest PHS quartile developed AD at a considerably lower age and had the highest yearly AD incidence rate. Among APOE ε3/3 individuals, the PHS modified expected age of AD onset by more than 10 y between the lowest and highest deciles (hazard ratio 3.34, 95% CI 2.62–4.24, p = 1.0 × 10^-22). In independent cohorts, the PHS strongly predicted empirical age of AD onset (ADGC Phase 2, r = 0.90, p = 1.1 × 10^-26) and longitudinal progression from normal aging to AD (NIA ADC, Cochran–Armitage trend test, p = 1.5 × 10^-10), and was associated with neuropathology (NIA ADC, Braak stage of neurofibrillary tangles, p = 3.9 × 10^-6, and Consortium to Establish a Registry for Alzheimer’s Disease score for neuritic plaques, p = 6.8 × 10^-6) and in vivo markers of AD neurodegeneration (ADNI, volume loss within the entorhinal cortex, p = 6.3 × 10^-6, and hippocampus, p = 7.9 × 10^-5). Additional validation of these results in non-US, non-white, and prospective community-based cohorts is necessary before clinical use.
Competing interests: I have read the journal’s policy and the authors of this manuscript have the following competing interests: JBB served on advisory boards for Elan, Bristol-Myers Squibb, Avanir, Novartis, Genentech, and Eli Lilly and holds stock options in CorTechs Labs, Inc. and Human Longevity, Inc. AMD is a founder of and holds equity in CorTechs Labs, Inc., and serves on its Scientific Advisory Board. He is also a member of the Scientific Advisory Board of Human Longevity, Inc. (HLI), and receives research funding from General Electric Healthcare (GEHC). The terms of these arrangements have been reviewed and approved by the University of California, San Diego in accordance with its conflict of interest policies. AG serves on or has served on, within the last 3 years, the scientific advisory boards of the following companies: Denali Therapeutics, Cognition Therapeutics and AbbVie. BM served as guest editor on PLOS Medicine’s Special Issue on Dementia.
Funding: This work was supported by grants from the National Institutes of Health (NIH-AG046374, K01AG049152, R01MH100351), National Alzheimer’s Coordinating Center Junior Investigator Award (RSD), Radiological Society of North America Resident/Fellow Award (RSD), Foundation of the American Society of Neuroradiology Alzheimer’s Imaging Grant (RSD), the Research Council of Norway (#213837, #225989, #223273, #237250/EU JPND), the South East Norway Health Authority (2013-123), Norwegian Health Association, and the KG Jebsen Foundation. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Building on a prior approach evaluating GWAS-detected genetic variants for disease prediction [ 6 ] and using a survival analysis framework, we tested the feasibility of combining AD-associated SNPs and APOE status into a continuous-measure polygenic hazard score (PHS) for predicting the age-specific risk for developing AD. We assessed replication of the PHS using several independent cohorts.
In addition to the single nucleotide polymorphism (SNP) in APOE, recent genome-wide association studies (GWASs) have identified numerous AD-associated SNPs, most of which have a small effect on disease risk [ 4 , 5 ]. Although no single polymorphism may be informative clinically, a combination of APOE and non-APOE SNPs may help identify older individuals at increased risk for AD. Despite their detection of novel AD-associated genes, GWAS findings have not yet been incorporated into a genetic epidemiology framework for individualized risk prediction.
Late-onset Alzheimer disease (AD), the most common form of dementia, places a large emotional and economic burden on patients and society. With increasing health care expenditures among cognitively impaired elderly individuals [ 1 ], identifying individuals at risk for developing AD is of utmost importance for potential preventative and therapeutic strategies. Inheritance of the ε4 allele of apolipoprotein E (APOE) on Chromosome 19q13 is the most significant risk factor for developing late-onset AD [ 2 ]. APOE ε4 has a dose-dependent effect on age of onset, increases AD risk 3-fold in heterozygotes and 15-fold in homozygotes, and is implicated in 20%–25% of AD cases [ 3 ].
We examined the association between our PHS and established in vivo and pathological markers of AD neurodegeneration. Using linear models, we assessed whether the PHS was associated with Braak stage for NFTs and CERAD score for neuritic plaques, as well as CSF Aβ1–42 and CSF total tau. Using linear mixed effects models, we also investigated whether the PHS was associated with longitudinal CDR-SB score and volume loss within the entorhinal cortex and hippocampus. In all analyses, we co-varied for the effects of age and sex.
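To make the two model families concrete, here is a minimal sketch in Python with statsmodels. The dataframe, column names, effect sizes, and simulated values are illustrative assumptions, not the study’s data; the sketch only mirrors the structure of the analyses described above (a linear model for Braak stage and a mixed-effects model for longitudinal CDR-SB, co-varying for age and sex).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in data: two visits per participant. All columns and
# effect sizes below are hypothetical.
rng = np.random.default_rng(0)
n_subj = 150
df = pd.DataFrame({
    "subject_id": np.repeat(np.arange(n_subj), 2),
    "phs": np.repeat(rng.normal(0, 1, n_subj), 2),
    "age": np.repeat(rng.uniform(65, 90, n_subj), 2),
    "sex": np.repeat(rng.integers(0, 2, n_subj), 2),
    "years": np.tile([0.0, 2.0], n_subj),
})
df["braak_stage"] = 2 + 0.5 * df["phs"] + rng.normal(0, 1, len(df))
df["cdr_sb"] = 1 + 0.3 * df["phs"] * df["years"] + rng.normal(0, 0.5, len(df))

# Cross-sectional linear model: PHS association with Braak stage,
# adjusting for age and sex.
braak_fit = smf.ols("braak_stage ~ phs + age + sex", data=df).fit()

# Longitudinal linear mixed-effects model: PHS-by-time interaction for
# CDR-SB, with a random intercept per participant.
cdr_fit = smf.mixedlm("cdr_sb ~ phs * years + age + sex",
                      data=df, groups=df["subject_id"]).fit()

print(braak_fit.params["phs"], cdr_fit.params["phs:years"])
```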
Because case–control samples cannot provide the proper baseline hazard [ 17 ], we used previously reported annualized incidence rates by age estimated from the general US population [ 18 ]. For each participant, by combining the overall population-derived incidence rates [ 18 ] and the genotype-derived PHS, we calculated the individual’s “instantaneous risk” for developing AD, based on their genotype and age (for additional details see S1 Appendix ). To independently assess the predicted instantaneous risk, we evaluated longitudinal follow-up data from 2,724 cognitively normal older individuals from the NIA ADC with at least 2 y of clinical follow-up. We assessed the number of cognitively normal individuals progressing to AD as a function of the predicted PHS risk strata and examined whether the predicted PHS-derived incidence rate reflected the empirical progression rate using a Cochran–Armitage trend test.
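The calibration step can be sketched in a few lines. This is a simplified reading of the approach: it assumes an individual’s hazard is the baseline hazard scaled by exp(PHS), and it ignores the survivor-selection effects the full model accounts for. The incidence numbers below are placeholders, not the published rates.

```python
import numpy as np

# Placeholder age-specific AD incidence for the general population,
# per 100 person-years (illustrative values only).
ages = np.array([65, 70, 75, 80, 85, 90, 95])
pop_incidence = np.array([0.2, 0.4, 0.9, 1.8, 3.5, 6.5, 11.0])

# Simulated PHS distribution for a reference population.
phs = np.random.default_rng(1).normal(0.0, 0.5, 100_000)

# Calibrate a baseline hazard so that the population average of individual
# hazards, E[baseline(age) * exp(PHS)], reproduces the published incidence.
baseline = pop_incidence / np.exp(phs).mean()

def instantaneous_risk(individual_phs: float) -> np.ndarray:
    """Age-specific annualized incidence (per 100 person-years)."""
    return baseline * np.exp(individual_phs)

# A person one standard deviation above the mean PHS:
print(dict(zip(ages.tolist(), instantaneous_risk(0.5).round(2))))
```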
To assess for replication, we first examined whether the predicted PHSs derived from the ADGC Phase 1 cohort could stratify individuals into different risk strata within the ADGC Phase 2 cohort. We next evaluated the relationship between predicted age of AD onset and the empirical (actual) age of AD onset using cases from ADGC Phase 2. We grouped cases into percentile bins of predicted risk and used the mean actual age of AD onset within each percentile as the empirical age of AD onset. In a similar fashion, we additionally tested replication within the NIA ADC subset classified at autopsy as having a high level of AD neuropathological change [13].
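As an illustration of the binning step, the sketch below groups simulated cases into bins of predicted onset and correlates bin-mean predicted onset with bin-mean actual onset. The data, and the use of deciles rather than finer percentiles, are assumptions for demonstration only.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)

# Simulated cases: model-predicted onset age vs. actual onset age.
predicted_onset = rng.normal(80, 5, 2000)
actual_onset = predicted_onset + rng.normal(0, 3, 2000)

# Group cases into deciles of predicted onset and compare bin means.
edges = np.percentile(predicted_onset, np.arange(10, 100, 10))
bin_idx = np.digitize(predicted_onset, edges)
pred_means = np.array([predicted_onset[bin_idx == k].mean() for k in range(10)])
emp_means = np.array([actual_onset[bin_idx == k].mean() for k in range(10)])

r, p = pearsonr(pred_means, emp_means)
print(f"r = {r:.2f}, p = {p:.1e}")
```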
Using the IGAP Stage 1 sample, we first identified a list of SNPs associated with increased risk for AD, using a significance threshold of p < 10^-5. Next, we evaluated all IGAP-detected AD-associated SNPs within the ADGC Phase 1 case–control dataset. Using a stepwise procedure in survival analysis, we delineated the “final” list of SNPs for constructing the PHS [14,15]. Specifically, using Cox proportional hazard models, we identified the top AD-associated SNPs within the ADGC Phase 1 cohort (excluding NIA ADC and ADNI samples), while controlling for the effects of gender, APOE variants, and the top five genetic principal components (to control for the effects of population stratification). We utilized age of AD onset and age of last clinical visit to estimate age-specific risks [16] and derived a PHS for each participant. In each step of the stepwise procedure, the algorithm selected the one SNP from the pool that most improved model prediction (i.e., minimizing the Martingale residuals); additional SNP inclusion that did not further minimize the residuals resulted in halting of the SNP selection process. To prevent overfitting in this training step, we used 1,000× bootstrapping for model averaging and estimating the hazard ratios for each selected SNP. We assessed the proportional hazard assumption in the final model using graphical comparisons.
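The stepwise procedure amounts to a greedy forward-selection loop around a Cox model. The sketch below, using the lifelines library, is a simplified illustration with hypothetical column names; the actual pipeline additionally used 1,000× bootstrapping for model averaging when estimating the hazard ratios.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Greedy forward selection of SNPs for a Cox proportional hazards model,
# scored by the sum of squared martingale residuals. `df` is assumed to
# hold one row per participant with 'age' (onset or last visit), 'AD'
# (1 = case, 0 = control), fixed covariate columns (sex, APOE terms,
# principal components), and one dosage column per candidate SNP.

def stepwise_snp_selection(df: pd.DataFrame, candidate_snps: list[str],
                           covariates: list[str],
                           duration_col: str = "age",
                           event_col: str = "AD") -> list[str]:
    selected: list[str] = []
    best_score = float("inf")
    remaining = list(candidate_snps)
    while remaining:
        scores = {}
        for snp in remaining:
            cols = covariates + selected + [snp, duration_col, event_col]
            cph = CoxPHFitter().fit(df[cols], duration_col, event_col)
            resid = cph.compute_residuals(df[cols], kind="martingale")
            scores[snp] = float((resid["martingale"] ** 2).sum())
        best_snp = min(scores, key=scores.get)
        if scores[best_snp] >= best_score:
            break  # no SNP further reduces the residuals: stop selecting
        best_score = scores[best_snp]
        selected.append(best_snp)
        remaining.remove(best_snp)
    return selected
```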
We followed three steps to derive the PHS for predicting age of AD onset: (1) we defined the set of associated SNPs, (2) we estimated hazard ratios for polygenic profiles, and (3) we calculated individualized absolute hazards (see S1 Appendix for a detailed description of these steps).
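Concretely, once the per-SNP log hazard ratios are estimated, the score itself reduces to a weighted sum of risk-allele dosages. A minimal sketch with invented numbers (the real model retained 31 SNPs plus two APOE variants):

```python
import numpy as np

rng = np.random.default_rng(3)

n_people, n_snps = 5, 31          # the final model retained 31 SNPs
dosages = rng.integers(0, 3, size=(n_people, n_snps))  # 0, 1, or 2 risk alleles
log_hr = rng.normal(0.0, 0.1, size=n_snps)             # Cox log hazard ratios

# PHS_i = sum_j beta_j * x_ij; APOE variant terms enter the same way.
phs = dosages @ log_hr

# exp(PHS - mean PHS) is the hazard relative to the average genetic profile.
print(np.exp(phs - phs.mean()).round(2))
```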
To assess the relationship between polygenic risk and in vivo biomarkers, we evaluated an ADGC-independent sample of 692 older controls and participants with mild cognitive impairment or AD from the ADNI (see S1 Appendix). Briefly, the ADNI is a multicenter, multisite longitudinal study assessing clinical, imaging, genetic, and biospecimen biomarkers from US-based participants through the process of normal aging to early mild cognitive impairment, to late mild cognitive impairment, to dementia or AD (see S1 Appendix). Here, we focused specifically on participants from ADNI 1 with cognitive, imaging, and cerebrospinal fluid (CSF) assessments from 2003 to 2010. In a subset of ADNI 1 participants with available genotype data, we evaluated baseline CSF level of Aβ1–42 and total tau, as well as longitudinal Clinical Dementia Rating Sum of Boxes (CDR-SB) scores. In ADNI 1 participants with available genotype and quality-assured baseline and follow-up MRI scans, we also assessed longitudinal subregional change in medial temporal lobe volume (atrophy) on 2,471 serial T1-weighted MRI scans (for additional details see S1 Appendix).
To assess longitudinal prediction, we evaluated an ADGC-independent sample of 2,724 cognitively normal elderly individuals. Briefly, all participants were US based, evaluated at National Institute of Aging–funded Alzheimer’s Disease Centers (data collection coordinated by the National Alzheimer’s Coordinating Center [NACC]) and clinically followed for at least two years (enrollment from 1984 to 2012, evaluation years were 2005 to 2016) [ 10 ]. Here, we focused on older individuals defined at baseline as having an overall Clinical Dementia Rating score of 0.0. To assess the relationship between polygenic risk and neuropathology, we assessed 2,960 participants from the NIA ADC samples with genotype and neuropathological evaluations. For the neuropathological variables, we examined the Braak stage for neurofibrillary tangles (NFTs) (0, none; I–II, entorhinal; III–IV, limbic; and V–VI, isocortical) [ 11 ] and the Consortium to Establish a Registry for Alzheimer’s Disease (CERAD) score for neuritic plaques (none/sparse, moderate, or frequent) [ 12 ]. Finally, as an additional independent replication sample, we evaluated all NIA ADC AD cases with genetic data who were classified at autopsy as having a high level of AD neuropathological change (n = 361), based on the revised National Institute of Aging–Alzheimer’s Association AD neuropathology criteria [ 13 ]. The institutional review boards of all participating institutions approved the procedures for all NIA ADC sub-studies. Written informed consent was obtained from all participants or surrogates.
To develop the survival model for the PHS, we first evaluated age of onset and raw genotype data from 6,409 patients with clinically diagnosed AD and 9,386 cognitively normal older individuals provided by the Alzheimer’s Disease Genetics Consortium (ADGC) (Phase 1, a subset of the IGAP dataset), excluding individuals from the National Institute of Aging Alzheimer’s Disease Center (NIA ADC) and Alzheimer’s Disease Neuroimaging Initiative (ADNI) samples. To evaluate replication of the PHS, we used an independent sample of 6,984 AD patients and 10,972 cognitively normal older individuals from the ADGC Phase 2 cohort (Table 1). The genotype and phenotype data within the ADGC datasets have been described in detail elsewhere [7,8]. Briefly, the ADGC Phase 1 and 2 datasets (enrollment from 1984 to 2012) consist of case–control, prospective, and family-based sub-studies of white participants with AD occurrence after age 60 y derived from the general community and Alzheimer’s Disease Centers across the US. Participants with autosomal dominant (APP, PSEN1, and PSEN2) mutations were excluded. All participants were genotyped using commercially available high-density SNP microarrays from Illumina or Affymetrix. Clinical diagnosis of AD within the ADGC sub-studies was established using NINCDS-ADRDA criteria for definite, probable, and possible AD [9]. For most participants, age of AD onset was obtained from medical records and defined as the age when AD symptoms manifested, as reported by the participant or an informant. For participants lacking age of onset, age at ascertainment was used. Patients with an age at onset or age at death less than 60 y and individuals of non-European ancestry were excluded from the analyses. All ADGC Phase 1 and 2 control participants were defined within individual sub-studies as cognitively normal older adults at time of clinical assessment. The institutional review boards of all participating institutions approved the procedures for all ADGC sub-studies. Written informed consent was obtained from all participants or surrogates. For additional details regarding the ADGC datasets, please see [7,8].
To select AD-associated SNPs, we evaluated publicly available AD GWAS summary statistic data (p-values and odds ratios) from the International Genomics of Alzheimer’s Project (IGAP) (Stage 1; for additional details see S1 Appendix and [ 4 ]). For selecting AD-associated SNPs, we used IGAP Stage 1 data, from 17,008 AD cases and 37,154 controls drawn from four different consortia across North America and Europe (including the United States of America, England, France, Holland, and Iceland) with genotyped or imputed data at 7,055,881 SNPs (for a description of the AD cases and controls within the IGAP Stage 1 sub-studies, please see Table 1 and [ 4 ]).
We found that the PHS was significantly associated with Braak stage of NFTs (β-coefficient = 0.115, standard error [SE] = 0.024, p-value = 3.9 × 10^-6) and CERAD score for neuritic plaques (β-coefficient = 0.105, SE = 0.023, p-value = 6.8 × 10^-6). We additionally found that the PHS was associated with worsening CDR-SB score over time (β-coefficient = 2.49, SE = 0.38, p-value = 1.1 × 10^-10), decreased CSF Aβ1–42 (reflecting increased intracranial Aβ plaque load) (β-coefficient = −0.07, SE = 0.01, p-value = 1.28 × 10^-7), increased CSF total tau (β-coefficient = 0.03, SE = 0.01, p-value = 0.05), and greater volume loss within the entorhinal cortex (β-coefficient = −0.022, SE = 0.005, p-value = 6.30 × 10^-6) and hippocampus (β-coefficient = −0.021, SE = 0.005, p-value = 7.86 × 10^-5).
[Figure 4 caption] The gray line represents the population baseline estimate. Dashed lines represent incidence rates in APOE ε4 carriers (dark red dashed line) and non-carriers (light blue dashed line) not associated with a PHS percentile. The asterisk indicates that the baseline estimation is based on previously reported annualized incidence rates by age in the general US population [18]. PHS, polygenic hazard score.
Given an individual’s genetic profile and age, the corrected survival proportion can be translated directly into incidence rates (Fig 4; Tables 3 and S1). As previously reported in a meta-analysis summarizing four studies from the US general population [18], the annualized incidence rate represents the proportion (in percent) of individuals in a given risk stratum and age who have not yet developed AD but will develop AD in the following year; thus, the annualized incidence rate represents the instantaneous risk for developing AD conditional on having survived up to that point in time. For example, for a cognitively normal 65-y-old individual in the 80th percentile of the PHS, the incidence rate (per 100 person-years) would be 0.29 at age 65 y, 1.22 at age 75 y, 5.03 at age 85 y, and 20.82 at age 95 y (Fig 4; Table 3); in contrast, for a cognitively normal 65-y-old in the 20th percentile of the PHS, the incidence rate would be 0.10 at age 65 y, 0.43 at age 75 y, 1.80 at age 85 y, and 7.43 at age 95 y (Fig 4; Table 3). As independent validation, we examined whether the PHS-predicted incidence rate reflects the empirical progression rate (from normal control to clinical AD) (Fig 5). We found that the PHS-predicted incidence was strongly associated with empirical progression rates (Cochran–Armitage trend test, p = 1.5 × 10^-10).
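These quoted rates can be turned into an approximate cumulative risk by compounding yearly survival. In the sketch below, the four rates for the 80th-percentile individual are taken from the text, but the log-linear interpolation between the quoted ages is our assumption, not the paper’s method.

```python
import numpy as np

# Annualized incidence for the 80th PHS percentile, per 100 person-years,
# at the ages quoted in the text.
ages = np.array([65, 75, 85, 95])
rate_per_100 = np.array([0.29, 1.22, 5.03, 20.82])

# Interpolate a yearly hazard schedule (log-linear between quoted ages).
yearly_ages = np.arange(65, 96)
yearly_rates = np.exp(np.interp(yearly_ages, ages, np.log(rate_per_100)))

# Survival to each age is the running product of (1 - yearly hazard).
survival = np.cumprod(1 - yearly_rates / 100)
cumulative_risk = 1 - survival[-1]
print(f"Approximate risk of onset by age 95: {cumulative_risk:.0%}")
```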
To evaluate the risk for developing AD, combining the estimated hazard ratios from the ADGC cohort, allele frequencies for each of the AD-associated SNPs from the 1000 Genomes Project, and the disease incidence in the general US population [ 18 ], we generated population baseline-corrected survival curves given an individual’s genetic profile and age (panels A and B of second figure in S1 Appendix ). We found that PHS status modifies both the risk for developing AD and the distribution of age of onset (panels A and B of second figure in S1 Appendix ).
[Figure 3 caption] (A) Risk stratification in ADGC Phase 2 cohort, using PHSs derived from ADGC Phase 1 dataset. The dashed lines and shaded regions represent Kaplan–Meier estimations with 95% confidence intervals. (B) Predicted age of AD onset as a function of empirical age of AD onset among cases in ADGC Phase 2 cohort. Prediction is based on the final survival model trained in the ADGC Phase 1 dataset. AD, Alzheimer disease; ADGC, Alzheimer’s Disease Genetics Consortium; PHS, polygenic hazard score.
To assess replication, we applied the ADGC Phase 1–trained model to independent samples from ADGC Phase 2. Using the empirical distributions, we found that the PHS successfully stratified individuals from independent cohorts into different risk strata (Fig 3A). Among AD cases in the ADGC Phase 2 cohort, we found that the predicted age of onset was strongly associated with the empirical (actual) age of onset (binned in percentiles, r = 0.90, p = 1.1 × 10^-26; Fig 3B). Similarly, within the NIA ADC subset with a high level of AD neuropathological change, we found that the PHS strongly predicted time to progression to neuropathologically defined AD (Cox proportional hazard model, z = 11.8723, p = 2.8 × 10^-32).
To quantify the additional prediction provided by polygenic information beyond APOE, we evaluated how the PHS modulates age of AD onset in APOE ε3/3 individuals. Among these individuals, we found that age of AD onset can vary by more than 10 y, depending on polygenic risk. For example, for an APOE ε3/3 individual in the tenth decile (top 10%) of the PHS, at 50% risk for meeting clinical criteria for AD diagnosis, the expected age of developing AD is approximately 84 y (Fig 2); however, for an APOE ε3/3 individual in the first decile (bottom 10%) of the PHS, the expected age of developing AD is approximately 95 y (Fig 2). The hazard ratio comparing the tenth decile to the first decile is 3.34 (95% CI 2.62–4.24, log rank test p = 1.0 × 10^-22). Similarly, we also evaluated the relationship between the PHS and the different APOE alleles (ε2/3/4) (first figure in S1 Appendix). These findings show that, beyond APOE, the polygenic architecture plays an integral role in affecting AD risk.
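The decile contrast is, in effect, a two-group survival comparison. A sketch with simulated onset ages (not the study’s data), using lifelines’ Kaplan–Meier fitter and log-rank test:

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(4)
n = 400

# Hypothetical onset ages echoing the reported pattern: earlier onset in
# the top PHS decile (~84 y) than in the bottom decile (~95 y).
top_decile = rng.normal(84, 6, n)
bottom_decile = rng.normal(95, 6, n)
events = np.ones(n)  # assume every onset is observed (no censoring)

result = logrank_test(top_decile, bottom_decile,
                      event_observed_A=events, event_observed_B=events)
print(f"log-rank p = {result.p_value:.1e}")

km = KaplanMeierFitter().fit(top_decile, events, label="top PHS decile")
print(f"median onset, top decile: {km.median_survival_time_:.1f} y")
```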
[Figure 1 caption] The proportional hazard assumptions were checked based on graphical comparisons between Kaplan–Meier estimations (dashed lines) and Cox proportional hazard models (solid lines). The 95% confidence intervals of Kaplan–Meier estimations are also demonstrated (shaded with corresponding colors). The baseline hazard (gray line) in this model is based on the mean of ADGC data. ADGC, Alzheimer’s Disease Genetics Consortium; ADNI, Alzheimer’s Disease Neuroimaging Initiative; NIA ADC, National Institute on Aging Alzheimer’s Disease Center; PHS, polygenic hazard score.
From the IGAP cohort, we found 1,854 SNPs associated with increased risk for AD at p < 10^-5. Of these, using the Cox stepwise regression framework, we identified 31 SNPs, in addition to two APOE variants, within the ADGC cohort for constructing the polygenic model (Table 2). Fig 1 illustrates the relative risk for developing AD using the ADGC Phase 1 case–control cohort. The graphical comparisons among Kaplan–Meier estimations and Cox proportional hazard models indicate that the proportional hazard assumption holds for the final model (Fig 1).
Discussion
In this study, by integrating AD-associated SNPs from recent GWASs and disease incidence estimates from the US population into a genetic epidemiology framework, we have developed a novel PHS for quantifying individual differences in risk for developing AD, as a function of genotype and age. The PHS systematically modified age of AD onset, and was associated with known in vivo and pathological markers of AD neurodegeneration. In independent cohorts (including a neuropathologically confirmed dataset), the PHS successfully predicted empirical (actual) age of onset and longitudinal progression from normal aging to AD. Even among individuals who do not carry the ε4 allele of APOE (the majority of the US population), we found that polygenic information was useful for predicting age of AD onset.
Using a case–control design, prior work has combined GWAS-associated polymorphisms and disease prediction models to predict risk for AD [19–24]. Rather than representing a continuous process where non-demented individuals progress to AD over time, the case–control approach implicitly assumes that normal controls do not develop dementia and treats the disease process as a dichotomous variable where the goal is maximal discrimination between diseased “cases” and healthy “controls.” Given the striking age dependence of AD, this approach is clinically suboptimal for estimating the risk of AD. Building on prior genetic estimates from the general population [2,25], we employed a survival analysis framework to integrate AD-associated common variants with established population-based incidence [18] to derive a continuous measure, the PHS. We note that the PHS can estimate individual differences in AD risk across a lifetime and can quantify the yearly incidence rate for developing AD.
These findings indicate that lifetime risk and age of AD onset vary by polygenic profile. For example, the annualized incidence rate (risk for developing AD in a given year) is considerably lower for an 80-y-old individual in the 20th percentile of the PHS than for an 80-y-old in the 99th percentile of the PHS (Fig 4; Table 3). Across the lifespan (panel B of second figure in S1 Appendix), our results indicate that even individuals with low genetic risk (low PHS) develop AD, but at a later peak age of onset. Certain loci (including APOE ε2) may “protect” against AD by delaying, rather than preventing, disease onset.
Our polygenic results provide important predictive information beyond APOE. Among APOE ε3/3 individuals, who constitute 70%–75% of all individuals diagnosed with late-onset AD, age of onset varies by more than 10 y, depending on polygenic risk profile (Fig 2). At 60% AD risk, APOE ε3/3 individuals in the tenth decile of the PHS have an expected age of onset of 85 y, whereas for individuals in the first decile of the PHS, the expected age of onset is greater than 95 y. These findings are directly relevant to the general population, where APOE ε4 accounts for only a fraction of AD risk [3], and are consistent with prior work [26] indicating that AD is a polygenic disease where non-APOE genetic variants contribute significantly to disease etiology.
We found that the PHS strongly predicted age of AD onset within the ADGC Phase 2 dataset and the NIA ADC neuropathology-confirmed subset, demonstrating independent replication of our polygenic score. Within the NIA ADC sample, the PHS robustly predicted longitudinal progression from normal aging to AD, illustrating that polygenic information can be used to identify the cognitively normal older individuals at highest risk for developing AD (preclinical AD). We found a strong relationship between the PHS and increased tau-associated NFTs and amyloid plaques, suggesting that elevated genetic risk may make individuals more susceptible to underlying AD pathology. Consistent with recent studies showing correlations between AD polygenic risk scores and markers of AD neurodegeneration [22,23], our PHS also demonstrated robust associations with CSF Aβ1–42 levels, longitudinal MRI measures of medial temporal lobe volume loss, and longitudinal CDR-SB scores, illustrating that increased genetic risk may increase the likelihood of clinical progression and developing neurodegeneration measured in vivo.
From a clinical perspective, our genetic risk score may serve as a “risk factor” for accurately identifying older individuals at greatest risk for developing AD, at a given age. Conceptually similar to other polygenic risk scores (for a review of this topic see [27]) for assessing coronary artery disease risk [28] and breast cancer risk [29], our PHS may help in predicting which individuals will test “positive” for clinical, CSF, or imaging markers of AD pathology. Importantly, a continuous polygenic measure of AD genetic risk may provide an enrichment strategy for prevention and therapeutic trials and could also be useful for predicting which individuals may respond to therapy. From a disease management perspective, by providing an accurate probabilistic assessment regarding the likelihood of AD neurodegeneration, determining a “genomic profile” of AD may help initiate a dialogue on future planning. Finally, a similar genetic epidemiology framework may be useful for quantifying the risk associated with numerous other common diseases.
There are several limitations to our study. We primarily focused on individuals of European descent. Given that AD incidence [30], genetic risk [25,31], and likely linkage disequilibrium in African-American and Latino individuals are different from those in white individuals, additional work will be needed to develop a polygenic risk model in non-white (and non-US) populations. The majority of the participants evaluated in our study were recruited from specialized memory clinics or AD research centers and may not be representative of the general US population. In order to be clinically useful, we note that our PHS needs to be prospectively validated in large community-based cohorts, preferably consisting of individuals from a range of ethnicities. The previously reported population annualized incidence rates were not separately provided for males and females [18]. Therefore, we could not report PHS annualized incidence rates stratified by sex. We note that we primarily focused on genetic markers and thus did not evaluate how other variables, such as environmental or lifestyle factors, in combination with genetics impact age of AD onset. Another limitation is that our PHS may not be able to distinguish pure AD from a “mixed dementia” presentation since cerebral small vessel ischemic/hypertensive pathology often presents concomitantly with AD neurodegeneration, and additional work will be needed on cohorts with mixed dementia to determine the specificity of our polygenic score. Finally, we focused on APOE and GWAS-detected polymorphisms for disease prediction. Given the flexibility of our genetic epidemiology framework, it can be used to investigate whether a combination of common and rare genetic variants along with clinical, cognitive, and imaging biomarkers may prove useful for refining the prediction of age of AD onset.
In conclusion, by integrating population-based incidence proportion and genome-wide data into a genetic epidemiology framework, we have developed a PHS for quantifying the age-associated risk for developing AD. Measures of polygenic variation may prove useful for stratifying AD risk and as an enrichment strategy in clinical trials. ||||| Test based on 31 genetic markers could be used to calculate any individual’s yearly risk for onset of disease
Scientists have developed a new genetic test for Alzheimer’s risk that can be used to predict the age at which a person will develop the disease.
A high score on the test, which is based on 31 genetic markers, can translate to being diagnosed many years earlier than those with a low-risk genetic profile, the study found. Those ranked in the top 10% in terms of risk were more than three times as likely to develop Alzheimer’s during the course of the study, and did so more than a decade before those who ranked in the lowest 10%.
Rahul Desikan, of the University of California, who led the international effort, said the test could be used to calculate any individual’s risk of developing Alzheimer’s that year.
“That is, if you don’t already have dementia, what is your yearly risk for AD onset, based on your age and genetic information,” he said.
The so-called polygenic hazard score test was developed using genetic data from more than 70,000 individuals, including patients with Alzheimer’s disease and healthy elderly people.
It is already known that genetics plays a powerful role in Alzheimer’s. Around a quarter of patients have a strong family history of the disease, and scientists have shown this is partly explained by a gene called ApoE, which comes in three versions and is known to have a powerful influence on the risk of getting the most common late-onset type of Alzheimer’s. One version of ApoE appears to reduce risk by up to 40%, while carrying two copies (one from each parent) of the high-risk version can increase risk as much as 12-fold.
The latest study takes a new approach, showing that, aside from ApoE, there are thousands of background genetic variations that each have a tiny influence on Alzheimer’s risk, but whose cumulative influence is substantial.
The researchers first identified nearly 2,000 single letter differences in the genetic code (known as SNPs) and, after ranking them for influence, developed a test based on 31 of the markers. The test was then used to accurately predict an individual’s risk of getting the disease in an independent patient cohort.
In people with the high-risk version of ApoE, those ranked in the top 10% of risk on the new test got Alzheimer’s at an average age of 84 years, compared with 95 years for those ranked in the lowest 10%.
James Pickett, head of research at Alzheimer’s Society, said: “Preventing the development of dementia symptoms is the holy grail of Alzheimer’s research but to succeed we first need accurate methods to predict who is most likely to develop the condition. This study’s approach was fairly successful at predicting the likelihood of someone developing dementia over the coming year, but needs to be tested further in mixed, non-US populations.”
Pickett added that, while the score could help to identify people for trials, it was too early to apply it as a genetic testing tool for use in the clinic.
Rosa Sancho, head of research at Alzheimer’s Research UK, said that while genetic makeup can influence the chances of developing dementia, a healthy diet, regular physical activity and remaining mentally active can also drive down the risk. “Genetics is only part of the story and we know that lifestyle factors also influence our risk of developing Alzheimer’s,” she said. “The best current evidence points to habits we can all adopt to help lower our risk and indicates that what’s good for your heart is also good for the brain.”
The findings are published in the journal PLOS Medicine. |||||
An international team of scientists, led by researchers at University of California San Diego School of Medicine and University of California San Francisco, has developed a novel genetic score that allows individuals to calculate their age-specific risk of developing Alzheimer’s disease (AD), based upon genetic information.
A description of the polygenic hazard scoring (PHS) system and its validation are published in the March 21 online issue of PLOS Medicine.
“We combined genetic data from large, independent cohorts of patients with AD with epidemiological estimates to create the scoring, then replicated our findings on an independent sample and validated them with known biomarkers of Alzheimer’s pathology,” said co-first author Rahul S. Desikan, MD, PhD, clinical instructor in the UCSF Department of Radiology & Biomedical Imaging.
Specifically, the researchers combined genotype-derived polygenic information with known AD incidence rates from the U.S. population to derive instantaneous risk estimates for developing AD.
“For any given individual, for a given age and genetic information, we can calculate your ‘personalized’ annualized risk for developing AD,” said Desikan. “That is, if you don’t already have dementia, what is your yearly risk for AD onset, based on your age and genetic information. We think these measures of polygenetic risk, involving multiple genes, will be very informative for early AD diagnosis, both in determining prognosis and as an enrichment strategy in clinical trials.”
To conduct the study, the research team analyzed genotype data from more than 70,000 AD patients and normal elderly controls who were participating in several projects, such as the Alzheimer’s Disease Genetics Consortium, the National Alzheimer’s Coordinating Center and the Alzheimer’s Disease Neuroimaging Initiative. The team scrutinized the data for AD-associated single nucleotide polymorphisms (SNPs), which are variations of a single nucleotide or DNA building block that occur at a specific position in the genome. There is some SNP variation in genomic information in all humans, which affects individual susceptibility to disease. In this case, the researchers looked at SNPs linked to AD risk and for APOE status. Persons with the E4 variant in the APOE gene are known to be at greater risk of developing late-onset AD.
The researchers developed a continuous polygenic hazard score or PHS based upon this data to predict age-specific risk of developing AD, then tested it in two independent cohorts or defined groups of people. They found persons in the top PHS quartile developed AD at a considerably lower age and had the highest yearly AD incidence rate. Importantly, PHS also identified people who were cognitively normal at baseline but eventually developed AD. Even among people who did not have the APOE E4 allele, the most important genetic risk factor for AD, PHS informed age of onset; individuals with high PHS scores developed AD 10-15 years earlier than individuals with low PHS.
The authors found that the PHS strongly predicted empirical age of AD onset and progression from normal aging to AD, and was strongly associated with neuropathology and biomarkers of AD neurodegeneration.
“From a clinical perspective, the polygenic hazard score provides a novel way not just to assess an individual’s lifetime risk of developing AD, but also to predict the age of disease onset,” said senior author Anders Dale, PhD, director of the Center for Translational Imaging and Precision Medicine and professor in neurosciences, radiology, psychiatry and cognitive science at UC San Diego School of Medicine. “Equally important, continuous polygenic testing of AD genetic risk can better inform prevention and therapeutic trials and be useful in determining which individuals are most likely to respond to therapy.”
The authors note several limitations to their study, beyond the need for broader and deeper validation studies. For example, their databases primarily represented individuals of European descent and thus are not indicative of AD incidence and genetic risk in other ethnicities, such as African-American or Latino.
“This limitation is an unfortunate product of available genetic studies. To have good predictive performance, the genetic risk score requires a large amount of data to train, but currently only European cohorts have reached this critical mass,” said co-first author Chun Chieh Fan, MD, in the Department of Cognitive Science at UC San Diego.
But “given the genome-wide association studies across ethnic populations that are emerging, the health disparities in the field of genetic prediction will be removed,” Fan added.
Co-authors include: Andrew Schork, Dominic Holland, Chi-Hua Chen, James B. Brewer, David S. Karow, Karolina Kauppi and Linda K. McEvoy, UCSD; Yunpeng Wang, UCSD, University of Oslo and Oslo University Hospital; Howard J. Cabral and L. Adrienne Cupples, Boston University School of Public Health; Wesley K. Thompson, Sankt Hans Psychiatric Hospital, Denmark; Lilah Besser and Walter A. Kukull, University of Washington; Aree Witoelar and Ole A. Andreassen, University of Oslo and Oslo University Hospital; Celeste M. Karch, Washington University, St. Louis; Luke W. Bonham, Jennifer S. Yokoyama, Howard J. Rosen, Bruce L. Miller, William P. Dillon, David M. Wilson and Christopher P. Hess, UCSF; Margaret Pericak-Vance, University of Miami; Jonathan L. Haines, Case Western Reserve University; Lindsay A. Farrer, Boston University School of Medicine; Richard Mayeux, Columbia University; John Hardy, University College London; Alison M. Goate, Icahn School of Medicine at Mount Sinai; Bradley T. Hyman, Massachusetts General Hospital; and Gerard D. Schellenberg, University of Pennsylvania Perelman School of Medicine.
Funding for this research came, in part, from the National Institutes of Health (NIH-AG046374, K01AG049152, R01MH100351), a National Alzheimer’s Coordinating Center Junior Investigator Award, a Radiological Society of North America resident/fellow award, a Foundation of the American Society of Neuroradiology Alzheimer’s imaging grant, the Research Council of Norway, the South East Norway Health Authority, the Norwegian Health Association and the KG Jebsen Foundation.
| A new genetic test can predict the age a person is likely to develop Alzheimer's and calculate a person's risk of developing the disease in a particular year, according to a study published Tuesday in PLOS Medicine. “For any given individual, for a given age and genetic information, we can calculate your ‘personalized’ annualized risk for developing [Alzheimer's]," study co-author Rahul Desikan says in a press release. The polygenic hazard score test was created using genetic data from 70,000 people and is based on 31 genetic markers, the Guardian reports. Scientists behind the study found that thousands of small genetic variations can add up to a substantial risk for Alzheimer's. People the test ranked in the top 10% for risk of Alzheimer's were more than three times as likely to develop the disease during the study. They developed Alzheimer's 11 years before those in the lowest 10% for risk (84 years old vs. 95 years old, on average). There's currently no treatment for Alzheimer's, but experts believe that when one is found, it will have to be administered very early on, the Telegraph reports. This new test could help doctors identify patients for treatment before it's too late. But one expert not involved in the study says genetics is only one part of determining Alzheimer's risk; exercise, diet, and mental activity level all play a part. (Sleeping late may be an early warning sign of dementia.) |
Between August 1994 and August 1996, enrollment in Medicare risk-contract health maintenance organizations (HMO) rose by over 80 percent (from 2.1 million to 3.8 million), and the number of risk-contract HMOs rose from 141 to 229. As managed care options become increasingly available to Medicare beneficiaries, the need for information that can help them make prudent health care decisions has become more urgent. The need for straightforward and accurate information is also important because in the past some HMO sales agents have misled beneficiaries or used otherwise questionable sales practices to get them to enroll. For most 65-year-olds, notice of coverage for Medicare benefits comes in the mail—a Medicare card from the Health Care Financing Administration (HCFA), which administers the Medicare program. Unless beneficiaries enroll in an HMO, HCFA automatically enrolls them in Medicare’s fee-for-service program. Medicare’s fee-for-service program, available nationwide, offers a standard package of benefits covering (1) hospitalization and related benefits (part A), with certain coinsurance and deductibles paid by the beneficiary, and (2) physician and related services (part B) for a monthly premium ($42.50 in 1996), a deductible, and coinsurance. Medicare part B coverage is optional, though almost all beneficiaries entitled to part A also enroll in part B. Many beneficiaries in the fee-for-service program enhance their Medicare coverage by purchasing a private insurance product known as Medigap. Medigap policies can cost beneficiaries $1,000 a year or more and must cover Medicare coinsurance. Some policies also cover deductibles and benefits not covered under Medicare such as outpatient prescription drugs. Medicare beneficiaries may enroll in a Medicare-approved “risk” HMO if available in their area. Such a plan receives a fixed monthly payment, called a capitation payment, from Medicare for each beneficiary it enrolls. The payment is fixed per enrollee regardless of what the HMO spends for each enrollee’s care. An HMO paid by capitation is called a risk-contract HMO because it assumes the financial risk of providing health care within a fixed budget. Although other types of Medicare managed care exist, almost 90 percent of Medicare beneficiaries now in managed care are enrolled in risk-contract HMOs. Compared with the traditional Medicare fee-for-service program, HMOs typically cost beneficiaries less money, cover additional benefits, and offer freedom from complicated billing statements. Although some HMOs charge a monthly premium, many do not. (Beneficiaries enrolled in HMOs must continue to pay the Medicare part B premium and any specified HMO copayments.) HMOs are required to cover all Medicare part A and B benefits. Many HMOs also cover part A copayments and deductibles and additional services—such as outpatient prescription drugs, routine physical exams, hearing aids, and eyeglasses—that are not covered under traditional Medicare. In effect, the HMO often acts much like a Medigap policy by covering deductibles, coinsurance, and additional services. In return for the additional benefits HMOs furnish, beneficiaries give up their freedom to choose any provider. If a beneficiary enrolled in an HMO seeks nonemergency care from providers other than those designated by the HMO or seeks care without following the HMO’s referral policy, the beneficiary is liable for the full cost of that care. 
Recently, Medicare allowed HMOs to offer a “point-of-service” (POS) option (also known as a “self-referral” or “open-ended” option) that covers beneficiaries for some care received outside of the network. This option is not yet widely available among Medicare HMOs. Managed care plans’ marketing strategies and enrollment procedures reflect Medicare beneficiaries’ freedom to move between the fee-for-service and managed care programs. Unlike much of the privately insured population under age 65, beneficiaries are not limited to enrolling or disenrolling only during a specified “open season”; they may select any of the Medicare-approved HMOs in their area and may switch plans monthly or choose the fee-for-service program. Thus, HMOs market their plans to Medicare beneficiaries continuously rather than during an established 30- or 60-day period. HMOs and their sales agents, not HCFA, enroll beneficiaries who wish to join a managed care plan. Most beneficiaries have access to at least one Medicare HMO, and more than 50 percent of beneficiaries have at least two HMOs available in their area. In some urban areas, beneficiaries can choose from as many as 14 different HMOs. Each HMO may be distinguished from its competitors by its coverage of optional benefits, cost-sharing arrangements, and network restrictions. As a practical matter, the number of choices is likely to be greater than the number of HMOs because a single HMO may offer multiple Medicare products, each with its own combination of covered benefits and premium levels. In February 1996, Senator Pryor, the Ranking Minority Member of the Senate Special Committee on Aging, asked us to examine issues related to the marketing, education, and enrollment practices of health plans participating in the Medicare risk-contract HMO program. Subsequently, he was joined by Committee Chairman Cohen and by Senators Grassley, Breaux, Feingold, and Wyden as corequesters. This report focuses on information that can help beneficiaries become discerning consumers. In particular, the report reviews (1) HCFA’s performance in providing beneficiaries comparative information about Medicare HMOs to assist their decision-making and (2) the usefulness of readily available data that could inform beneficiaries and caution them about poorly performing HMOs. Our study focused on risk-contract HMO plans, which (as of August 1996) enrolled almost 90 percent of Medicare beneficiaries enrolled in managed care. In conducting our study, we reviewed records at HCFA headquarters and regional offices and interviewed HCFA officials, Medicare beneficiary advocates, provider advocates, Medicare HMO managers, and representatives of large health insurance purchasing organizations. We also analyzed enrollment and disenrollment data from HCFA’s automated systems. In addition, we reviewed beneficiary complaint case files and observed certain HCFA oversight and education activities. Finally, we reviewed relevant literature. Our work was performed between October 1995 and August 1996 in accordance with generally accepted government auditing standards. (For further detail on our data analysis methodology, see app. I.) Though Medicare is the nation’s largest purchaser of managed care services, it lags other large purchasers in helping beneficiaries choose among plans. HCFA has responsibility for protecting beneficiaries’ rights and obtaining and disseminating information from Medicare HMOs to beneficiaries. HCFA has not yet, however, provided information to beneficiaries on individual HMOs. 
It has announced several efforts to develop HMO health care quality indicators. HCFA has, however, the capability to provide Medicare beneficiaries useful, comparative information now, using the administrative data it already collects. Unlike leading private and public health care purchasing organizations, Medicare does not provide its beneficiaries with comparative information about available HMOs. Other large purchasers of health care—for example, the Federal Employees Health Benefits Program, the California Public Employees’ Retirement System (CalPERS), Minnesota Medicaid, Xerox Corporation, and Southern California Edison—publish summary charts of comparative information such as available plans, premium rates, benefits, out-of-pocket costs, and member satisfaction surveys. Table 2.1 compares the information provided by HCFA and these other large health purchasers. A few purchasers also give enrollees information that helps them compare HMOs’ provision of services in such areas as preventive health and care of chronic illness. For example, CalPERS publishes the percentage of members in each plan who receive cholesterol screening, cervical and breast cancer screening, and eye exams for diabetics. Some purchasers also provide indicators of physician availability and competence, such as percentage of physicians accepting new patients, physician turnover, and percentage of physicians who are board certified. HCFA currently collects benefit and cost data in a standardized format from Medicare HMOs. HCFA’s professional staff use the data to determine that each HMO is providing a fairly priced package of Medicare services or that Medicare is paying a fair price for the services provided. HCFA could provide this benefit and cost information to beneficiaries with little additional effort. Using these data, HCFA’s regional office in San Francisco, on its own initiative, developed benefit and premium comparison charts 2 years ago for markets in southern and northern California, Arizona, and Nevada. However, distribution of these charts has been limited primarily to news organizations and insurance counselors. Beneficiaries may request the charts, but few do because HCFA does not widely publicize the charts’ existence. In fact, when we called a Los Angeles insurance counselor (without identifying ourselves as GAO staff) and asked specifically about Medicare HMO information, we were not told about the comparison charts. Recently, HCFA’s Philadelphia office began producing and distributing similar charts. While HCFA’s Office of Managed Care has been studying how to provide information to beneficiaries for a year and a half, the local initiatives in the San Francisco and Philadelphia offices demonstrate that HCFA could be distributing comparison charts to beneficiaries nationwide. Although HMOs provide beneficiaries information about benefits and premiums through marketing brochures, each plan uses its own terminology to describe benefits, premiums, and the rules enrollees must follow in selecting physicians and hospitals. Despite HCFA’s authority to do so, the agency does not require a standardized terminology or format for describing benefits. HCFA does review HMO marketing and informational materials to prevent false or misleading claims and to ensure that certain provider access restrictions are noted. HCFA has not ensured that HMO marketing materials are clear, however, because the agency does not require standard terminology or formats. 
For example, one plan’s brochure, to note its access restrictions, states that “. . . Should you ever require a specialist, your plan doctor can refer you to one” but never states that beneficiaries must get a referral before seeing a specialist. In addition, each HMO develops its own format to summarize its benefits and premiums. As a result, beneficiaries seeking to compare HMOs’ coverage of mammography services, for example, have to look under “mammography,” “X ray,” or another term, depending on the particular brochure. The length of some HMOs’ benefit summaries varies widely. For example, some brochures we received from the Los Angeles market, which has 14 Medicare HMOs, contain a summary of benefits spanning 14 pages; others have only a 1-page summary. Such diverse formats—without a comparison guide from HCFA—place the burden of comparing the HMOs’ benefits and costs exclusively on the beneficiary. To collect, distill, and compare HMO information would, in some markets, require substantial time and persistence (see figs. 2.1 and 2.2). First, beneficiaries would need to find and call a toll-free number to learn the names of available HMOs. This telephone number appears in the back of the Medicare handbook. However, the handbook generally is mailed only to individuals turning age 65 or to beneficiaries who specifically request it. Next, beneficiaries would have to contact each HMO to get benefit, premium, and provider network details. Finally, they would have to compare plans’ benefit packages and cost information without the benefit of standardized formats or terminology. This set of tasks is likely to be difficult for determined beneficiaries and may be too daunting for others. To test the difficulty of these tasks, we called all 14 Medicare HMOs in Los Angeles to request their marketing materials. After several weeks and follow-up calls, we had received information from only 10 plans. Some plans were reluctant to mail the information but offered to send it out with a sales agent. Declining visits from sales agents, we finally obtained the missing brochures by calling the HMOs’ marketing directors, identifying ourselves as GAO staff, and insisting that the marketing materials be mailed. The materials gathered show that beneficiaries in the Los Angeles market would have to sort through pounds of literature and compare benefits charts of 14 different HMOs. (See fig. 2.2.) Although HCFA has been studying ways to provide comparative benefits information nationwide since mid-1995, it has decided not to distribute printed information directly to beneficiaries. Instead, HCFA plans to make information on benefits, copayments, and deductibles available on the Internet. HCFA expects the primary users of this information to be beneficiary advocates, insurance counselors, and government entities—not beneficiaries. As of September 6, 1996, HCFA expected the information to be available electronically by June 1997—at the earliest. HCFA has a wealth of data, collected for program administration and contract oversight purposes, that can indicate beneficiaries’ relative satisfaction with individual HMOs. The data include statistics on beneficiary disenrollment and complaints. HCFA also collects other information that could be useful to beneficiaries, including HMOs’ financial data and reports from HCFA’s periodic monitoring visits to HMOs. As noted, however, HCFA does not routinely distribute this potentially useful information. 
Because of Medicare beneficiaries’ freedom to disenroll from managed care or change plans in any month, disenrollment data objectively measure consumer behavior toward, and indicate satisfaction with, a specific HMO. Disenrollments may be more reliable than some other satisfaction measures—such as surveys—because disenrollment data do not depend on beneficiary recollection. Enrollment and disenrollment data, although collected primarily to determine payments to HMOs, can be used to construct several useful indicators of beneficiary satisfaction, such as the annual disenrollment rate (total number of disenrollees as a percentage of total enrollment averaged over the year); the cancellation rate (percent of signed applications canceled before the effective enrollment date); the “rapid” disenrollment rate (percent of new enrollees who disenroll within 3 months); the “long-term” disenrollment rate (percent of enrollees who disenroll after 12 months); the rate of return to fee for service (percent of disenrollees who return to traditional Medicare rather than enroll in another HMO); and the retroactive disenrollment rate (percent of disenrollments processed retroactively by HCFA, typically in cases of alleged beneficiary misunderstanding or sales agent abuse). Disenrollment rates that are high compared with rates for competing HMOs can serve as early warning indicators for beneficiaries, HMOs, and HCFA. (See ch. 3 for a discussion on interpreting these indicators and an analysis of disenrollment rates for HMOs serving the Miami and Los Angeles markets.) Disenrollment rates have already been used to help measure membership stability and enrollee satisfaction in the Health Plan Employer Data and Information Set (HEDIS), developed by large employers, HMOs, and HCFA under the auspices of the National Committee on Quality Assurance (NCQA). However, HEDIS’ measure of disenrollment behavior is limited to a single indicator—an annual disenrollment rate. HCFA could perform a more extensive analysis of the disenrollment data available now. The relative volume of beneficiary complaints about HMOs is another satisfaction indicator that HCFA could readily provide beneficiaries. HCFA regional staff routinely receive beneficiary complaints of sales abuses, the unresponsiveness of plans to beneficiary concerns, and other more routine service and care issues. Regardless of the type of complaint, a comparison of the number of complaints per 1,000 HMO members can give beneficiaries a view of members’ relative satisfaction with area HMOs. Although some HCFA regional offices already track complaints through the Beneficiary Inquiry Tracking System, HCFA has no plans to make these data consistent across regions or provide beneficiaries complaint volume information. HCFA could readily report on various HMO financial indicators. Large employers and HMOs have already incorporated several financial indicators—such as plans’ total revenue and net worth—into the current Health Plan Employer Data and Information Set (HEDIS 2.5). HEDIS 2.5 also requires HMOs to report the percentage of HMO revenues spent on medical services—known to insurers as the medical “loss ratio.” Xerox Corporation, for example, publicizes medical loss ratios to help employees compare the plans it offers. In addition, federal law establishes loss ratio standards for Medigap insurers. HCFA routinely collects financial information from HMOs in standard formats it jointly developed with the National Association of Insurance Commissioners in the early 1980s. 
HCFA uses these data to monitor contracts for compliance with federal financial and quality standards. HCFA could also report the results of periodic visits to verify HMO contract compliance in 13 separate dimensions, such as health services delivery, quality, and utilization management; treatment of beneficiaries in carrying out such administrative functions as marketing, enrollment, and grievance procedures; and management, administration, and financial soundness. After each visit, HCFA records any noncompliance with standards but does not make these reports public unless a Freedom of Information Act request is made. In contrast, NCQA, a leading HMO accreditation organization, has begun distributing brief summaries of its site visit reports to the public. NCQA’s summaries rate the degree of HMO compliance on six different dimensions, including quality management and improvement, utilization management, preventive health services, medical records, physician qualifications and evaluation, and members’ rights and responsibilities. HCFA has authority to obtain and distribute useful comparative data on health plans. Although HCFA is not now providing these data to beneficiaries and the marketplace, it is studying several future options, including joint efforts with the private sector. Eventually, these efforts could yield comparative plan information on satisfaction survey results, physician incentives, measures of access to care, utilization of services, health outcomes, and other aspects of plans’ operations. The following are examples of these efforts: HCFA is developing a standard survey, through HHS’ Agency for Health Care Policy and Research, to obtain beneficiaries’ perceptions of their managed care plans. This effort aims to standardize surveys and report formats to yield comparative information about, for example, enrollees’ experiences with access to services, interactions with providers, continuity of care, and perceived quality of care. HCFA has been developing regulations since 1990 to address financial incentives HMOs give their physicians. HCFA’s regulations, published in 1996 and scheduled to be effective beginning in January 1997, will require HMOs to disclose to beneficiaries, on request, the existence and type of any physician incentive arrangements that affect the use of services. HCFA is working with the managed care industry, other purchasers, providers, public health officials, and consumer advocates to develop a new version of HEDIS—HEDIS 3.0—that will incorporate measures relevant to the elderly population. It is also working with the Foundation for Accountability (FAcct) to develop more patient-oriented measures of health care quality. The HEDIS and FAcct initiatives are aimed at generating more direct measures of the quality of medical care and may require new data collection efforts by plans. These initiatives may eventually provide Medicare beneficiaries with objective information that will help them compare available plans. However, HCFA could do more to inform beneficiaries today. For this reason, we stress the importance of such measures as disenrollment rates, complaint rates, and results of monitoring visits, which can be readily generated from information HCFA routinely compiles. Public disclosure of disenrollment rates could help beneficiaries choose among competing HMOs and encourage HMOs to do a better job of marketing their plans and serving enrollees. 
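Each of the disenrollment indicators defined earlier in this chapter reduces to a simple ratio over enrollment records. The sketch below shows one way they could be computed from annual counts; the function, field names, and sample figures are illustrative assumptions, not HCFA's systems or data.

```python
# Sketch: computing the disenrollment indicators defined earlier from
# annual counts a plan administrator might tally. The field names and
# sample figures are illustrative assumptions, not HCFA data.

def disenrollment_indicators(
    signed_applications: int,    # applications signed during the year
    cancellations: int,          # canceled before the effective enrollment date
    new_enrollees: int,          # enrollments that actually took effect
    rapid: int,                  # left within 3 months of enrolling
    long_term: int,              # left after more than 12 months enrolled
    total_disenrollees: int,     # all voluntary disenrollments in the year
    returned_to_ffs: int,        # disenrollees who went back to fee for service
    retroactive: int,            # disenrollments HCFA processed retroactively
    average_enrollment: float,   # enrollment averaged over the year
) -> dict[str, float]:
    return {
        "annual_rate": total_disenrollees / average_enrollment,
        "cancellation_rate": cancellations / signed_applications,
        "rapid_rate": rapid / new_enrollees,
        "long_term_rate": long_term / total_disenrollees,
        "return_to_ffs_rate": returned_to_ffs / total_disenrollees,
        "retroactive_rate": retroactive / total_disenrollees,
    }

# Hypothetical plan: 10,000 average enrollment, 3,000 applications signed.
print(disenrollment_indicators(3000, 150, 2850, 400, 600, 1700, 900, 80, 10000))
```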
Nonetheless, HCFA does not routinely compare plans’ disenrollment rates or disclose such information to the public. Because Medicare beneficiaries enrolled in HMOs can vote with their feet each month—by switching plans or returning to fee for service—comparing plans’ disenrollment rates can suggest beneficiaries’ relative satisfaction with competing HMOs. For this reason, we analyzed HCFA disenrollment data and found that Medicare HMOs’ ability to retain beneficiaries varies widely, even among HMOs in the same market. In the Miami area, for example, the share of a Medicare HMO’s total enrollment lost to voluntary disenrollment in 1995 ranged from 12 percent—about one in eight enrollees—to 37 percent—more than one in three enrollees. Although all HMOs experience some voluntary disenrollment, disenrollment rates should be about the same for all HMOs in a given market area if beneficiaries are about equally satisfied with each plan. An HMO’s disenrollment rate compared with other HMOs in the same market area, rather than a single HMO’s disenrollment rate, can indicate beneficiary satisfaction with care, service, and out-of-pocket costs. High disenrollment rates may result from poor education of enrollees during an HMO’s marketing and enrollment process. In this case, enrollees may be ill informed about HMO provider-choice restrictions in general or the operation of their particular plan. High disenrollment rates may also result from beneficiaries’ dissatisfaction with access or quality of care. Alternatively, high disenrollment rates may reflect a different aspect of relative satisfaction—beneficiaries’ awareness that competing HMOs are offering better benefits or lower premiums. While statistics alone cannot distinguish among these causes, a relatively high disenrollment rate should caution beneficiaries to investigate further before enrolling. Medicare beneficiaries voluntarily disenroll from their HMOs for a variety of reasons: many who leave are dissatisfied with their HMOs’ service, but others leave for different reasons. A 1992 study reported that 48 percent of disenrollees from Medicare HMOs cited dissatisfaction as their reason for leaving, 23 percent cited a misunderstanding of HMO services or procedures, and 29 percent cited some other reason—such as a move out of the HMO’s service area. Some commonly cited reasons beneficiaries disenroll include dissatisfaction with the HMO’s provision of care; not knowing they had joined an HMO; not understanding the HMO’s restrictions when they joined; reaching the HMO’s annual drug benefit limit and enrolling in a different HMO for continued coverage of prescription drugs; being attracted to a competing HMO offering lower premiums or more generous benefits; moving out of the HMO’s service area; and a personal physician who no longer contracts with the HMO. Health plans’ retention of their members varies widely, as illustrated by our analysis of these rates for the Miami and Los Angeles markets. (See fig. 3.1 for the names of these HMOs and their associated Medicare products.) For some HMOs, disenrollment rates were high enough to raise questions about whether the HMO’s business emphasis was on providing health care or on marketing to new enrollees to replace the many who disenroll. The voluntary disenrollment rates of the seven plans active in the Miami market for all of 1995 varied substantially as measured by the percentage of an HMO’s average Medicare enrollment lost to disenrollment. (See fig. 3.2.) 
PCA Health Plan of Florida’s (PCA) disenrollment rate reached 37 percent; two other HMOs (HIP Health Plan of Florida (HIP) and CareFlorida) had disenrollment rates of 30 percent or higher. In contrast, Health Options had a disenrollment rate of 12 percent. The remaining five plans had a median disenrollment rate of about 17 percent. To keep total enrollment constant, HMOs must replace not only those members who leave voluntarily, but also those members who die. Thus, PCA had to recruit new enrollees equal in number to 41 percent of its membership just to maintain its total enrollment count. The Los Angeles market, like Miami’s, showed substantial variation in HMOs’ disenrollment rates. (See fig. 3.3.) Los Angeles’ rates, in fact, varied slightly more than Miami’s. Foundation Health had the highest disenrollment rate (42 percent); Kaiser Foundation Health Plan (Kaiser) had the lowest (4 percent). Although reasons for disenrollment vary, beneficiaries who leave within a very short time are more likely to have been poorly informed about managed care in general or about the specific HMO they joined than those who leave after a longer time. Consequently, early disenrollment rates may better indicate beneficiary confusion and marketing problems than total disenrollment rates. Our analysis showed wide variation in plans’ early disenrollment rates. In our calculations we included both cancellations—beneficiaries who signed an application but canceled before the effective enrollment date—and “rapid disenrollment”—beneficiaries who left within 3 months of enrollment. In 1995, Medicare HMOs in the Miami market had cancellation rates of 3 to 8 percent, rapid disenrollment rates of 6 to 23 percent, and combined cancellations and rapid disenrollments of 9 to 30 percent. As figure 3.4 shows, nearly one in three beneficiaries who signed a CareFlorida application and more than one in five beneficiaries who signed a PCA application either canceled or left within the first 3 months. In contrast, only about 10 percent of Health Options’ and Prudential’s applicants left this early. In 1995, Medicare HMOs in the Los Angeles market had cancellation rates of 1 to 7 percent, rapid disenrollment rates of 4 to 22 percent, and combined cancellations and rapid disenrollments of 5 to 29 percent. As figure 3.5 shows, a few Los Angeles plans lost beneficiaries at a rate significantly higher than the market average, and a few performed notably better than the market average. The broad middle group of plans lost between about 9 and 14 percent of new applicants before the 3-month time frame. The substantial variation in early disenrollments suggests that some HMOs do a better job than others of representing their plans to potential enrollees. Two 1991 HHS Office of Inspector General (OIG) studies support this idea. According to the studies, about one in four CareFlorida enrollees did not understand that they were joining an HMO, and one in four did not understand that they would be restricted to HMO physicians after they enrolled. In contrast, only about 1 in 25 Health Options enrollees failed to understand these fundamentals. OIG reported that CareFlorida’s disenrollment rates among beneficiaries enrolled less than a year were the highest in the Miami market for the federal fiscal years 1988 and 1989. This pattern persists, as our analysis of 1995 early disenrollment data shows. 
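The early-loss and replacement arithmetic above can be made explicit. In this sketch, the formulas follow the definitions in the text; the sample numbers are rounded illustrations consistent with the 1995 figures reported, and the 4-percent death rate is an inference from PCA's 37-percent voluntary rate and 41-percent replacement burden, not a reported statistic.

```python
# Sketch of the early-loss and replacement arithmetic discussed above.
# Sample values are rounded illustrations, not exact plan-level data.

def early_loss_rate(cancellations: int, rapid_disenrollees: int,
                    signed_applications: int) -> float:
    """Share of applicants lost before, or within 3 months of, enrollment."""
    return (cancellations + rapid_disenrollees) / signed_applications

def replacement_rate(voluntary_rate: float, death_rate: float) -> float:
    """Share of membership a plan must recruit just to stay the same size:
    it must replace voluntary disenrollees and members who die."""
    return voluntary_rate + death_rate

# 37% voluntary disenrollment plus ~4% assumed member deaths implies
# recruiting new enrollees equal to ~41% of membership, as with PCA.
print(replacement_rate(0.37, 0.04))    # 0.41
# 80 cancellations and 220 rapid disenrollments among 1,000 applicants
# yields a 30% early loss, like the highest combined rate in Miami.
print(early_loss_rate(80, 220, 1000))  # 0.3
```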
Complaints to HCFA regional offices of beneficiary confusion primarily fall into one of two categories: (1) mistaking the HMO application for a Medigap insurance application and (2) not understanding that HMO enrollees are restricted to certain providers. Confusion, whether the result of beneficiary ignorance of Medicare’s HMO option or intentional misrepresentation by HMO sales agents, exposes beneficiaries to unanticipated health expenses. Beneficiaries may also face months of uncertainty about their insured status and which specific providers they must see to have their health expenses covered. A typical complaint, according to HCFA staff, involves beneficiaries who find themselves enrolled in an HMO when they thought they were signing up for a Medicare supplemental policy. For example, in February 1995, a husband and wife signed an application for a South Florida HMO. They continued using their former physicians, who were not with the HMO, and incurred 17 separate charges in May 1995 for a knee replacement, including related services and a hospital stay. When Medicare denied payment, the couple found they were enrolled in the HMO. The HMO also denied payment, so the couple disenrolled, through the HMO, effective May 31. Still facing unpaid claims, they contacted HCFA in mid-June and complained that the sales agent had “talked real fast” and misrepresented the HMO plan as supplemental insurance. They alleged he later told them they “didn’t read the fine print.” They complained that neither the government (Medicare) nor the sales agent explained the consequences of enrollment, and they would not have enrolled if they had known they would be giving up fee-for-service Medicare. In late July, HCFA retroactively disenrolled the couple and eventually paid their bills under fee-for-service Medicare. The HMO told HCFA that the sales agent had been terminated because of past concerns. Another leading category of complaints, according to HCFA staff, involves new HMO enrollees who do not understand HMO restrictions on access to care. In 1995, OIG reported that nearly one in four Medicare enrollees did not answer affirmatively when asked if they had a good knowledge from the beginning of how the HMO would operate, and one in four did not know they could appeal HMO denials of care they believed they were entitled to. Furthermore, 1 in 10 did not understand that they would need a referral from their primary care physician before they could see a specialist. The following complaint to HCFA about a Miami HMO illustrates beneficiary confusion over HMO restrictions. CareFlorida marketed its plan to an 81-year-old woman who subsequently enrolled in the plan effective February 1994, although she traveled regularly to a distant state. In her first months of membership, she visited her doctor, who was with the HMO. When she later visited a non-network physician who had also been her regular provider, Medicare denied her claims. She then requested to disenroll and told HCFA that if she had understood the requirement to visit specific providers, she would not have enrolled in the HMO. HCFA disenrolled the beneficiary from the plan effective with her use of non-network providers. This left her responsible for about $700 in out-of-plan charges. Other typical misunderstandings cited by HCFA staff and local insurance counselors include not understanding restrictions on access to specialists or other services, or restrictions to a specific medical group in an HMO’s provider network. 
Medicare regulations prohibit certain marketing practices, such as activities that mislead, confuse, or misrepresent; door-to-door solicitation; and gifts or payments used to influence enrollment decisions. These prohibitions are to help protect beneficiaries from abusive sales practices. Although HCFA staff could not measure the frequency of sales abuses, they expressed concern about continuing complaints of apparent abuses by sales agents. A recurring complaint, according to HCFA staff, is from beneficiaries whose signatures on enrollment forms are acquired under false pretenses. Many of these beneficiaries mistakenly believed that the form they signed—actually an enrollment form—was a request for more information or that it confirmed attendance at a sales presentation. In 1991, HCFA investigated the marketing practices of an HMO after receiving complaints and noting a high rate of retroactive disenrollments. The complaints alleged that sales agents were asking beneficiaries to sign a form indicating the agent had made a presentation. In fact, the document was an enrollment form. A recent case documented by HCFA staff is one in which at least 20 beneficiaries were inappropriately enrolled in an HMO after attending the same sales seminar in August 1995. The beneficiaries thought they were signing up to receive more information but later discovered the sales agent had enrolled them in the plan. In other cases, beneficiaries’ signatures were forged. In January 1995, for example, a beneficiary was notified by his medical group before an appointment that he was now enrolled in another plan. The beneficiary had no idea how this could be, as he had not intended to change plans. Though the beneficiary signs with an “X,” the new enrollment application was signed with a legible cursive signature. HCFA re-enrolled the beneficiary into his former plan but took no action against the plan or sales agent. HCFA’s failure to take effective enforcement actions and to inform beneficiaries allows problems to persist at some HMOs. Historically, HCFA has been unwilling to sanction the HMOs it cites for violations found repeatedly during site monitoring visits. In 1988, 1991, and 1995, we reported on the agency’s pattern of ineffective oversight of HMOs violating Medicare requirements for marketing, beneficiary appeal rights, and quality assurance. Table 3.2 illustrates the weakness of HCFA’s responses in addressing one Florida HMO’s persistent problems. In the absence of HMO-specific performance indicators, beneficiaries joining this HMO have no way of knowing about its problem-plagued history spanning nearly a decade. Our reports show that this is not an isolated example. Disenrollment and complaint statistics can help identify HMOs whose sales agents mislead or fail to adequately educate new enrollees. However, HCFA does not routinely and systematically analyze these data. HCFA has uncovered problems with HMOs’ sales operations during routine visits to monitor contract compliance or when regional staff have noticed an unusual number of complaints or disenrollments. The HHS OIG recently recommended that systematically developed disenrollment data be used in conjunction with surveys of beneficiaries to improve HCFA’s monitoring of HMOs. The OIG found that higher disenrollment rates correlated with higher beneficiary survey responses of poor service. Enrollees who said they got poor service and whose complaints were not taken seriously were more likely to come from HMOs with higher disenrollment rates. 
In contrast to the other surveyed HMOs, those with the five highest disenrollment rates were 1.5 times more likely to have beneficiaries report poor service (18 percent versus 12 percent). Although HCFA can identify HMOs with sales and marketing problems, it lacks the information to identify specific sales agents who might be at fault. HCFA does not routinely require HMOs to match disenrollment and complaint statistics to individual sales agents. In fact, HCFA made clear in 1991 that oversight standards for sales agents dealing with Medicare beneficiaries would be left largely to the states. States’ regulation and oversight of sales agents vary, although 32 states require HMO sales agents to be licensed. Representatives of the Florida Department of Insurance and its HMO monitoring unit said their oversight, beyond agent licensing, consisted of responding to specific complaints. One official commented that sales agents have to do something egregious to have their licenses revoked. HCFA’s HMO manual suggests specific practices that HMOs could employ to minimize marketing problems. These suggestions include verifying an applicant’s intent to enroll through someone independent of the sales agent, using rapid disenrollment data to identify agents whose enrollees have unusually high rates, and basing commissions and bonuses on sustained enrollment. HCFA staff said that some plans have implemented sales oversight like that suggested by HCFA, but others have not. Regional staff noted that plans are more likely to implement HCFA suggestions if they are trying to get approval for a contract application or service area expansion. Some HCFA regions have succeeded more than others in getting HMOs to improve their oversight of marketing agents. Publishing disenrollment data could encourage problem HMOs to reform their sales practices and more closely monitor their agents. Agents’ compensation often includes incentives such as commissions for each beneficiary they enroll. HMOs could structure their compensation to give agents a greater incentive to adequately inform beneficiaries about managed care in general and their plan in particular. For example, some HMOs pay commissions on the basis of a beneficiary’s remaining enrolled for a certain number of months. Several HMOs expressed concern that they did not know how their disenrollment rates compared with those of their competitors. Plan managers have told HCFA staff and us that comparative disenrollment information is useful performance feedback. Medicare HMOs do not compete on the basis of retention rates (low disenrollment rates) because these rates are not publicized. Publishing the rates would likely boost enrollment of plans with high retention rates and encourage plans with low retention rates to improve their performance. Millions of Medicare beneficiaries face increasingly complex managed care choices with little or no comparative information to help them. HCFA has not used its authority to provide comparative HMO information to help consumers, even though it requires standardized information for its internal use. As a result, information available to beneficiaries is difficult or impossible to obtain and compare. In contrast, other large purchasers—including the federal government for its employees—ease their beneficiaries’ decision-making by providing summary charts comparing plans. 
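Returning to the agent oversight discussed above: HCFA's HMO manual suggests using rapid disenrollment data to identify agents whose enrollees leave at unusually high rates. A minimal sketch of how a plan might implement such screening follows; the data layout, the minimum book-of-business cutoff, and the 2x threshold are all assumptions for illustration, not HCFA requirements.

```python
# Sketch of agent-level screening using rapid disenrollment data.
# The data layout, 20-enrollment minimum, and 2x threshold are assumptions.
from collections import defaultdict

def flag_agents(enrollments: list[tuple[str, bool]],
                factor: float = 2.0, min_sales: int = 20) -> list[str]:
    """enrollments: (agent_id, left_within_3_months) pairs for one plan.

    Flags agents whose rapid-disenrollment rate exceeds `factor` times the
    plan-wide rate, ignoring agents with too few sales to judge reliably.
    """
    plan_rate = sum(rapid for _, rapid in enrollments) / len(enrollments)
    counts = defaultdict(lambda: [0, 0])          # agent -> [sold, rapid]
    for agent, rapid in enrollments:
        counts[agent][0] += 1
        counts[agent][1] += rapid
    return [agent for agent, (sold, rapid) in counts.items()
            if sold >= min_sales and rapid / sold > factor * plan_rate]
```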
In addition, by not providing consumers with comparative information, Medicare fails to capitalize on market forces and complement HCFA’s regulatory approach to seeking good HMO performance. In an ideal market, informed consumers prod competitors to offer the best value. Without good comparative information, however, consumers are less able to determine the best value. HMOs have less incentive to compete on service to beneficiaries when satisfaction or other indicators of performance are not published. Wide distribution of HMO-specific disenrollment and other data could make Medicare’s HMO markets more like an ideal market and better ensure that consumers’ interests are served. HCFA could also make better use of indicators to improve its oversight of HMOs. By establishing benchmarks and measuring HMOs’ performance against them, HCFA could focus on plans whose statistics indicate potential problems—for example, on HMOs with high disenrollment rates. In August 1995, we recommended that the Secretary of HHS direct the HCFA Administrator to develop a new, more consumer-oriented strategy for administering Medicare’s HMO program. One specific recommendation called for HCFA to routinely publish (1) the comparative data it collects on HMOs and (2) the results of its investigations or any findings of noncompliance by HMOs. Although HCFA has announced plans to gather new data, it has no plans to analyze and distribute to beneficiaries the data on HMOs it currently collects. Therefore, we are both renewing our previous recommendations and recommending specific steps that the Secretary of HHS should take to help Medicare beneficiaries make informed health care decisions. The Secretary should direct the HCFA Administrator to require standard formats and terminology for important aspects of HMOs’ informational materials for beneficiaries, including benefits descriptions; require that all literature distributed by Medicare HMOs follow these formats and terminology; produce benefit and cost comparison charts covering all Medicare options available in each market area; and widely publicize the availability of the charts to all beneficiaries in markets served by Medicare HMOs and ensure that beneficiaries considering an HMO are notified of the charts’ availability. The Secretary should also direct the HCFA Administrator to annually analyze, compare, and widely distribute (1) HMOs’ voluntary disenrollment rates (including cancellations, disenrollment within 3 months, disenrollment after 12 months, total disenrollment, retroactive disenrollment, and rate of return to fee for service); (2) the rate of inquiries and complaints per thousand enrollees; and (3) summary results of HCFA’s monitoring visits. HHS agreed that “Medicare beneficiaries need more information and that informed beneficiaries can hold plans accountable for the quality of care.” HHS noted several HCFA initiatives that will eventually yield information to help beneficiaries choose plans right for their needs. We believe that these initiatives move in the right direction but that HCFA could do more for beneficiaries with information the agency already collects. The full text of HHS’ comments appears in appendix III. HHS outlined HCFA’s efforts to produce HMO comparison charts that will initially contain HMO costs and benefits and later may also include other plan-specific information—such as the results of HMOs’ satisfaction surveys. HCFA expects advocates and insurance counselors, not beneficiaries, to be the primary users of this information. 
HCFA plans to make the charts “available to any individual or organization with electronic access.” Information in an electronic form can easily be updated—a distinct advantage in a market that is evolving as quickly as Medicare HMOs. Providing the information in an electronic format, however, rather than in print, may make it less accessible to the very individuals who would find it useful. HHS noted that HCFA is developing the “National Managed Care Marketing Guideline,” partly in response to beneficiary complaints of confusion and misunderstanding caused by Medicare HMOs’ marketing practices. The guideline, to be implemented beginning in January 1997, will detail specific content areas to be covered in all Medicare HMO marketing materials. The guideline, as currently drafted, however, will not require standard formats or terminology and thus may not alleviate many of the difficulties beneficiaries now face when comparing HMOs’ marketing materials. Regarding our recommendation that disenrollment data be made available to beneficiaries, HHS stated that HCFA is evaluating different ways to express and present disenrollment rates. HHS cautioned that a careful analysis of disenrollment is necessary before meaningful conclusions can be drawn. We did not find such an analysis to be difficult or overly time-consuming. Our recommendation is to publish disenrollment rates and let beneficiaries decide if, as we found in Los Angeles, a 42-percent annual disenrollment rate is meaningful in a market where competing HMOs have disenrollment rates of 4 percent. In short, HHS stated that HMO-specific information currently collected by HCFA could not be made publicly available until additional evaluation, data analysis, or development of data systems is complete. Even after this work is completed, however, the agency has no plans to distribute HMO-specific information directly to beneficiaries or ensure that they know such information is available. Thus, although HHS stated that one of HCFA’s highest priorities is that beneficiaries “receive timely, accurate, and useful information about Medicare,” HCFA has no plans to ensure that beneficiaries interested in HMOs receive any comparative information. | Pursuant to a congressional request, GAO reviewed the marketing, education, and enrollment practices of health maintenance organizations (HMO) participating in the Medicare risk-contract program, focusing on whether: (1) the Health Care Financing Administration (HCFA) provides Medicare beneficiaries with sufficient information about Medicare HMO; and (2) available HCFA data could be used to caution beneficiaries about HMO that perform poorly. 
GAO found that: (1) HCFA does not provide beneficiaries any of the comparative consumer guides that federal government and many employer-based health insurance programs routinely provide to their employees and retirees; (2) Medicare beneficiaries seeking similar information face a laborious, do-it-yourself process which includes calling to request area HMO names and telephone numbers, calling each HMO to request marketing materials, and attempting to compare plans from HMO brochures that may not use the same format or standardized terminology; (3) HCFA collects volumes of information that could be packaged and distributed to help consumers choose between competing Medicare HMO and also compiles data regarding HMO disenrollment rates, enrollee complaints, and certification results; (4) HCFA is developing comparison charts that will contain information on the benefits and costs for all Medicare HMO, but plans to post the charts in electronic format on the Internet rather than distribute them to beneficiaries; and (5) HCFA provision of information on HMO disenrollment rates may be particularly useful in helping beneficiaries to distinguish among competing HMO, since beneficiaries could then ask HMO representatives questions and seek additional information before making an enrollment decision. |
Evacuees escaping the floodwaters from Tropical Storm Harvey rest at the George R. Brown Convention Center that has been set up as a shelter in Houston, Texas, Tuesday, Aug. 29, 2017. (AP Photo/LM Otero)
HOUSTON (AP) — The Latest on Tropical Depression Harvey (all times local):
4:15 a.m.
Beaumont, Texas, has lost its water supply because of Harvey.
Officials there say the city has lost service from its main pump station due to rising waters of the Neches River caused by Harvey.
The pump station is along the river and draws water from it as a main source for the city's water system.
The officials added in their statement early Thursday that the city has also lost its secondary water source at the Loeb wells in Hardin County. They say there's no water supply for Beaumont's water system at this time.
They say they must wait until the water levels from Harvey recede before determining the extent of damage.
___
1:20 a.m.
Major dangers loomed for the U.S. Gulf Coast on Wednesday: the threat of more major flooding farther east near the Texas-Louisiana line and an explosion at a Texas chemical plant, even as Harvey's floodwaters began receding in the Houston area after five days of torrential rain.
As the water receded, Houston's fire department said it would begin a block-by-block search Thursday of thousands of flooded homes. The confirmed death toll climbed to at least 31 on Wednesday, including six family members — four of them children — whose bodies were pulled Wednesday from a van that had been swept off a Houston bridge into a bayou.
Another crisis related to Harvey emerged at a chemical plant about 25 miles (40 kilometers) northeast of Houston. A spokeswoman for the Arkema Inc. plant in Crosby, Texas, said late Wednesday that the flooded facility had lost power and backup generators, leaving it without refrigeration for chemicals that become volatile as the temperature rises.

___

(CNN) Days after Harvey struck, Houston Mayor Sylvester Turner struck an optimistic tone on Thursday, declaring the city "is open for business." The mayor and other officials pointed to small signs of recovery, such as fewer people in shelters, more bus lines resuming and the city's shipping channel reopening on a limited basis.
The mayor said parts of Houston still face flooding issues because of standing water but the rest of the city is drying out. Traffic is returning to the roadways and power has been restored to much of the region. And the Houston Astros will play a doubleheader at home on Saturday, Turner said.
"We are turning the corner," he said.
Turner added: "The city of Houston is open for business. And quite frankly, we're open for business right now."
But flood-stricken southeast Texas was still struggling with a new series of blows that left one city without running water, the operators of a flood-damaged chemical plant warning of additional fires and at least one hospital unable to care for patients.
Nearly a week after Hurricane Harvey slammed into the Texas coast, desperate residents remain stranded without food and water in the wake of unprecedented flooding. Meanwhile, authorities continue searching for survivors and made helicopter rescues from rooftops as the death toll from Harvey climbed to at least 47.
Given the disaster's scope, the commanding officer who led the federal response to Hurricane Katrina a dozen years ago questioned the adequacy of current relief efforts.
"When you have a combination of hurricane winds, flooding now for five days and you start losing the water and the electric grid, this is a game changer," retired Lt. Gen. Russel Honoré told CNN on Thursday.
"Losing electricity itself is a disaster for over a 24-hour period in America to any person because we lose access to water, we lose access to sewers, we lose our ability to communicate."
Janice Forse cries at a shelter Wednesday in Beaumont after her home was flooded.
The dangers emerging from the historic storm seem to increase by the day.
Beaumont, east of Houston, has no running water after both its water pumps failed, forcing a hospital to shut down. City officials could not say when service would be restored.
In Crosby, two blasts rocked a flooded chemical plant, and more could come.
And in Houston, authorities started going door to door looking for victims, hoping to find survivors but realizing that the death toll could rise.
Rainfall totals would fill Houston Astrodome 85,000 times
The storm dumped an estimated 27 trillion gallons of rain over Texas and Louisiana over six days, said Ryan Maue, of the weather analytics company WeatherBell. That's enough to fill the Houston Astrodome 85,000 times or San Francisco Bay 10.6 times at high tide.
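The Astrodome comparison is easy to sanity-check with unit conversion. In this sketch, the dome's interior volume is an assumed figure (a commonly cited estimate, not a number from the article):

```python
# Quick sanity check of the rainfall comparison above. The Astrodome's
# interior volume is an assumed, commonly cited estimate, not a figure
# from the article.
GALLONS_TOTAL = 27e12              # estimated rainfall over Texas and Louisiana
CUBIC_FEET_PER_GALLON = 0.133681   # one U.S. gallon in cubic feet
ASTRODOME_CUBIC_FEET = 41e6        # assumed interior volume of the dome

fills = GALLONS_TOTAL * CUBIC_FEET_PER_GALLON / ASTRODOME_CUBIC_FEET
print(round(fills))  # ~88,000 fills, the same order as the article's 85,000
```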
"We will see additional losses of life, if history is any precedent here," Tom Bossert, homeland security adviser to President Donald Trump, told reporters Thursday.
The storm has damaged or destroyed about 100,000 homes, Bossert said.
Trump plans to donate $1 million of his money to help storm victims, according to the White House.
"You should continue to have confidence in what we're doing as a government," Bossert said. "But I would be remiss if I didn't stop and say that none of that matters if you're an affected individual."
FEMA reported Thursday that more than 96,000 people in Texas have been approved for emergency assistance, including financial aid for rent and lost property. More than $57 million has already been distributed for housing, personal property and transportation assistance.
In the hard-hit city of Rockport, Vice President Mike Pence on Thursday addressed residents outside a church.
"President Trump sent us here to say, 'We are with you. The American people are with you,'" said Pence, who later announced that Trump will visit Houston and other areas on Saturday.
Company warns of more blasts
A pair of blasts at the Arkema chemical plant in Crosby sent plumes of smoke into the sky Thursday morning, and the company warned more blasts could follow.
"We want local residents to be aware that product is stored in multiple locations on the site, and a threat of additional explosion remains," Arkema said . "Please do not return to the area within the evacuation zone until local emergency response authorities announce it is safe to do so."
Overheated organic peroxides caused blasts at a chemical plant in Crosby.
The twin blasts Thursday morning happened after organic peroxide overheated. The chemicals need to be kept cool, but the temperature rose after the plant lost power, officials said.
Containers popped. One caught fire and sent black smoke 30 to 40 feet into the air.
The thick smoke "might be irritating to the eyes, skin and lungs," Arkema officials said.
Fifteen Harris County sheriff's deputies were hospitalized, but the smoke they inhaled was not believed to be toxic, the department said. The deputies have all been released.
Harris County Sheriff Ed Gonzalez said nothing toxic was emitted and there was no imminent danger to the community.
Three other containers storing the same chemical are at risk of "overpressurization," said Jeff Carr of Griffin Communications Group, which is representing Arkema.
Arkema shut down the facility as Harvey approached last week. The company evacuated everyone within 1.5 miles of the plant as a precaution after it was flooded under more than 5 feet of water.
The company has said there's a small possibility the organic peroxide, which is used in the production of plastic resins, could seep into floodwaters without igniting or burning.
Harvey forced the shutdown of many chemical or oil plants, including the Colonial Pipeline, which carries huge amounts of gasoline and other fuel between Houston and the East Coast. Valero and Motiva, the largest refinery in the country, have also closed some facilities.
'People are freaking out' in Beaumont
Extreme flooding caused both of Beaumont's water pumps to fail, leaving about 135,000 people without water on Thursday, said Jefferson County Judge Jeff Branick.
"We will have to wait until the water levels from this historical flood recede before we can determine the extent of damage and make any needed repairs," the city said. "There is no way to determine how long this will take at this time."
City officials plan to establish a water distribution point on Friday.
Meanwhile, earlier Thursday, residents lined up at stores hours before they opened in hopes of getting whatever bottled water they could find.
Standing in line in Beaumont Tx at Market Basket to buy #water. It's 7am, they open at 9. pic.twitter.com/cFqpxT67Rr — Maggi Carter (@maggicarter) August 31, 2017
"It's crazy," said Khayvin Williams, who started waiting in line at Market Basket at 6:50 a.m. "People are freaking out."
At a local Walmart, Jeffrey Farley said the store was only allowing 20 people in at a time and was rationing water to three cases per customer. He got in line at 6:30 a.m. and waited until 8:30 to get his water.
"It's an insult to injury for a lot of folks," Farley said. "The water situation has made things dire for everyone here."
Beaumont, along with Port Arthur, was devastated after Harvey made another landfall Wednesday.
The failure of the city's water pumps forced the closure of Beaumont-based Baptist Hospitals of Southeast Texas.
"Due to the citywide lack of services, we have no other alternative but to discontinue all services, which will include emergency services," the hospital system said Thursday.
Patients in stretchers and wheelchairs were evacuated to other hospitals by ambulance and helicopter.
"We had no idea when we went to bed at midnight that ... we'd get the call that says the hospital would need to think about the city's water being lost," hospital spokeswoman Mary Poole said. "We did not expect that and that's a game changer for us."
About 20 miles southeast of Beaumont, in Port Arthur, those lucky enough to reach an evacuation shelter were deluged again when murky brown floodwater filled the building.
Evacuees at a Port Arthur emergency shelter battle flooding once more after leaving their homes.
Actress Amber Chardae Robinson, speaking by phone from Beaumont, said getting out of Port Arthur was virtually impossible.
"Every avenue we use to get out of the city is flooded -- to get to Houston is flooded, to get to Louisiana is flooded," she said. "So people are just trying to figure out ways to get their family out of there at this point."
In most of Orange County, east of Beaumont, a mandatory evacuation order was issued Thursday afternoon by Judge Stephen Brint Carlton. The order primarily involved areas along the Neches and Sabine rivers.
Death toll expected to rise
Across the state, families are searching tirelessly for missing relatives six days after Harvey first pummeled the Texas coast.
(Photo gallery: Hurricane Harvey slams Texas. 74 images documenting the storm and flooding from its August 25 landfall near Port Aransas through August 30.)
More than 72,000 people have been rescued so far, according to officials.
Among the storm-related deaths were a Houston man who was electrocuted while walking in floodwaters and a mother whose body was found floating about a half mile from her car. Rescuers found her daughter clinging to her body. The child is in stable condition after suffering from hypothermia.
"We just pray that the body count ... won't rise significantly," Houston Police Chief Art Acevedo said Wednesday.
But Houston received a bit of good news Thursday. The pool level at Barker Reservoir -- which officials feared would overflow -- has peaked and is going down, the Army Corps of Engineers said.
And the city's Addicks Reservoir, which was overwhelmed and caused widespread flooding this week, has also peaked. The water in that reservoir is also receding.
In a light moment amid the destruction, Houston police posted a video Thursday of an officer's daughter serenading him on his birthday.
When #HurricaneHarvey keeps you from home on your birthday, you're serenaded over the phone by your daughter #HoustonStrong pic.twitter.com/ldHDHeBaaI — Houston Police (@houstonpolice) August 31, 2017
But in Victoria, about 120 miles southwest of Houston, Mary Martinez returned to her heavily damaged home Wednesday.
"I did not think it was going to be this bad," said Martinez, who received assistance from volunteers with Christian charity Samaritan's Purse. "I was speechless."
Man tried to warn friend away from electrical wire
Countless stories of heroism have emerged in the aftermath of Harvey, some of them involving the victims themselves.
Andrew Pasek was walking through 4 feet of water trying to get to his sister's house when he accidentally stepped on a live electrical wire.
"He felt the charge and knew something was wrong right away and tried to shake it off right away," said his mother, Jodell.
The 25-year-old quickly warned a friend not to touch him: "because if you do, you know, you will go, too."
Pasek was electrocuted. His mother said no one could try to resuscitate him for an hour, until the electricity was turned off.
"It could have been anybody," she said. ||||| Now: After Harvey, Houstonians eye long road to recovery
Rikki Saldivar goes through old family photos at a house that belonged to her grandparents, Tuesday, Sept. 5, 2017, in Houston. Saldivar's grandparents and four young relatives drowned in a van in Greens Bayou during Hurricane Harvey. (Photo: Jon Shapley, Houston Chronicle)
WEDNESDAY, Sept. 6
12:35 p.m. Bar expands LegalLine for Harvey victims
The Houston Bar Association has expanded its LegalLine to assist those affected by Hurricane Harvey and set up a toll-free line for Texans outside the Houston area, the group said in a news release.
Volunteer attorneys will answer phone calls from 3 p.m. to 5 p.m., Monday through Friday, through Sept. 29. Extended LegalLine hours will be available from 3 p.m. to 9 p.m. on Wednesday Sept. 6 and Sept. 20.
Those seeking answers to legal questions or referrals may call 713-759-1133 or 1-866-959-1133.
The HBA's Houston Volunteer Lawyers is working with Lone Star Legal Aid to coordinate legal aid for low-income persons affected by Harvey.
Information: www.makejusticehappen.org/Harvey
12:20 p.m. VA deploys mobile vet centers, medical units to Houston area
The VA has deployed five mobile vet centers, three mobile medical units, one mobile pharmacy and one mobile canteen to greater Houston and other areas affected by Tropical Storm Harvey. The units will offer medical care, pharmacy assistance, counseling services and benefits referral from Wednesday, Sept. 6, to Sept. 30. The hours are 9 a.m. to 6 p.m.
They are located at NRG Arena, 1 NRG Park (mobile vet center); American Legion Post 658, 14890 FM 2100, Crosby (mobile vet center); Silsbee High School, 1575 U.S. 96 North, Silsbee (mobile medical unit); the Beaumont VA Outpatient Clinic, 3420 Plaza Circle, Beaumont (mobile medical unit, vet center and canteen); the Lone Star Veterans Association, 2929 McKinney St., Houston (mobile medical unit, vet center and pharmacy); and Wal-Mart, 23561 U.S. 59, Porter (mobile vet center).
Veterans may also call the Telecare Call Center at 1-800-639-5137 or 713-794-8985 for medical issues or questions.
11:50 a.m. House approves $7.9 billion initial aid package for Harvey losses
The U.S. House of Representatives voted overwhelmingly Wednesday to provide $7.9 billion in aid to address losses from Hurricane Harvey, a move that could be paired with legislation to increase the federal government's borrowing limit.
The initial aid package, approved 419-3, is bigger than the amount floated by the White House over the weekend when President Donald Trump made his second trip to Texas in the wake of the storm. But divisions remain among House and Senate Republicans about tying the aid to the debt-limit increase.
The Senate is expected to attach the money to a debt-limit vote later this week. Conservatives in the House and Senate, including Texas U.S. Sen. Ted Cruz, have voiced concern about linking the two votes, which Cruz called "unrelated matters."
Senior Texas U.S. Sen. John Cornyn, the Republican Majority Whip, said he supports the plan as a way to immediately replenish needed funds for the Federal Emergency Management Agency (FEMA).
11:45 a.m. Postal service continuing its Harvey recovery
The U.S. Postal Service says it's continuing its recovery from Hurricane Harvey.
"We're open for business and delivering where it's accessible and safe to do so," the postal service said in a statement Wednesday morning.
The following offices have resumed normal operations: Brazoria, Lumberton, Sweeny and Thompsons, the postal service said in a news release on Tuesday. All offices in the Houston District have resumed normal operations except for Bear Creek, Deweyville, downtown Beaumont, Glen Flora, Katy, Mauriceville, Nome, Orange and Stowell. The operations for these offices have shifted to other locations.
The postal service urges customers in affected areas to check its website for updates on service interruptions. Updates on service alerts may be found at: http://about.usps.com/news/service-alerts/resident-weather-updates.htm
Those interested in information about a specific post office may call 1-800 ASK-USPS.
11:25 a.m. Police look for man who went missing during storm
Houston police are looking for a 44-year-old man who went missing as he tried to drive to work on Aug. 26, as Tropical Storm Harvey was moving into the Houston area.
Police said Joseph Dowell left the 5600 block of Kennilwood at around 2:30 p.m. but never made it to work. He is described as a bald African-American man who is 5 foot 9 and 190 pounds. Anyone with information is asked to call the missing persons unit at 832-394-1840.
11:12 a.m. H-E-B family member donates $5M to J.J. Watt's relief fund
A member of the H-E-B family has announced a major donation to Houston Texans star J.J. Watt's Houston Flood Relief Fund.
H-E-B chairman and CEO Charles Butt will deliver a personal $5 million contribution to the Justin J. Watt Foundation's fund, which has collected in excess of $21 million. It appears to be one of the largest personal contributions so far toward Hurricane Harvey relief efforts.
Watt's fund started last week with a modest total of $100,000 and has now become a global effort with contributions by fellow sports figures, celebrities, business owners and regular people chipping in what they can.
11:10 a.m. Nonprofit focuses on children's needs after Harvey
The nonprofit group Children at Risk was to host a meeting with more than 20 area nonprofit leaders on Wednesday morning to discuss Hurricane Harvey recovery efforts.
"As we rebuild our homes, schools and communities, it is imperative to focus on our most vulnerable residents – children," the group said in a news release. "The leaders have identified 6 key areas that Texas must be aware of as we move forward with recovery efforts."
11:01 a.m. Abbott says no hazardous waste sites in Houston area found leaking so far
AUSTIN — Gov. Abbott said Tuesday that an inspection of hazardous waste sites and landfills in the Houston area has found no evidence so far of any leakage or health threats.
At a morning briefing with reporters at the state's Emergency Operations Center, Abbott said five of 17 state sites have been inspected and show no signs of leakage or other issues so far. He said one site, International Creosote, remains flooded.
Eleven others are awaiting inspection, officials said. Abbott said inspections are continuing at the state sites.
Abbott said the Texas Commission on Environmental Quality is working with federal Environmental Protection Agency officials to closely monitor the sites for any problems.
MONDAY, Sept. 4
9:45 p.m. 500 homes "destroyed" in south Montgomery County subdivision
Five hundred homes were "destroyed" by Hurricane Harvey in the south Montgomery County subdivision of River Plantation, said Montgomery County Precinct 2 Commissioner Charlie Riley.
Riley said volunteers, county employees and law enforcement are conducting relief efforts there, and members of the Federal Emergency Management Agency are helping victims get federal aid.
9:20 p.m. Addicks, Barker reservoirs continue to drop
Addicks and Barker reservoirs continued to drop Monday night. Flooding of homes behind the dams was expected to end in the coming days.
Addicks reservoir's pool was down to 105.65 feet. At 103.4 feet, all house flooding should end, said Harris County Flood Control District meteorologist Jeff Lindner.
Barker's level was at 97.99 feet. At 94.9 feet, flooding behind Barker should end, Lindner said.
8:05 p.m. Two Harris County flood victims identified
The Harris County medical examiner on Monday identified two victims of last week's Harvey-related flooding. They had been included previously in the number of confirmed dead in the county, which now stands at 30.
The first victim is Charles Ray James, 65, who was found floating in high waters on a residential street in the 7400 block of Claiborne on Thursday. The second is Samuel Lawrence Burns, Sr., 62, who was found Wednesday in the 4900 block of Airline Drive after apparently collapsing in high water.
More than 60 people in the Houston area and other parts of the state have died since Hurricane Harvey made landfall on Aug. 25 and unleashed record flooding.
7:50 p.m. Harris County warns of heavy traffic, signal delays during Tuesday's commute
Harris County officials warned that road closures and traffic signal timings would likely create a significant traffic headache Tuesday morning.
Houston area roads are expected to see the highest number of cars Tuesday since Hurricane Harvey made landfall Aug. 25.
There are still several roads closed in Harris County including State Highway 6, Barker Cypress Road, North Eldridge Parkway and Clay Road all north of I-10. Parts of Westheimer Parkway, South Barker Cypress Road and other roads near the Harris County border with Fort Bend County are also closed.
Traffic signals will have special timing schedules to accommodate traffic patterns altered by road closures.
7:25 p.m. Houston mayor to decide on curfew Tuesday
Houston Mayor Sylvester Turner said he would make a decision on whether to extend a citywide curfew on Tuesday.
The curfew prohibits people from leaving their homes between 12 a.m. and 5 a.m. It is in effect Monday night.
Turner will announce on Tuesday whether it will remain in effect Tuesday night and beyond.
"Persons involved in flood relief efforts, flood relief volunteers, individuals seeking shelters, first responders, and persons going to and from work (late shift workers) are exempt from the curfew," according to the city.
6:45 p.m. Brazoria County bans recreational boating on Brazos, San Bernard Rivers
Brazoria County Judge Matt Sebesta issued an order Monday banning recreational boating on the Brazos River, San Bernard River, Oyster Creek and Bastrop Bayou.
The order was signed in "an effort to minimize damage to real and personal property of others and for public safety during this disaster," according to a county notice. It would remain in effect until Sebesta rescinds the order or the disaster declaration for the county is removed.
The Brazos had retreated to moderate flood stage earlier in the day, days after reaching a record height.
6 p.m. Army Corps reducing releases from Barker dam
The U.S. Army Corps of Engineers began reducing the amount of water released from Barker reservoir Sunday night, a move that continued Monday and promised to reduce widespread flooding in west Houston along Buffalo Bayou downstream of Addicks and Barker dams.
As of Monday evening, Barker reservoir was releasing between 5,000 and 6,000 cubic feet per second of water, down from 6,300 CFS in the days before.
Once the Barker releases get down to 4,000 CFS, the Corps will begin decreasing releases from Addicks "until we have our releases back below the high banks of Buffalo Bayou," said Richard Long, natural resource management specialist at the Corps' Galveston District.
The goal, eventually, is to get releases down to 4,000 CFS combined from Addicks and Barker dams.
5:10 p.m. Face recognition software matching pictures of lost pets with those found in shelters
People who lost their pets during Hurricane Harvey can upload the pet's picture into an app that will use facial recognition to match the pictures with animals checked into shelters in recent days.
Photos of animals arriving in shelters are being uploaded into a database.
If someone has lost their pet, they can upload a picture of the pet to findingrover.com or the Finding Rover smartphone app.
An algorithm will try to make a match and provide the pet's potential location. The service is free for shelters and pet owners.
Our Mihir Zaveri has the full story here.
5 p.m. FEMA extends grace period for paying flood insurance premiums
If your flood insurance premium payments were due between July 24 and Sept. 22, and you live in a county that was included in a presidential disaster declaration after Hurricane Harvey, the Federal Emergency Management Agency has extended a grace period by which you can pay your premium.
The grace period is now 120 days, according to FEMA. That means you won't lose coverage if you can't make your payment right away, up to 120 days from when it's due.
"FEMA wants to ensure that policyholders affected by flooding caused by Hurricane Harvey can focus on their immediate needs, begin to recover, and continue to have flood insurance coverage in the event of additional flooding," the agency stated in a notice Monday.
The agency also made it easier to receive payments faster and waived some paperwork necessary to process claims.
Read more here.
4:45 p.m. City of Houston: Cleaning after Hurricane Harvey may pose health hazards
People 7 years of age and older with cuts or other wounds should get a tetanus shot and see a doctor if they were exposed to Hurricane Harvey's floodwaters, city officials warned.
Officials said residents should use special N-95-rated dust masks when cleaning moldy homes that took on floodwater.
Surfaces should be washed with soap and clean, warm water and sanitized with bleach.
Any standing water should be drained so disease-carrying mosquitoes cannot breed, the officials warned.
3:30 p.m. FEMA registrations up to more than 550,000
The Federal Emergency Management Agency said Sunday it has received more than 550,000 applications for aid in the aftermath of Tropical Storm Harvey.
FEMA has so far granted applications for 176,000 people. Aid totals include $50 million for housing assistance, including rental assistance, and $91 million to replace personal property, pay for transportation and cover medical and dental costs.
The agency also opened a disaster recovery center at the George R. Brown Convention Center that will be open from 7 a.m. to 7 p.m., seven days a week until further notice. Those affected can apply for assistance, get a status update on an application, speak to a FEMA representative, or discuss a low-interest disaster loan with a Small Business Administration representative.
3:15 p.m. Harris County begins debris removal
Harris County began removing debris from outside of people's homes Monday.
Officials said residents should place debris curbside "without blocking roadway or storm drains" and sort it into the following categories: vegetative debris, construction and demolition debris, appliances and electronics.
Debris that won't be picked up includes normal household trash and hazardous waste.
Those with questions are encouraged to call (713) 274-3880 or can find more information here.
3:02 p.m.: Gov. Abbott asks that 7 counties be added to presidential disaster area
Gov. Greg Abbott asked that seven counties be added to the Federal Disaster Declaration previously granted by FEMA, bringing the total request to 43 counties.
The new additions are Austin, Bastrop, DeWitt, Gonzales, Karnes, Lavaca and Lee counties.
11:48 a.m. East of Houston, death toll rises to 17
The Southeast Texas death toll from Harvey rose to 17 on Monday, the Beaumont Enterprise reports.
Newton County Sheriff Billy Rowles said a Deweyville man and a Newton woman have died in his county due to floodwaters.
Orange County officials said Sunday there have been nine storm-related deaths in the county so far. Four of the deaths were elderly people and possibly related to a power outage, according to officials.
"If you have a friend or family member that did not leave areas that were impacted by the rising waters and you have not heard from them, we urge you to contact your local law enforcement."
Port Arthur spokeswoman Risa Carpenter said an elderly woman's body was found inside her home on 22nd Street in Port Arthur on Saturday.
Colette Sulcer, 41, died after being pulled from rushing waters with her 3-year-old daughter clinging to her side on Tuesday in Beaumont. The girl survived. A second woman, who was found on the city's low-lying North End early Wednesday, has not been identified. Rescue teams found a body floating in the 8600 block of Overhill Lane Wednesday evening. Police did not identify the person.
On Tuesday night, Russell Barnes, 51, and Ginger Barnes, 34, both of Alvin, were killed in Jasper County when a tree fell on their truck.
10:15 a.m. Convicts return to 2 prisons near Richmond
State officials on Monday began returning 1,400 convicts to two prisons near Richmond that were evacuated a week ago because of flooding from Tropical Storm Harvey.
Jason Clark, a spokesman for the Texas Department of Criminal Justice, said transfers of prisoners to the Jester 3 and Vance units began around dawn from prisons in South Texas, after the two lockups near the Brazos River were determined to be safe.
The two prisons had been among five that were evacuated beginning two weeks ago because of record flooding southwest of Houston after Harvey began dumping up to 51 inches of rain on the area as it moved ashore.
More than 5,900 convicts were relocated in secure buses accompanied by correctional officers and other corrections staff. Clark said no decision has been made on when convicts will return to the three remaining prisons that were evacuated -- Stringfellow, Ramsey and Terrell.
9:55 a.m. Brazos River still high, but retreating
Days after cresting at a record-high water level, the Brazos River has retreated to the moderate flood stage level.
Monday measurements from the National Weather Service show the river at a height of 49.6 feet, a reading that puts it just below the major flood stage.
The river topped 55 feet for the first time ever last week, devastating towns and neighborhoods throughout Fort Bend and Brazoria counties.
The NWS expects the river's water level will fall into the minor flooding stage by Tuesday, and then rapidly decrease throughout the week.
9:34 a.m. Man with Alzheimer's reported missing since Friday
A 63-year-old man with early onset Alzheimer's disease has been missing since last week.
The family of James Simmon said he was last seen around 6 p.m. Friday in the 1600 block of Castle Court, in south Montrose.
Simmon is 5'9 and white, with brown hair and blue eyes. He was wearing a yellow plaid shirt, blue jeans and a Houston Astros hat when he was last seen.
Simmon was the Houston Chronicle's political editor from 1990 to 1994 and the editor of the Houston Press from 1994 to 1998, according to his LinkedIn page. He was last the city editor of the Bryan Eagle.
Anyone with information should call Houston Police Department's Missing Persons Unit at 832-394-1840.
9:04 a.m. After reports of scams, Houston police accompany Energy Insights employees
Uniformed first responders will now be accompanying Energy Insights employees as they work to shut off power in flooded homes around Harris County.
The move, Police Chief Art Acevedo tweeted Sunday, is in response to reports of people impersonating Energy Insights employees.
"(Legitimate employees) are not shutting off power in houses not flooded," he wrote. "Don't open doors. ... Call 911 if you spot impostors."
8:37 a.m. Fort Bend County ends curfew
Fort Bend County has ended its curfew meant to protect evacuated properties and their owners.
County Judge Bob Hebert lifted the order Sunday, and also announced that mandatory evacuation orders for districts in the county that operate levees were ended. Such orders are still in effect for some neighborhoods along the Barker Reservoir, and Hebert warned that "individual neighborhoods and homes may still pose hazards" such as "displaced animals, contaminated flood waters and unstable structures."
8:21 a.m. Houston Theater District closes underground parking
The underground parking garages in Houston's Theater District are closed until further notice due to flooding, the Houston First Corporation said Sunday.
"Houston First is assessing damages, and working with contractors to safely and efficiently pump water out of the garages," HFC said in a press release. "Although the pumping process has begun, full restoration of the garages could take weeks."
HFC said it will in the meantime make available 2,000 spaces in other parking facilities nearby. The closed garages span 18 underground blocks and have more than 3,350 parking spaces.
7:49 a.m. Chance of more rain through Wednesday
The Houston area could see more rain and thunderstorms this week, according to the National Weather Service.
The Labor Day forecast is dry and hot, with highs expected near 90 degrees during the day.
More rain could come Tuesday, according to NWS, with a 20 percent chance of rain and thunderstorms, and a high of 91 degrees. Chances for rain will climb to 40 percent Tuesday evening, and dip down to 20 percent Wednesday.
The forecast for Wednesday night through Sunday is for clear skies, with highs in the high 80s and lows around 60.
7:45 a.m. Evacuation zone lifted near Crosby chemical plant after 'controlled burn'
A day after a controlled burn destroyed the final six trailers of decomposing chemicals, authorities lifted the 1.5-mile evacuation zone around the troubled Arkema chemical plant in Crosby.
The company announced the decision early Monday morning in a press release crediting the Crosby Fire Department and unified command.
"Arkema thanks the unified command for their hard work and professionalism to ensure the safety of all during the post-Hurricane Harvey period," the statement said.
"Arkema will continue to work with its neighbors and the community to recover from the substantial impact of Hurricane Harvey."
SUNDAY, Sept. 3
10:21 p.m.: Gov. Abbott meets with Sen. Cornyn, Reps. McCarthy and McCaul to discuss Harvey
Elected officials committed themselves to help victims of Harvey at a meeting on Sunday and discussed Congress' response to the storm, Gov. Greg Abbott's office said.
Attendees committed to act "swiftly" to pass a funding measure that would help Texans, a statement read.
Attendees included U.S. Reps. Kevin McCarthy and Michael McCaul.
On Monday, Cornyn planned to join House Majority Leader Kevin McCarthy and U.S. Reps. Randy Weber and Brian Babin in Beaumont for a briefing from emergency management officials on conditions in that inundated city.
9:17 p.m.: Houston trash pickup requires moving cars
City of Houston officials urged residents to move vehicles, trailers and debris from the roadways as they prepare to pick up trash on Monday.
Crews will begin work at 7 a.m. Monday in Fosters Mill. A separate convoy will start at Kings Point at 7 a.m. Monday. Additional trucks will move to Kings Point after completing work at Barrington.
Any trash blocked by a vehicle will not be picked up, city officials said.
9:08 p.m.: City of contrasts: Some Houston residents in crisis, others find normal
In west Houston late Sunday morning, first responders went door-to-door to ensure people had evacuated homes still flooded more than a week after Hurricane Harvey roared through Texas. At that moment, in a dry neighborhood across town, a few dozen residents were enjoying brunch at Pax Americana: Brisket hash, honey-butter chicken, cold mimosas.
More than a week after the worst disaster in state history, Houston officials and residents began confronting a city of contrasts: Between those spared by Harvey, and those still in crisis. Between aiding people in need, and returning to business as usual. Between starting the rebuilding process, and pausing to reconsider the cost of unmitigated development.
For our full story, click here.
8:40 p.m.: Missing volunteer pulled from Cypress Creek
Nearly four days after Harvey's record flooding slammed a rescue boat into an Interstate 45 frontage road bridge, family members of the final missing volunteer pulled his body from Cypress Creek in Spring.
Alonso Guillen, a 31-year-old disc jockey from Lufkin, disappeared on Wednesday around midnight along with two friends after their boat hit the bridge over the creek and capsized. One of them was rescued after clinging to a tree in the rushing water, but days later, after the rains let up and the creek level receded, Guillen and Tomas Carreon Jr. were still missing.
For the full story, click here.
8:21 p.m.: Mayor Turner: Houston is open for business
A day after issuing a mandatory evacuation order for 300 people in flooded parts of west Houston — one of several areas that are likely to remain inundated for weeks longer — Mayor Sylvester Turner went on national Sunday talk shows with a bullish message for those thinking about visiting his beleaguered city.
"The airport system is up and running. The transit system is up and running. We've started picking up heavy debris," Turner said on CBS' "Face the Nation." "Let me be very, very clear," he added. "The city of Houston is open for business."
8:17 p.m.: Numbers in George R. Brown, NRG Center dwindling
About 1,000 people are living in the George R. Brown Convention Center, and about 2,700 people are in the NRG Center as of Sunday, officials said.
The City of Houston said it aims to relocate the last 1,000 people by the end of the week.
So far, Harris County officials said, about 2,275 people have been relocated from the NRG Center.
There are 171 volunteers on site, those officials said.
6:37 p.m.: Fort Bend curfew, evacuation updates
The curfew in the unincorporated areas of Fort Bend County has been lifted, the county sheriff's office said at 6:32 p.m. Sunday.
That office also said that the Levee Improvement District evacuation orders have been lifted. The county's Office for Emergency Management warned that there still could be flooded roads, fallen trees, displaced animals and standing water.
Low-lying areas near the Brazos and San Bernard rivers are still under voluntary evacuation.
"The conditions have improved enough to warrant a cautious lifting of these orders for much of the county," stated Fort Bend County Judge Robert Hebert, who also lifted the county's curfew.
6:30 p.m.: The Red Cross has updated emergency support figures
• At least 32,399 people sought refuge in 226 Red Cross and partner shelters across Texas on Saturday night. The Red Cross is also assisting the Louisiana state government with an emergency shelter which hosted nearly 1,700 people.
• More than 2,700 Red Cross disaster workers are on the ground, and more than 660 are on the way.
• Shelter supplies to support more than 85,000 people are on the ground.
• Along with its partners, the Red Cross has served more than a half million (515,000) meals and snacks since the storm began.
5:39 p.m.: Mexican government says Harvey help to arrive by Tuesday
Amb. Carlos Sada, Mexico's undersecretary of Foreign Affairs for North America, said Texas officials on Saturday night cleared the way for Mexican relief teams to begin arriving by Tuesday.
"We are ready to jump in and help as soon as possible," Sada said.
The Mexican government will send high-clearance trucks, all-terrain vehicles, cargo aircraft, boats, communications equipment, large generators, mobile community kitchens and a mobile water treatment plant.
For the full story, click here.
5:27 p.m.: First Disaster Recovery Center opens in Houston
The Federal Emergency Management Agency on Sunday opened its first Disaster Recovery Center in Houston. The center is on the north end of the George R. Brown Convention Center downtown.
The agency is working to identify locations for additional centers, where residents affected by Tropical Storm Harvey can apply for aid, ask questions or resolve problems, said agency spokesman Peter Herrick Jr.
The center at the George R. Brown is open from 7 a.m. to 7 p.m. daily. Another center opened Sunday at 1303 W. Gayle St. in the town of Edna, near Victoria southwest of Houston. It's open daily from 7 a.m. to 6 p.m., Herrick said.
5:02 p.m.: Shippin' down from Boston
Nine truckloads of food, formula, toiletries and blankets from the City of Boston will arrive at 9 p.m. at the Houston Food Bank.
Harris County Judge Ed Emmett said he would greet the trucks, which were sent with the backing of Boston Mayor Marty Walsh.
Boston collected goods from Tuesday through Friday, the city said in a release.
4:13 p.m.: Last six Arkema chemical containers ignited
Arkema Inc. is igniting the remaining six containers of chemicals at its Crosby plant, a spokeswoman for the Harris County Fire Marshal's office said Sunday afternoon.
She declined to comment on how the company is setting off these six containers.
"They've started the operation," spokeswoman Rachel Moreno said.
Crosby residents should expect to see visible smoke around the area, she said. A 1.5-mile radius surrounding the plant has been evacuated.
The fire marshal's office called Arkema's ignition of the containers "a proactive approach to minimize the impacts to the community."
Company officials, who made the decision to set off the last six containers, said they believe the chemicals in the trailers have been decomposing. Without the trailers catching flames, however, they would not be able to know if the chemicals are totally neutralized, spokesman Jeff Carr said.
For the full story, click here.
4:04 p.m.: Most Metro HOT lanes open Tuesday
Most HOT lanes managed by Metropolitan Transit Authority will reopen on Tuesday, officials said Sunday.
The one exception will be along Interstate 45, where officials are still evaluating the safety of the lanes.
All park and ride service will also resume after the Labor Day holiday on Monday. Along I-45, service will be detoured, but operational.
As of Sunday, Metro said 70 bus routes are in service, along with most rail service. The Red Line is running along all stops, while Green and Purple line trains are not running through the central business district.
3:26 p.m.: Navajo Nation to distribute supplies
Navajo Nation President Russell Begaye and Vice President Jonathan Nez picked up toiletries, hygiene items, non-perishable food and school supplies at the First Indian Baptist Church of Houston.
They plan to distribute these goods to Navajo families in need. Right now, they're on the way to a Meyerland home, said Mihio Manus, a spokesman.
12:44 p.m.: So much water, it skipped a watershed
Storms and flooding from Harvey became so severe, water on the Colorado River jumped into the neighboring San Bernard watershed, according to the National Weather Service.
As they analyze flows from the rivers and watersheds, officials said Sunday it appeared levels in the Colorado plateaued, as San Bernard levels remained higher than officials predicted.
The conclusion of researchers was that Colorado water reached the watershed's natural peak and spilled into the San Bernard.
11:36 a.m.: Coast Guard says most ports are open
Most channels of the ports, including the Galveston Bay Entrance, are now open both day and night, the Coast Guard confirmed Sunday. The Houston Ship Channel is open from the entrance channel to Baytown Highlands, but only for vessels with a maximum 40-foot draft. The Houston Ship Channel above the Baytown Highlands is open to towing vessel transits.
At the Port of Freeport, vessels with under a 38-foot draft can arrive during daylight. The Galveston Harbor and Texas City are also open, but only for vessels with a maximum 37-foot draft.
Bolivar Roads Anchorages A, B, and C are open at the pilot's discretion. The Coast Guard warns that the anchorages are limited to short-duration use for bunkering and inspections.
11:16 a.m.: FEMA updates relief figures, 37,000 still in shelters
More than 37,000 people spent the night in Texas shelters as the state slowly digs out from the fierce storms that flooded neighborhoods and sent thousands scrambling for higher ground.
In a Sunday morning update, FEMA officials said President Donald Trump’s approval of disaster assistance authorized the federal government to pick up 90 percent of the cost of debris removal, something officials anticipate will strain area landfills and heavy haul trucking companies.
In the meantime, the Red Cross and others are caring for evacuees along the Texas coast at more than 270 shelters. FEMA has given Texas over the past days 4.7 million meals, 4.3 million liters of water, 13,900 blankets and 13,400 cots.
10:59 a.m.: Dam releases continuing, with no further home flooding envisioned
Controlled releases continue from Addicks and Barker reservoirs, as city officials tamp down concerns that more homes could flood as waters recede.
A combined 13,300 cubic feet of water is flowing from the two reservoirs each second, according to the Harris County Flood Control District. The U.S. Army Corps of Engineers is managing the reservoirs.
As a result of the flows, flood control officials said the pools behind Addicks and Barker are shrinking, though it will be up to two weeks before homes impacted are out of the water.
In the meantime, officials do not expect more homes to flood downstream along Buffalo Bayou. City officials called for mandatory evacuations on Saturday, which led some to worry more flooding was imminent.
Houston District G Councilman Greg Travis sent notes Sunday trying to tamp down the concerns.
“Let me make this simple: Do you have standing water in your house? If NO, then your power will NOT be cut and you do NOT have to evacuate this morning,” Travis’ office wrote in a message to residents.
Mayor Sylvester Turner said the evacuations were necessary to ensure safety of residents and first responders. Electricity to the area was being shut off to protect everyone from downed lines.
10:35 a.m.: First-responders going door-to-door in West Houston mandatory evacuation area
CenterPoint and first responders are walking the flooded areas marked for mandatory evacuation in west Houston to determine where power needs to be shut off.
The area in question is from State Highway 6 to South Gessner, and from Interstate 10 south to Briar Forest. Residents in that area who have water in their homes are being asked to evacuate as soon as possible.
All CenterPoint employees will be accompanied by a law enforcement officer or someone from the fire department.
8:42 a.m.: Officials optimistic and cautious about Harvey rebuild
As Houston dries out, Mayor Sylvester Turner said in a trio of national television appearances he doesn’t want its businesses to dry up.
“The City of Houston is open for business,” Turner told CBS’ Face the Nation host Margaret Brennan.
Municipal workers report back on Tuesday, after the Labor Day holiday, and Turner said the long rebuilding process will continue as Houston roars back to life.
“I am expecting employers to open and employees to get back to work,” he said.
Turner, however, did not diminish the gravity of what lies ahead. Debris piles the size of railcars must be hauled away from some inundated communities. He told Meet the Press host Chuck Todd that simply cleaning up could take 10 days, while rebuilding is poised to last much longer.
"What we need is rapid repair housing, so people can stay in their homes while they make the bigger repairs,” Turner told Todd.
The rebuild, however, might not be a return to Houston exactly as it has been, according to Gov. Greg Abbott.
“It would be insane for us to rebuild on property that has been flooded multiple times,” Abbott told ABC News’ Martha Raddatz. “I think everybody probably is in agreement that there are better strategies that we must employ.”
Meanwhile, Abbott, Turner and Texas Sen. Ted Cruz all applauded the way the community pulled together in crisis as they made the television rounds.
“People are hurting,” Cruz told Raddatz. “But in the face of that disaster we have seen incredible bravery.”
7:56 a.m.: Four rescued from fierce Neches River waters
Four boaters stranded without fuel upstream from Beaumont on the Neches River were plucked from the waters by a U.S. Coast Guard recovery team on Saturday night, Jefferson County officials said.
The boaters called emergency dispatchers around 10:45 p.m., saying they believed they were about five or six miles upstream from Beaumont’s Riverfront Park, tied off to a tree. A Coast Guard helicopter spotted the four, and sent a swift water rescue team to their location to pick them up.
“The Neches River is still a danger for boaters,” the Jefferson County Sheriff’s Office said in a release. “The current is swift and turbulent. That coupled with floating debris makes it very hazardous.”
SATURDAY, Sept. 2
11:50 p.m. Mail delivery resumes in some neighborhoods
The U.S. Postal Service is trying to get back to normal delivery in Houston. But first, it'll deliver some mail on Sunday.
The Oak Forest post office, at 2499 Judiway, will resume delivery to Oak Forest residents on Sunday after a several-day interruption. At first, door-to-door service will include just the most urgent mail: checks and medications.
In other neighborhoods, the post office still isn't able to deliver. But about three dozen post offices are now open 10 a.m.-6 p.m. daily (including weekends) so residents can pick up U.S. Treasury checks and "identifiable medications." Customers must present proper ID to receive items. Get the full list of pickup stations here.
10:20 p.m. Fort Bend County lifts mandatory evacuation for some areas in Barker Reservoir
Some residents can reenter their homes near the Barker Reservoir in Fort Bend County, while others are still under a mandatory evacuation order.
For a list of which areas no longer are under mandatory evacuation, click here.
County officials caution that even though the evacuation order is lifted, returning may not be safe.
"Many neighborhoods within the Barker Reservoir area may still have hazards present such as flooded roads, fallen trees, displaced animals, and standing water," the county said in a statement Saturday evening. "Residents should use extreme caution when returning to their homes."
9:20 p.m. Harris County to help residents test well water
Harris County plans to help residents test private water wells for contamination in Tropical Storm Harvey's aftermath.
Floodwaters harbor bacteria, fungi, viruses and other contaminants that could have infiltrated private water wells. Water should be tested and chlorinated before drinking, county officials say.
Starting Tuesday, residents can pick up bottles to collect samples and have them tested at 11 locations around the county.
A list of locations can be found here.
8:30 p.m. Harvey victim found dead floating in Cypress Creek
Tomas Carreon-Esquivel, 25, became the 29th flood victim confirmed by the Harris County medical examiner.
Carreon-Esquivel's body was found floating in Cypress Creek, according to the medical examiner.
Officials estimate Carreon-Esquivel died on Friday at around 1:15 p.m. His body was found near 22411 Greenbrook Drive.
More than 50 people - including a veteran Houston police officer - have died or are feared dead in the Houston area and beyond in flooding or circumstances connected to Tropical Storm Harvey, according to local officials.
Our staff has the full story here.
7:30 p.m. Harris County flood control officials: no more uncontrolled releases from Addicks, Barker
Water is no longer spilling out of the north side of Addicks reservoir, Harris County Flood Control District officials said Saturday.
That means flooding near Eldridge Parkway and Tanner Road from the uncontrolled releases is decreasing.
Meanwhile, the U.S. Army Corps of Engineers is still releasing about 13,300 cubic feet per second of water from Addicks and Barker reservoirs. This has been largely the same for days.
Water from Tropical Storm Harvey pooling in the reservoirs has caused widespread flooding upstream, while releases contributed to flooding of thousands of homes along Buffalo Bayou.
Houston Mayor Sylvester Turner called for a mandatory evacuation of west Houston homes that are inundated.
7:15 p.m. Harris County ramps up debris collection
People who live in unincorporated Harris County and need help with debris removal or repairs to their homes can call a hotline with questions: (713) 274-3880.
The hotline can help people answer questions about debris removal, permits needed to repair or rebuild homes or other buildings, and other general questions about basic needs.
"The major goal of the Harris County Residential Debris and Damage Assessment Hotline is to ensure that public roads and other infrastructure do not pose an immediate threat to public safety," the county stated in a news release Saturday. "Harris County Residential Debris and Damage Assessment teams are currently working to conduct safety and damage assessments, while clearing debris from public roads in areas where flood waters have receded."
6:15 p.m. FEMA registration applications top half a million
The Federal Emergency Management Agency has received more than 507,000 applications for aid in the aftermath of Hurricane Harvey.
FEMA has approved $114.7 million in aid to 161,000 people so far. About $33.6 million is for assistance with housing, such as paying displaced victims' rent, and $81 million will help victims replace personal property, pay for transportation and cover medical and dental costs.
5:40 p.m. Mayor Sylvester Turner orders mandatory evacuation of West Houston homes flooded by Buffalo Bayou and dam releases
Mayor Sylvester Turner ordered a mandatory evacuation of an area of West Houston which has been inundated by high waters on Buffalo Bayou.
The order affects areas of Houston south of I-10, north of Briar Forest, east of Addicks and Barker reservoirs and west of Gessner.
About 300 people are believed to be in the area, which includes approximately 4,000 homes.
That part of Houston will likely be underwater for weeks as the U.S. Army Corps of Engineers releases water from the Addicks and Barker reservoirs to empty them out in case they need to hold back water from future rains.
Our Ileana Najarro has the full story here.
5:20 p.m. Kingwood High School closed for foreseeable future, students to attend Summer Creek High
The Humble Independent School District said that students attending Kingwood High School will have to attend Summer Creek High School while the district attempts to restore Kingwood High from Hurricane Harvey damage.
"This will require a modified schedule," said spokesperson Jamie Mount. "Humble ISD has asked for families and staff to share input through a survey on scheduling options."
Mount said Kingwood High could be closed for the entire 2017-18 school year -- though it could open sooner if restoration work is completed.
"Under normal circumstances, we would never ask two large high schools to coexist under one roof," Mount said. "Unfortunately, Hurricane Harvey took away normal."
4:15 p.m. Harris County homeowners can report damage to decrease home appraisals
The Harris County Appraisal District is encouraging homeowners to report damage from Hurricane Harvey so residents can lower their property tax bills.
The district said homeowners can report damage through the "HCAD app," which can be downloaded on iPhone, iPad or Android phones.
Residents can also report damage by calling (713) 821-5805, or emailing help@hcad.org. HCAD asks residents to provide name, address, phone number, account number and inches or feet of water that flooded victims' homes.
"The appraisal district can use this information to identify the most damaged neighborhoods and properties to help the homeowner next year when the property is reappraised January 1 by possibly reducing the value because of existing damage or ongoing repair work," said Roland Altinger, chief appraiser at HCAD.
4:00 p.m. City of Houston warns victims of insurance fraud
Houston officials said scammers are robo-calling flood victims and telling them they may not get covered for Hurricane Harvey damage unless they pay "past due" premiums.
City officials say this is a scam, and that real warnings come between 30 and 90 days before an insurance company rescinds coverage.
"Insurance companies and agents selling flood insurance policies do NOT use this process to communicate with customers about their flood insurance policies," officials said in a release.
The officials said victims receiving these robo-calls should call their insurance companies or the National Flood Insurance Program at 1-800-638-6620.
3:50 p.m. PetWell Partners to offer free vet services after Harvey
When PetWell Partners reopened its Bellaire clinic just a few days after Tropical Storm Harvey swept the region, David Strauss realized the toll the storm had taken on hundreds of desperate pet owners.
The co-founder of the company met a couple whose dog had been having seizures for three days. At one point, they brought it to a fire station to get oxygen, Strauss said.
"It's crazy," he said. "There are so many good people trying to help."
PetWell Partners, which reopened eight of its animal clinics throughout the Houston area, will offer free services and supplies for pets sickened or injured during the storm. Starting Monday, vets will provide basic screening and treatments at certain locations.
Our Katherine Blunt has the full story here.
3:20 p.m. James Harden of the Houston Rockets announces $1 million donation for relief
Chants of "Houston! Houston! Houston!" erupted in Hall E of the George R. Brown Convention Center Saturday when James Harden of the Houston Rockets strode in with Mayor Sylvester Turner at his side. The basketball star announced he would donate $1 million to Harvey relief.
"I am thankful for this guy right here," Turner said.
As Harden walked through the center-turned-shelter, fans ran up to him, taking selfies, getting autographs and receiving fist bumps.
"It's the first time I've seen you in person James, but I love you," a woman shouted as the star signed her pocket notebook.
3:15 p.m. Six-month-old infant swept away in Harvey floodwaters
Authorities in Walker County confirmed Saturday that a six-month-old baby was missing and had been swept away in gushing floodwaters on August 27.
Firefighters had been working to rescue two men trapped in their pickup in the swollen waters of Winters Bayou on Highway 150 near New Waverly and Coldspring when they heard screams nearby.
First responders found a couple up a tree, seeking refuge from the water. They had been fleeing Houston flooding, heading for Louisiana, said Jimmy Williams, with the New Waverly Fire Department.
They became trapped in high water on Highway 150, and had to flee their pickup.
"The current was so fast, it ripped the baby out of their arms," Williams said. "So the baby was lost."
Our St. John Barned-Smith has the full story here.
3:00 p.m. President Trump leaves Houston
President Donald Trump left Houston after meeting residents sheltered at NRG Center.
He arrived Saturday morning to meet with members of the Texas Delegation at Ellington Field Joint Reserve Base. He is expected to meet with members of the Louisiana Delegation and emergency response teams in Louisiana this afternoon.
1:15 p.m. Oil and gas workers reboard offshore rigs in Gulf of Mexico
Oil and gas workers are reboarding offshore platforms and rigs in the Gulf of Mexico to assess damages after Tropical Storm Harvey.
On Saturday, personnel had returned to all five rigs evacuated during the storm. About 6 percent of 737 stationary production platforms remained evacuated.
The Bureau of Safety and Environmental Enforcement is inspecting offshore facilities and monitoring efforts to restart production operations. It has not yet received any damage reports.
1:00 p.m. Church holds benefit for Saldivar family
By the time the flood waters receded and the white van reappeared, Virginia Saldivar was already expecting the worst.
But Saldivar, the grandmother of the four children who drowned in Greens Bayou and daughter-in-law of the elderly couple in the van, will cherish her memories of the children, whose short lives drew to a tragic end.
Belia and Manuel Saldivar, ages 81 and 84, were found in the front seats. The bodies of their four great-grandchildren -- ages 16, 14, 8 and 6 -- were found in the rear of the van.
"They were our life," Saldivar said. "That's what we're remembering — how wonderful they were."
A benefit was held Saturday at the Iglesia Cristiana Principe de Paz for the family of the six victims who were trapped in flood waters on Greens River Road.
The community room of the church filled with friends, extended family and members of the community coming together to show support for a family devastated by the flood that's taken over 50 lives.
"We want everyone to know that we're very thankful for all the love and support," Saldivar said. "It's really and truly been overwhelming."
12:50 p.m. Humble ISD announces back-to-school date
Humble ISD will reopen for the first day of school Sept. 7.
Kingwood High School and Summer Creek High School will reopen Sept. 11 because of the damage they sustained after the storm.
Staff members are required to report to work Sept. 5.
12:35 p.m. Last cruise stuck at sea due to Harvey makes port
For the more than 3,000 passengers aboard the Carnival Breeze, an extra week at sea due to Harvey's wrath ended on a steamy Saturday morning in Galveston.
The Breeze was the last of four ships to dock in the Port of Galveston's cruise terminal after being turned away last week as Harvey strengthened from a tropical storm to a hurricane. The cruise was originally supposed to end Sunday, and was given clearance to dock early Saturday.
Our Marialuisa Rincon has the full story here.
12:20 p.m. HISD offers more details on damages and displacement
Houston ISD officials said they've surveyed 245 schools as of early Saturday, with plans to reach the roughly 35 remaining campuses. About 200 schools had standing water, with 53 suffering "major" damage and 22 receiving "extensive" damage, which is more severe, Chief Operating Officer Brian Busby said.
Some schools may never be habitable again, Superintendent Richard Carranza said, but it's too early to make that judgment.
Carranza said a decision about relocating students to other campuses will be made no earlier than Tuesday. He's exploring the possibility of "double shifts" at some campuses, with students from one school attending classes in the morning to early afternoon, and students from another school coming into the same building for classes from early afternoon to evening.
Our Jacob Carpenter has the full story here.
12:15 p.m. President Trump arrives at NRG Center
President Donald Trump and Texas Governor Greg Abbott arrived at NRG Center Saturday to visit with those affected by the storm.
Read more here.
12:10 p.m. Cy-Fair ISD delays start of school to Sept. 11
Cypress-Fairbanks ISD has delayed the start of school from Sept. 6 to Sept. 11 because several campuses face sewage issues, Superintendent Mark Henry said in an announcement.
"This is a dynamic situation," he said. "It is difficult to predict other issues we may face in the coming weeks."
Staff will report to work Sept. 7.
The district has extended its free meal program at Holbrook and Owens elementary schools and Cypress Lakes High School through Sept. 10.
12:05 p.m. Brazoria County issues mandatory evacuations
Brazoria County has issued a mandatory evacuation order for areas where that action had been voluntary. The order includes most areas west of State Highway 288.
Noon: Parents arrive at Carrillo Elementary School to feed their children
Deimin Ramirez, 28, tapped the wooden cafeteria table.
"Eat the pear too," she ordered her 7 and 5-year-old.
Ramirez was the first mother at Carrillo Elementary School Saturday morning to receive a free hot lunch for herself and her three daughters as part of a new HISD effort, launched Saturday, to aid Harvey victims.
The brisket sandwiches, mashed potatoes and diced pears were a welcome reprieve from the meager beans and rice the Ramirez family had subsisted on for days.
The remaining food stamps in her possession were barely enough for Ramirez to shop at the nearby Fiesta Mart, which itself had been running low on supplies for the storm's duration.
"They were hungry," Ramirez said of her daughters, as she uncapped a water bottle for her 2-year-old.
Water seeped into their one-bedroom apartment a few blocks away from the school, but the only damage sustained was a soaked living room carpet that has already been torn out.
Structural damage to their house during Hurricane Ike forced the family to move into an apartment with a more manageable rent. Last year they moved again, this time into the apartments surrounding Fiesta and the school.
School officials put out calls to neighboring families in both English and Spanish announcing the free breakfast, lunch, and dinner available this weekend. Concerned some may have lost power and therefore missed the phone call, staff members printed out flyers on Saturday to deliver door-to-door.
As her girls wiped gravy off their mouths with the back of their hands, Ramirez called up a couple of her neighbors letting them know the school had food.
She packed up the leftovers--mostly pears--as she headed out to search for any milk at Fiesta.
"They're used to milk in the morning," Ramirez said. "We didn't have any since Saturday."
11:45 a.m. H-E-B resumes normal business hours at many Houston stores
Fifty-one H-E-B stores resumed normal hours Saturday as the company worked to restock its chain with essentials.
About 30 stores, including the Gulfgate, Bay City, Grand Parkway, Dairy Ashford, Friendswood and Lake Jackson locations, will keep modified hours through Sunday. All will close at 6 p.m. or later. Some will stay open until 10 or 11 p.m.
Four locations remained closed: Braeswood/Chimney Rock, Kingwood, Orange and Joe V's Smart Shop in Wallisville.
11:20 a.m. President Trump touches down at Ellington Field
President Donald Trump and First Lady Melania Trump touched down in Houston at Ellington Field with plans to meet residents affected by the storm and visit a relief shelter.
11:00 a.m. City of Houston asks some residents to refrain from normal water use
The West District and Turkey Creek wastewater treatment plants have flooded, and the City of Houston has asked residents in those areas to refrain from flushing and using extra water to clean, shower or bathe until further notice. It's working to make repairs.
The following zip codes are affected: 77024, 77041, 77043, 77055, 77077, 77079, 77080 and 77094.
10:50 a.m. Houston ISD outlines widespread damage and student displacement
Houston ISD Superintendent Richard Carranza said at a Saturday press conference there is a small chance the district will delay the start of school beyond Sept. 11, but it's working to resume operations by then.
He said as many as 12,000 students will need to be temporarily relocated from damaged schools, some of which won't open for months. The district does not yet have a list of those schools.
10:05 a.m. Coast Guard continues rescue efforts in Port Arthur
The Coast Guard dispatched more than 200 personnel on Saturday to aid the Port Arthur area.
It deployed 27 shallow-draft vessels, which are capable of operating in flooded urban areas. Responders there have rescued more than 490 people and 155 pets in the past 24 hours.
9:45 a.m. Houston activists organize Trump protest
The Houston Socialist Movement and other organizations are planning to protest President Donald Trump when he visits a local relief center at noon. The location has not been disclosed.
In a statement, the activists said they plan to "send a powerful message of opposition to the President and the white supremacists and misogynists who support him."
9:30 a.m. Cy-Fair ISD offers free meals before reopening
Cypress-Fairbanks ISD is offering free meals for children and accompanying adults Sept. 2-4 at Owens and Holbrook elementary schools and Cypress Lakes High School. Food will be served from 8 a.m. to 2 p.m.
The school district will reopen Sept. 6. The USDA has waived all free and reduced meal eligibility requirements through the end of the month.
9:00 a.m. Houston firefighters respond to house fire near Memorial
Houston firefighters are battling a one-alarm fire at a house on Whitewing Lane south of Memorial Drive. Flames have breached the roof in multiple places.
The department has also responded to a one-alarm fire at a house on Frey Road near Edgebrook.
8:15 a.m. Meteorologists expect almost no rain Labor Day weekend
Space City Weather's latest forecast anticipates warm, sunny weather with almost no chance of rain in the Houston area this weekend. Early next week, a cool front might increase the chance of rain, but it's not expected to accumulate.
The forecast anticipates that Hurricane Irma, now brewing in the Atlantic Ocean, will turn north before moving into the Gulf of Mexico later next week.
7:40 a.m. President Trump to meet with Harvey victims in Houston
President Donald Trump and First Lady Melania Trump will arrive in Texas Saturday morning to meet individuals affected by the storm, visit a relief center and meet with members of the Texas Delegation at Ellington Field Joint Reserve Base. They will also meet with members of the Louisiana Delegation and emergency response teams in Louisiana this afternoon. They visited Corpus Christi and Austin earlier this week.
6:35 a.m. Trump announces Sunday as National Day of Prayer for Harvey
Trump announced Sunday would be a National Day of Prayer for Hurricane Harvey victims, national responders and recovery efforts, according to a press release from the White House.
"I urge Americans of all faiths and religious traditions and backgrounds to offer prayers today for all those harmed by Hurricane Harvey, including people who have lost family members or been injured, those who have lost homes or other property, and our first responders, law enforcement officers, military personnel, and medical professionals leading the response and recovery efforts," the statement read.
6:15 a.m. Benefit to be held for family of 6 who died during Harvey
A benefit will be held in north Houston for the family of six found dead in a van that had been swept into Greens Bayou. | Arkema CEO Rich Rowe said Wednesday night that an explosion at his company's flooded Houston-area chemical plant was inevitable—and he was proved right within hours. The Harris County Emergency Operations Center said early Thursday that there had been two explosions and black smoke was coming from the Crosby plant, the Houston Chronicle reports. Police say a deputy was hospitalized after inhaling fumes and nine others drove themselves to the hospital as a precaution. Rowe warned earlier that the flooded plant had lost power and backup generators had failed, meaning that chemicals that become volatile above certain temperatures were not receiving the necessary cooling. Elsewhere in the region, the city of Beaumont, population 120,000, says its main pump station has been damaged, leaving residents with no water supply, the AP reports. Authorities say determining the extent of the damage, let alone making repairs, will have to wait until floodwaters recede. The death toll from Hurricane Harvey and its aftermath now stands at 37, and the number is expected to rise when flooded areas can be fully accessed, CNN reports. Flooding is expected to continue for days, but the National Hurricane Center says it is no longer tracking Harvey, which is now a tropical depression expected to bring heavy rains to Mississippi and Tennessee before reaching Kentucky by Friday night.
Op-ed: My Farewell to Exodus International
The 'ex-gay' movement reached into the living room of Jaime Bayo when he was a young gay teen coming to understand his sexuality. And he's glad to see it go.
Wednesday night I heard that Exodus International — the world’s largest “ex-gay” ministry — announced during its annual conference that it will close. Shortly after, the organization’s president issued a public apology to everyone who has ever been a victim of “change therapy” meant to alter sexual orientation.
In an instant, I found myself back in my grandmother's house — 12 years old, kneeling with her on the floor of her living room, praying the rosary. In between the Hail Marys and the sorrowful mysteries, I inserted my own private prayer for God to take away my homosexuality. We would do this together almost every night, and each night I would ask God to make me straight.
Everything I had learned at church — a cornerstone of my Hispanic Catholic family — said that being gay is a sin, that it is somehow worse than other sins, but that if you are really committed to God you could overcome this great sin. Since 1976, Exodus International has spread that message to thousands upon thousands of young people and adults, telling them that in order to be faithful they have to denounce their sexuality. That homosexuality is a spiritual disease. That if you pray enough, starve yourself enough, endure enough physical and emotional pain, you can be saved.
As a young person, I often imagined what it would be like to try a “change therapy” program. I would hear horror stories of torture and lies, and I would be scared. But when I stared into the mesh screen in the confessional, as the pastor asked, “Is there anything else?” I would have to fight to keep the words “I’m gay” inside. I wanted to be accepted by God, by my family, by my grandmother who knelt beside me each night to pray.
In 1973, just three years before the birth of Exodus and only 40 years ago, homosexuality was removed from the Diagnostic and Statistical Manual of Mental Disorders as a mental illness. The medical community made a very public statement that being gay or lesbian was not a disease. Now Exodus International has denounced "change therapy" and apologized for the "pain and the hurt" it caused victims throughout the years. Exodus joins thousands of congregations and faith-based organizations that have come to the realization that being gay is not a religious disease.
In just a few days we could receive a decision from the U.S. Supreme Court on same-sex marriage. Although the decision is unknown, there is an undeniable growing shift in support for LGBT equal rights. Citizens across this great country are stepping forward and affirming publicly that being gay is not a social disease.
I feel ever more confident that we are winning and that there will be a day in my lifetime when my doctor, my pastor, and my elected officials all agree that I am healthy.
I’m not evil, and I am worthy of the same rights as my grandmother. My grandmother last year came to a similar realization, telling me that even though I was a “different fellow,” she just wanted me to be happy and at peace.
Nothing brings me closer to my faith than taking a stand against the social injustices of our generation — a calling mentioned numerous times in the Bible. It makes me proud to work for an organization that continues to lead the fight for LGBT equality through grassroots action focused on civil rights, parental rights, and relationship recognition. My life is overflowing with support and love from family, friends, and colleagues. Each and every day I am reminded that I am among the lucky few who are able to live my life openly and without fear.
You can rest assured that right now there is a young boy praying silently to God to take away his homosexuality. There is a girl at Bible camp who feels dirty and sinful when she changes in front of her bunkmates. There is a young person being sent away to an organization much like Exodus International. It's happening because the leaders in our faith communities have given in to fear and hate above love and compassion. The antigay ideas taught at the pulpit then propagate through our communities and are ingrained in the fabric of families. But today, there is one less hatefully misguided organization leading the charge against perfectly normal people who just so happen to be gay.
As Exodus International closes its doors, I join a chorus of voices sharing the same message: It is time for our religious and political leaders to embrace and respect our community. Exodus International will soon become only a sad part of our collective history. Thank God.
JAIME BAYO is the director of development at SAVE Dade, a nonprofit organization dedicated to promoting, protecting, and defending the rights of LGBT citizens in Miami-Dade. He can be reached at [email protected]
(Editor's Note: This piece was updated to reflect that 1973 was 40 years ago, not 30). |||||
[Image: "Proclaiming Freedom From Homosexuality" (Google Images, caption by Rev Dan)]
Sometimes a blogger runs into a bit of, shall we say, ironic serendipity regarding the news everyone expects him to cover. Yesterday morning I was playing the song that catapulted Connie Francis to pop stardom, "Who's Sorry Now," when I came across the news that Exodus International, the world's largest "ex-gay" and pray-away-the-gay ministry was closing its doors.
[Image: "The Apology?" (Google Images, caption by Rev Dan)]
It's good that Exodus is apologizing, because they've caused a lot of damage to gay people, said Pastor Greg Bullard of Covenant of the Cross church in Madison, Tenn.
"I understand they're doing the apology tour, but they're still not saying it's OK," Bullard said. "They've borne false witness. They've told people they could change things, things they could never change because it's imprinted on them as something they are.
"Their theology is still warped," he said.
In response to Exodus's movements of late, hardcore fundamentalist "ex-gay" figures have moved to create their own new group, the Restored Hope Network, which is chaired by Anne Paulk, the estranged wife of John Paulk, the former poster-boy for the "ex-gay" movement who now admits that he is an openly gay man. Restored Hope's co-founder, Andrew Comiskey, has claimed that "Satan delights in homosexual perversion," which shows that this new group is committed to doing as much as or more damage than Exodus ever did.
[Image: "Two disgruntled Christian Right organizations" (Google Images, caption by Rev Dan)]
Never in a million years would I intentionally hurt another person. Yet, here I sit having hurt so many by failing to acknowledge the pain some affiliated with Exodus International caused, and by failing to share the whole truth about my own story. My good intentions matter very little and fail to diminish the pain and hurt others have experienced on my watch. The good that we have done at Exodus is overshadowed by all of this.
And President Alan Chambers issued an apology to the LGBT community.

The closing of Exodus International, at a time when "reparative therapy" is being banned for minors in some states, at a time when the Supreme Court is about to issue two important rulings concerning gay marriage, at a time when the Christian Right is screaming that homosexuality is a "choice," is crucial in today's bloodiest culture war.

An apology after 41 years of literal torture, after many suicides, after thousands of people having been displaced and disowned (especially during the age of AIDS), after countless harangues from the pulpit about "change is possible through the grace of Jesus Christ" (the logo states "Proclaiming Freedom From Homosexuality Since 1976") - is an apology even meaningful?

Although Exodus claims that it will reverse course in its new ministry - helping gays to assimilate themselves into an accepting Christian community - some people are very wary about the turnabout. Yes, Chambers still says it's a sin (albeit no worse than, say, swearing), but it's more innate - kind of like Original Sin - and unchangeable.

And while Wayne Besen of Truth Wins Out applauded Chambers' statement, he was quick to report that another ministry has been waiting in the wings to fill the gap in the ex-gay movement. The mea culpas of Exodus International may not be able to slow down the onslaught of such groups as the Family Research Council and NARTH (the National Association for Research and Therapy of Homosexuality, currently fighting in California courts concerning the state's ban on reparative therapy for minors). They certainly think that they have experienced treason from a group that is not "truly Christian."

Just as tears from a Pope at Jerusalem's Wailing Wall did not exonerate 1,500 years of Christians persecuting Jews, one tearful message of a (now) ex ex-gay does nothing to eradicate the pain and loss caused by a movement to change the innate. Chambers, of course, realizes this. But in the exodus from Exodus, Chambers will fail to take on the powers that will continue to cause the pain: like a remorseful little puppy, he will slink penitently away to hide in a new "ministry." |||||
Three years ago, Leslie and I began a very public conversation with Our America's Lisa Ling, from the Oprah Winfrey Network (OWN), regarding some of our deeply held beliefs about Christianity and the LGBT community. Today, we have decided to carry this public conversation even further. While this conversation has been and may well continue to be met with many different responses from supporters and critics, it is our desire to keep having these honest discussions in the hopes of arriving at a place of peace.
Several months ago, this conversation led me to call Lisa Ling to take another step on this messy journey. I asked if she would, once again, help us add to the unfolding story by covering my apology to the people who have been hurt by Exodus International. Our ministry has been public and therefore any acknowledgement of wrong must also be public. I haven’t always been the leader of Exodus, but I am now and someone must finally own and acknowledge the hurt of others. I do so anxiously, but willingly.
It is strange to be someone who has both been hurt by the church’s treatment of the LGBT community, and also to be someone who must apologize for being part of the very system of ignorance that perpetuated that hurt. Today it is as if I’ve just woken up to a greater sense of how painful it is to be a sinner in the hands of an angry church.
It is also strange to be an outcast from powerful portions of both the gay community and the Christian community. Because I do not completely agree with the vocal majorities in either group and am forging a new place of peaceful service in and through both, I will likely continue to be an outsider to some degree. I imagine it to be very much like a man I recently heard speak at a conference I attended, Father Elias Chacour, the Melkite Catholic Archbishop of Israel. He is an Arab Christian, Palestinian by birth, and a citizen of Israel. Talk about a walking contradiction. When I think of the tension of my situation I am comforted by the thought of him and his.
My desire is to completely align with Christ, his Good News for all and his offer of peace amidst the storms of life. My wife Leslie and my beliefs center around grace, the finished work of Christ on the cross and his offer of eternal relationship to any and all that believe. Our beliefs do not center on “sin” because “sin” isn’t at the center of our faith. Our journey hasn’t been about denying the power of Christ to do anything – obviously he is God and can do anything.
With that, here is an expanded version of the apology I offered during my recent interview with Lisa Ling to the people within the LGBTQ community who have been hurt by the Church, Exodus International, and me. I realize some within the communities for which I apologize will say I don’t have the right, as one man, to do so on their behalf. But if the Church is a body, with many members being connected to the whole, then I believe that what one of us does right we all do right, and what one of us does wrong we all do wrong. We have done wrong, and I stand with many others who now recognize the need to offer apologies and make things right. I believe this apology – however imperfect – is what God the Father would have me do.
To Members of the LGBTQ Community:
In 1993 I caused a four-car pileup. In a hurry to get to a friend's house, I was driving when a bee started buzzing around the inside of my windshield. I hit the bee and it fell on the dashboard. A minute later it started buzzing again with a fury. Trying to swat it again, I completely missed the fact that a city bus had stopped three cars in front of me. I also missed that those three cars were stopping as well. Going 40 miles an hour, I slammed into the car in front of me, causing a chain reaction. I was injured and so were several others. I never intended for the accident to happen. I would never have knowingly hurt anyone. But I did. And it was my fault. In my rush to get to my destination, fear of being stung by a silly bee, and selfish distraction, I injured others.
I have no idea if any of the people injured in that accident have suffered long-term effects. While I did not mean to hurt them, I did. The fact that my heart wasn't malicious did not lessen their pain or their suffering. I am very sorry that I chose to be distracted that fall afternoon, and that I caused so much damage to people and property. If I could take it all back I absolutely would. But I cannot. I pray that everyone involved in the crash has been restored to health.
Recently, I have begun thinking again about how to apologize to the people that have been hurt by Exodus International through an experience or by a message. I have heard many firsthand stories from people called ex-gay survivors. Stories of people who went to Exodus affiliated ministries or ministers for help only to experience more trauma. I have heard stories of shame, sexual misconduct, and false hope. In every case that has been brought to my attention, there has been swift action resulting in the removal of these leaders and/or their organizations. But rarely was there an apology or a public acknowledgement by me.
And then there is the trauma that I have caused. There were several years that I conveniently omitted my ongoing same-sex attractions. I was afraid to share them as readily and easily as I do today. They brought me tremendous shame and I hid them in the hopes they would go away. Looking back, it seems so odd that I thought I could do something to make them stop. Today, however, I accept these feelings as parts of my life that will likely always be there. The days of feeling shame over being human in that way are long over, and I feel free simply accepting myself as my wife and family does. As my friends do. As God does.
Never in a million years would I intentionally hurt another person. Yet, here I sit having hurt so many by failing to acknowledge the pain some affiliated with Exodus International caused, and by failing to share the whole truth about my own story. My good intentions matter very little and fail to diminish the pain and hurt others have experienced on my watch. The good that we have done at Exodus is overshadowed by all of this.
Friends and critics alike have said it’s not enough to simply change our message or website. I agree. I cannot simply move on and pretend that I have always been the friend that I long to be today. I understand why I am distrusted and why Exodus is hated.
Please know that I am deeply sorry. I am sorry for the pain and hurt many of you have experienced. I am sorry that some of you spent years working through the shame and guilt you felt when your attractions didn’t change. I am sorry we promoted sexual orientation change efforts and reparative theories about sexual orientation that stigmatized parents. I am sorry that there were times I didn’t stand up to people publicly “on my side” who called you names like sodomite—or worse. I am sorry that I, knowing some of you so well, failed to share publicly that the gay and lesbian people I know were every bit as capable of being amazing parents as the straight people that I know. I am sorry that when I celebrated a person coming to Christ and surrendering their sexuality to Him that I callously celebrated the end of relationships that broke your heart. I am sorry that I have communicated that you and your families are less than me and mine.
More than anything, I am sorry that so many have interpreted this religious rejection by Christians as God’s rejection. I am profoundly sorry that many have walked away from their faith and that some have chosen to end their lives. For the rest of my life I will proclaim nothing but the whole truth of the Gospel, one of grace, mercy and open invitation to all to enter into an inseverable relationship with almighty God.
I cannot apologize for my deeply held biblical beliefs about the boundaries I see in scripture surrounding sex, but I will exercise my beliefs with great care and respect for those who do not share them. I cannot apologize for my beliefs about marriage. But I do not have any desire to fight you on your beliefs or the rights that you seek. My beliefs about these things will never again interfere with God’s command to love my neighbor as I love myself.
You have never been my enemy. I am very sorry that I have been yours. I hope the changes in my own life, as well as the ones we announce tonight regarding Exodus International, will bring resolution, and show that I am serious in both my regret and my offer of friendship. I pledge that future endeavors will be focused on peace and common good.
Moving forward, we will serve in our pluralistic culture by hosting thoughtful and safe conversations about gender and sexuality, while partnering with others to reduce fear, inspire hope, and cultivate human flourishing. | Up until this week, Exodus International was the biggest Christian ministry still trying to "cure" gay people through prayer and counseling. And then leader Alan Chambers apologized "for the pain and hurt many of you have experienced" and announced he was shutting the group down. (Read his full statement here.) It's about time, writes Jaime Bayo at the Advocate, who recalls being 12 years old and being made to feel like a sinner because of his homosexuality. It was only a few decades ago that being gay was considered a mental illness, he notes. My how things have changed, as a glance at the Supreme Court docket makes clear. "I feel ever more confident that we are winning and that there will be a day in my lifetime when my doctor, my pastor, and my elected officials all agree that I am healthy," writes Bayo. Exodus, meanwhile, will "become only a sad part of our collective history." Maybe, but the Rev. Dan Vojir at OpEdNews isn't in a forgiving mood. "Just as tears from a Pope at Jerusalem's Wailing Wall did not exonerate 1,500 years of Christians persecuting Jews, one tearful message of a (now) ex ex-gay does nothing to eradicate the pain and loss caused by a movement to change the innate." Read his full column, or Bayo's full column.
Various child nutrition programs have been established to provide nutritionally balanced, low-cost or free meals and snacks to children throughout the United States. The school lunch and school breakfast programs are among the largest of these programs. The National School Lunch Program was established in 1946; a 1998 expansion added snacks served in after-school and enrichment programs. In fiscal year 2000, more than 27 million children at over 97,000 public and nonprofit private schools and residential child care institutions received lunches through this program. The School Breakfast Program began as a pilot project in 1966 and was made permanent in 1975. The program had an average daily participation of more than 7.5 million children in about 74,000 public and private schools and residential child care institutions in fiscal year 2000.

According to program regulations, states can designate schools as severe need schools if 40 percent or more of lunches are served free or at a reduced price, and if reimbursement rates do not cover the costs of the school's breakfast program. Severe need schools were generally reimbursed 21 cents more for free and reduced-price breakfasts in school year 2000-01.

The National School Lunch and School Breakfast Programs provide federally subsidized meals for all children, with the size of the subsidy dependent on the income level of participating households. Any child at a participating school may purchase a meal through the school meals programs. However, children from households with incomes at or below 130 percent of the federal poverty level are eligible for free meals, and those from households with incomes between 130 percent and 185 percent of the poverty level are eligible for reduced-price meals. Similarly, children from households that participate in three federal programs—Food Stamps, Temporary Assistance for Needy Families, or the Food Distribution Program on Indian Reservations—are eligible to receive free or reduced-price meals.

School districts participating in the programs receive cash assistance and commodity foods from USDA for all reimbursable meals they serve. Meals are required to meet specific nutrition standards. For example, school lunches must provide one-third of the recommended dietary allowances of protein, vitamins A and C, iron, calcium, and calories. Schools have a great deal of flexibility in deciding which menu planning approach will enable them to comply with these standards.

Schools receive different cash reimbursement amounts depending on the category of meals served. For example, a free lunch receives a higher cash reimbursement amount than a reduced-price lunch, and a lunch for which a child pays full price receives the smallest reimbursement. (See table 2.) Children can be charged no more than 40 cents for reduced-price meals, but there are no restrictions on the prices that schools can charge for full-price meals.

Various agencies and entities at the federal, state, and local levels have administrative responsibilities under these programs. FNS administers the school meal programs at the federal level. In general, FNS headquarters staff carry out policy decisions, such as updating regulations, providing guidance and monitoring, and reporting program review results. Regional staff interact with state and school food authorities, and provide technical assistance and oversight.
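To make the income thresholds above concrete, here is a minimal sketch of the eligibility test as the report states it. It is illustrative only: the poverty guideline used in the examples is a placeholder (the real guidelines vary by household size and year), and categorical eligibility through the three named assistance programs is reduced to a single flag.

```python
# Illustrative sketch of the free/reduced-price eligibility test described
# above. The $17,000 guideline in the examples is a placeholder, not a real
# figure; actual guidelines vary by household size and year.
FREE_CUTOFF = 1.30     # at or below 130% of the poverty level -> free
REDUCED_CUTOFF = 1.85  # above 130% up to 185% -> reduced price

def meal_category(household_income, poverty_guideline, in_assistance_program=False):
    """Classify a child's meals as 'free', 'reduced', or 'full' price."""
    if in_assistance_program:
        # Households in Food Stamps, Temporary Assistance for Needy Families,
        # or the Food Distribution Program on Indian Reservations qualify
        # without an income test (modeled here as free meals).
        return "free"
    ratio = household_income / poverty_guideline
    if ratio <= FREE_CUTOFF:
        return "free"
    if ratio <= REDUCED_CUTOFF:
        return "reduced"
    return "full"

print(meal_category(20_000, 17_000))  # ratio ~1.18 -> 'free'
print(meal_category(28_000, 17_000))  # ratio ~1.65 -> 'reduced'
print(meal_category(40_000, 17_000))  # ratio ~2.35 -> 'full'
```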
State agencies, usually departments of education, are responsible for the statewide administration of the program, including disbursing federal funds and monitoring the program. At the local level, two entities are involved—the individual school and organizations called school food authorities, which manage school food services for one or more schools. School food authorities have flexibility in how they carry out their administrative responsibilities and can decide whether to delegate some tasks to the schools.

To receive program reimbursement, schools and school food authorities must follow federal guidelines for processing applications for free and reduced-price meals, verifying eligibility for free or reduced-price meals, and counting and reporting all reimbursable meals served, whether full-price, reduced-price, or free. This means processing an application for most participants in the free and reduced-price programs, verifying eligibility for at least a sample of approved applications, and keeping daily track of meals provided. These processes comprise only a small part of the federal school meal programs' administrative requirements. According to a USDA report, school food authorities spend the majority of their time on other administrative processes, including keeping daily meal production records and maintaining records documenting that the program is nonprofit as required by regulations. The data we were asked to obtain focus on the participant eligibility and meal counting and reimbursement processes and do not include estimates for other administrative tasks, which are outside the scope of the request.

The federal budget provides funds separate from program dollars to pay for administrative processes at the federal and state level. In contrast, officials at the local level pay for administrative costs from program dollars that include federal and state funding and student meal payments.

Districts and schools that participate in the school meal programs vary in terms of locale, size of enrollment, percent of children approved for free and reduced-price meals, and types of meal counting systems used. We selected 10 districts and 20 schools located in rural areas, small towns, mid-size central cities, urban fringe areas of mid-size and large cities, and large central cities. At the districts, enrollment ranged from 1,265 to 158,150 children, while at the 20 schools, it ranged from 291 to 2,661 children. The rate of children approved for free and reduced-price meals ranged from 16.7 to 74.5 percent at the districts and from 10.5 to 96.5 percent at the schools. Nine of these schools used electronic meal counting systems. Table 3 summarizes the characteristics of selected districts and schools.

For school year 2000-01, the estimated application process costs at the federal and state levels were much less than 1 cent per program dollar, and the median cost at the local level was 1 cent per program dollar. (See table 4.) At the federal and state levels, costs related to the application process were primarily for tasks associated with providing oversight, issuing guidance, and training throughout the year. At the local level, the costs varied, the tasks were primarily done at the beginning of the school year by the school food authorities, and different staff performed the tasks. Our limited number of selected schools differed in many aspects, making it difficult to determine reasons for most cost differences, except in a few instances.
The estimated federal costs for performing the duties associated with the application process were small in relation to the program dollars. FNS headquarters estimated its costs were about $358,000. When compared with the almost $8 billion in program dollars that FNS administered throughout the 2000-01 school year, these costs were much less than 1 cent per program dollar. However, these costs did not include costs for FNS's seven regional offices. At the one region we reviewed, which administered about $881 million in program dollars, estimated costs were about $72,000 for this time period.

FNS's costs were related to its overall program management and oversight duties. FNS officials said that they performed duties and tasks related to the application process throughout the year. The primary tasks and duties performed by FNS headquarters and/or regional staff included the following:

- Updating and implementing regulations related to the application process.
- Revising eligibility criteria.
- Reviewing state application materials and eligibility data.
- Providing training to states.
- Responding to questions from states.
- Conducting or assisting in reviews of the application process at the state and school food authority levels, and monitoring and reporting review results.

Estimated costs incurred by the five selected states ranged from $53,000 to $798,000 for performing tasks related to the application process, while the total program dollars administered ranged from $122 million to $1.1 billion. For four of the five states we reviewed, total application costs were generally in proportion to the program dollars administered. However, the estimated application costs for one state were higher than for other selected states with significantly larger programs. Officials from this state attributed these higher costs to the large number of districts in that state compared with most other states.

At the state level, costs were incurred primarily for providing guidance and training to school food authority staff and for monitoring the process. Just as at the federal level, state-level officials said that they performed their application process duties throughout the year. These tasks included updating agreements with school food authorities to operate school meal programs, preparing prototype application forms and letters of instruction to households and providing these documents to the school food authorities, and training managers from the school food authorities. State officials also reviewed the application process as part of required reviews performed at each school food authority every 5 years.

For the sites we reviewed, the estimated median cost at the local level to perform application process tasks was 1 cent per program dollar and ranged from less than half a cent to about 3 cents. The school food authorities incurred most of the application process costs—from about $3,000 to nearly $160,000—and administered program dollars ranging from about $315,000 to nearly $18 million. Not all schools incurred application process costs, but for those that did, these costs ranged from over $100 to as much as $3,735. The schools reviewed were responsible for $65,000 to $545,000 in program dollars. Table 5 lists the estimated application process costs, program dollars, and cost per program dollar for each of the school food authorities and schools included in our review.
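As a quick check on the figures above, the cost-per-program-dollar metric used throughout this report is a simple ratio. The sketch below is our restatement of that arithmetic, applied to the FNS headquarters and regional numbers just cited; it is not code from the report.

```python
# The cost-per-program-dollar metric used throughout this report, applied
# to the figures cited above.
def cents_per_program_dollar(admin_cost, program_dollars):
    return admin_cost / program_dollars * 100  # dollars -> cents

print(round(cents_per_program_dollar(358_000, 8_000_000_000), 4))  # 0.0045 (FNS headquarters)
print(round(cents_per_program_dollar(72_000, 881_000_000), 4))     # 0.0082 (one FNS region)
```

Both results are tiny fractions of a cent, which is what the report summarizes as "much less than 1 cent per program dollar."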
At the local level, the costs associated with conducting the application process for free and reduced-price meals were primarily related to the following tasks:

- Downloading the prototype application and household instruction letter from the state's Web site and making copies of it before the school year begins.
- Sending the applications and household instruction letters home with children at the beginning of the school year or mailing them to the children's homes.
- Collecting completed applications that were either returned to school or mailed to the district office.
- Reviewing applications and returning those with unclear or missing information, or calling applicants for the information.
- Making eligibility determinations for free or reduced-price meals.
- Sending letters to applicants with the results of eligibility determinations for free or reduced-price meals.
- Preparing rosters of eligible children.

Most of the application process tasks were performed at the beginning of the school year because parents must complete a new application each year in order for their children to receive free or reduced-price meals. Some applications are submitted throughout the school year for newly enrolled or transferred children or children whose families have changes to their household financial status. Program regulations direct parents to notify school officials when there is a decrease in household size or an increase in household income of more than $50 per month or $600 per year.

Staff at 8 of the 10 school food authorities performed most of the application tasks for all schools that they managed. For the 2 other school food authorities, the schools reviewed performed most of the application tasks. Sixteen of 20 schools distributed and collected the applications. However, 4 schools did not distribute applications because their school food authorities mailed applications to households instead.

Various staff supported the application process at the school food authorities and the schools. Two school food authorities hired temporary workers to help process the applications at the start of the school year, and the costs at these locations were below the median. Several schools involved various nonfood service staff in the process. At one school, guidance counselors and teachers helped distribute and collect applications. At another school, a bilingual community resource staff person made telephone calls to families to help them apply for free and reduced-price meals. Clerical workers copied and pre-approved applications at two schools, and at another school, the school secretary collected the applications and made eligibility determinations.

While the variation in the staff assigned to perform application duties may account for some cost differences, the limited number of selected schools and their related school food authorities differed in many aspects, making it difficult to determine reasons for most cost differences, except in a few instances. In one case, we were able to compare two schools and their related school food authorities because the two schools had some similar characteristics, including size of school enrollment, grade span, and percentage of children approved for free and reduced-price school meals. However, the school food authorities differed in size and locale. At these two schools, the combined costs—costs for the school and its share of the related school food authority's costs for processing applications—differed.
The combined costs at one school were almost 3 cents per program dollar, while the combined costs at the other school were less than 1 cent per program dollar. The school with the higher costs enlisted teachers and guidance counselors to help hand out and collect applications and was part of a smaller school food authority that used a manual process to prepare a roster of eligible children. The other school did not perform any application process tasks, since these tasks were done centrally at the school food authority. This school was part of a district that had a much higher enrollment and an electronic system to prepare a roster of eligible children. For the remaining 18 schools, we were generally not able to identify reasons for cost differences.

For the 2000-01 school year, the estimated costs per program dollar for the verification process were much less than 1 cent at the federal, state, and local levels. (See table 6.) At the federal and state levels, the costs of verifying eligibility for free and reduced-price meals were primarily related to oversight tasks performed throughout the year. At the local level, duties associated with the verification process were done in the fall of the school year. Only one school food authority significantly involved its schools in the verification process. At the 10 selected school food authorities, the verification process resulted in some children being moved to other meal categories because households did not confirm the information on the application or did not respond to the request for verification documentation. FNS has implemented several pilot projects for improving the application and verification processes and plans to complete these projects in 2003.

For school year 2000-01, the estimated costs at the federal and state levels for performing duties associated with the verification process were much less than 1 cent per program dollar. The estimated costs at FNS headquarters of about $301,000 and the estimated costs at the selected FNS region of about $28,000 were small in relation to the program dollars administered—about $8 billion and $881 million, respectively.

FNS performed a number of tasks to support the verification process. FNS officials said that during the year the primary tasks that staff at headquarters and/or regions performed included the following:

- Updating regulations and guidance related to the verification process.
- Holding training sessions.
- Responding to questions from states and parents.
- Clarifying verification issues.
- Reviewing state verification materials and data.
- Conducting or assisting in reviews of the process at the state and school food authority levels.
- Monitoring and reporting review results.

Costs incurred by the selected states ranged from about $5,000 to $783,000 for performing tasks related to the verification process. During this period, these states administered $122 million to $1.1 billion in program dollars. States incurred costs associated with overseeing and monitoring the verification process and performed many tasks throughout the year. The primary state task involved reviews of the verification process, where states determined whether the school food authorities appropriately selected and verified a sample of their approved free and reduced-price applications by the deadline, confirmed that the verification process was completed, and ensured that verification records were maintained. In addition to the review tasks, state officials provided guidance and training to school food authority staff.
The selected school food authorities’ costs ranged from $429 to $14,950 for the verification process tasks, while costs at selected schools, if any, ranged from $23 to as much as $967. Schools reported few, if any, costs because they had little or no involvement in the verification process. During school year 2000-01, the school food authorities administered program dollars ranging from about $315,000 to over $28 million, and the schools were responsible for program dollars ranging from about $65,000 to $545,000. The estimated median cost at the local level—school food authorities and schools combined—was much less than 1 cent per program dollar. Table 7 lists the estimated verification process costs, program dollars, and cost per program dollar for each of the school food authorities and schools included in our review. At the local level, costs associated with verifying approved applications for free and reduced-price school meals were for duties performed primarily in the fall of the school year. Each year school food authority staff must select a sample from the approved applications on file as of October 31 and complete the verification process by December 15. According to USDA regulations, the sample may be either a random sample or a focused sample. Additionally, the school food authority has an obligation to verify all questionable applications, referred to as verification “for cause.” However, any verification that is done for cause is in addition to the required sample. Furthermore, instead of verifying a sample of applications, school food authorities may choose to verify all approved applications. Also, school food authorities can require households to provide information to verify eligibility for free and reduced-price meals at the time of application. This information is to be used to verify applications only after eligibility has been determined based on the completed application alone. In this way, eligible children can receive free or reduced-price school meals without being delayed by the verification process. Of the 10 selected school food authorities, 7 used a random sample method and 3 used a focused sample method. At the local level, the costs associated with verifying approved applications for free and reduced-price meals were primarily related to the following tasks: Selecting a sample from the approved applications on file as of October 31. Providing the selected households with written notice that their applications have been selected for verification and that they are required to submit written evidence of eligibility within a specified period of time. Sending follow-up letters to households that do not respond. Comparing documentation provided by the household, such as pay stubs, with information on the application to determine whether the school food authority’s original eligibility determination is correct. Locating the files of all the siblings of a child whose eligibility status has changed if a school district uses individual applications instead of family applications. Notifying the households of any changes in eligibility status. Generally, the selected school food authorities performed most of the verification tasks, while the schools had little or no involvement in the process. 
However, the schools in one school food authority did most of the verification tasks, and the tasks performed by the school food authority were limited to selecting the applications to be verified and sending copies of parent notification letters and verification forms to the schools for the schools to distribute. The costs at these two schools were less than 1 cent per program dollar.

The verification process is intended to help ensure that only eligible children receive the benefit of free or reduced-price meals, and at the locations we visited, the verification process resulted in changes to the eligibility status for a number of children. During the verification process, generally, household income information on the application is compared with related documents, such as pay stubs or social security payment information. When the income information in the application cannot be confirmed or when households do not respond to the request for verification documentation, the eligibility status of children in the program is changed. That is, children are switched to other meal categories, such as from free to full price. Children can also be determined to be eligible for higher benefits, such as for free meals rather than reduced-price meals.

At the locations we visited, the verification process resulted in changes to the eligibility status for a number of children. For example, at one school food authority in a small town with about half of its children approved for free and reduced-price school meals, 65 of 2,728 approved applications were selected for verification, and 24 children were moved from the free meals category to either the reduced-price or full-price meals categories, while 1 child was moved to the free category. At another school food authority in the urban fringe of a large city, with about 40 percent of its children approved for free and reduced-price school meals, 40 of about 1,100 approved applications were selected for verification and 8 children were moved to higher-priced meal categories. According to program officials, some children initially determined to be ineligible for free or reduced-price meals are later found to be eligible when they reapply and provide the needed documents. We did not determine whether any of the children were subsequently reinstated to their pre-verification status.

The accuracy of the numbers of children who are approved for free and reduced-price meals affects not only the school meals program but also other federal and state programs. A USDA report, based on the agency's data on the number of children approved for free meals and data from the U.S. Bureau of the Census, indicates that about 27 percent more children are approved for free meals than are income-eligible. As such, the federal reimbursements for the school meals program may not be proper. Furthermore, some other programs that serve children in poverty distribute funds or resources based on the number of children approved to receive free or reduced-price meals. For example, in school year 1999-2000 nine states used free and reduced-price meals data to distribute Title I funds to their small districts (those serving areas with fewer than 20,000 total residents). In addition, districts typically use free and reduced-price meals data to distribute Title I funds among schools. At the state level, some state programs also rely on free and reduced-price lunch data. For example, Minnesota distributed about $7 million in 2002 for a first-grade preparedness program based on these data.
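The sampling-and-reclassification mechanics described above can be reduced to a short sketch. It is illustrative only: the record layout is hypothetical, and moving a non-responding household to full price is one plausible way to model what the report describes simply as a change in eligibility status.

```python
import random

# Illustrative sketch of the verification workflow described above: draw a
# random sample from the approved applications on file as of October 31,
# request income documentation, and reclassify children whose documents do
# not support the approved category. The record layout is hypothetical.
def verify_random_sample(approved_apps, sample_size):
    sample = random.sample(approved_apps, min(sample_size, len(approved_apps)))
    status_changes = []
    for app in sample:
        documented = app.get("documented_category")  # e.g., from pay stubs
        if documented is None:
            # Household did not respond; modeled here as a move to full price.
            status_changes.append((app["child"], "full"))
        elif documented != app["approved_category"]:
            # Documents support a different category (higher or lower benefit).
            status_changes.append((app["child"], documented))
    return status_changes

apps = [{"child": "A", "approved_category": "free", "documented_category": "reduced"},
        {"child": "B", "approved_category": "free", "documented_category": "free"},
        {"child": "C", "approved_category": "reduced", "documented_category": None}]
print(verify_random_sample(apps, 3))  # e.g., [('A', 'reduced'), ('C', 'full')]
```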
As of July 2002, FNS had three pilot projects underway for improving the application and verification processes. These projects are designed to assess the value of (1) requesting income documentation and performing verification at the time of application, (2) verifying additional sampled applications if a specified rate of ineligible children is identified in the original verification sample, and (3) verifying the eligibility of children who were approved for free school meals based on information provided by program officials on household participation in the Food Stamp, Temporary Assistance for Needy Families, or Food Distribution on Indian Reservations programs, a process known as direct certification. FNS plans to report on these projects in 2003.

According to officials from three organizations that track food and nutrition issues, the American School Food Service Association, the Center on Budget and Policy Priorities, and the Food Research and Action Center, requesting income documentation at the time of application would likely add to application process costs and may create a barrier for eligible households. Having to provide such additional information can complicate the school meals application process and may cause some eligible households not to apply. In 1986, we reported this method as an option for reducing participation of ineligible children in free and reduced-price meal programs, but recognized that it could increase schools' administrative costs, place an administrative burden on some applicants, or pose a barrier to potential applicants.

For the 2000-01 school year, costs for meal counting and claiming reimbursement at the federal and state levels were much less than 1 cent per program dollar. The median cost was nearly 7 cents at the local level and was the highest cost of the three processes. (See table 8.) The federal and state costs were incurred for providing oversight and administering funds for reimbursement throughout the school year. Similarly, costs at the local level were incurred throughout the school year because the related duties, which apply to all reimbursable meals, were performed regularly. A number of factors come into play at the local level that could affect costs; however, except in a few instances, we could not identify any clear pattern as to how these factors affected meal counting and reimbursement claiming costs.

At the federal and state levels, the costs associated with the meal counting and reimbursement claiming processes were much less than 1 cent per program dollar. FNS headquarters estimated that the costs associated with its meal counting and reimbursement claiming tasks were $254,000, and the costs of one FNS region were estimated at $93,000 in school year 2000-01. In comparison, FNS administered $8 billion and the region administered $881 million in the school meals program. FNS's costs for meal counting and claiming reimbursement were less than its costs for application processing and verification tasks.

FNS's meal counting and reimbursement costs were primarily incurred for providing technical assistance, guidance, monitoring, and distributing federal funds to state agencies that administer school food programs. FNS distributes these funds through the regional offices, with the regions overseeing state and local agencies and providing guidance and training. Prior to the beginning of the fiscal year, FNS reviewed meal reimbursement requests from the prior year to project funding needs for each state.
FNS awarded grants and provided letters of credit to states. Each month, states obtained reimbursement payments via the letters of credit, and FNS reviewed reports from states showing the claims submitted. At the end of the year, FNS closed out the grants and reconciled claims submitted with letter-of-credit payments. In addition to these tasks, FNS issued guidance, provided training, and responded to inquiries. Also, FNS regional staff conducted financial reviews of state agencies, such as reviews of reimbursement claim management, and assisted state agencies during reviews of school food authorities. For the five states, the cost per program dollar was also considerably less than 1 cent for the 2000-01 school year. The state agencies’ cost estimates ranged from $51,000 to $1 million, with the size of their programs ranging from $122 million to $1.1 billion. In all five states, the costs for meal counting and reimbursement tasks exceeded the costs for verification activities. In four of the five states, these costs were less than the costs for application activities. State agencies are responsible for operating a system to reimburse school food authorities for the meals served to children. Of the five state agencies in our sample, four had systems that allowed school food authorities to submit their monthly claims electronically, although one state agency’s system began operating in the middle of the 2000-01 school year. The other state agency received claims from its school food authorities through conventional mail services. The state agencies reviewed claims and approved payments as appropriate and conducted periodic reviews of school food authority meal counting and reimbursement activities. The median cost for meal counting and reimbursement claiming at the local level—school food authorities and schools—was about 7 cents per program dollar and ranged from 2 cents to 14 cents. The estimated meal counting and reimbursement claiming costs at the 10 selected school food authorities ranged from $2,461 to $318,436, and ranged from $1,892 to $36,986 for the 20 schools. Schools usually incurred a higher share of the combined local cost than their respective school food authorities; at 18 of the 20 schools reviewed, the school incurred more than half of the combined cost, and 14 schools incurred more than 75 percent. For example, one school’s costs were $19,000, about 90 percent of the combined school and school food authority costs. Table 9 lists the estimated costs for meal counting and obtaining reimbursement, program dollars, and cost per program dollar for each of the school food authorities and schools included in our review. The local level costs were much higher than the costs for application processing and verification because the duties were performed frequently throughout the school year, and costs were incurred for all reimbursable meals served under the program. As such, these costs do not reflect separate costs by type of meal served. At the schools, each meal was counted when served, the meals served were tallied each day, and a summary of the meals served was sent periodically to the school food authority. The school food authorities received and reviewed reports from their schools at regular intervals, including ensuring that meal counts were within limits based on enrollment, attendance, and the number of children eligible for free, reduced-price, and full-price lunches. 
On the basis of these data, the school food authorities submitted claims for reimbursement to the state agency each month of the school year. Program officials noted that even without the federal requirement for meal counting by reimbursement category, schools would still incur some meal counting costs in order to account for the meals served. Most of the costs at the local level were for the labor to complete meal counting and claiming tasks. Those school food authorities with electronic meal counting systems reported substantial costs related to purchasing, maintaining, and operating meal counting computer systems and software. In addition to the frequency of the duties, another reason for the higher cost is that, unlike application and verification, meal counting and claiming reimbursement pertains to all reimbursable meals served—free, reduced-price, and full price. For example, during school year 2000-01, FNS provided reimbursement for over 2 billion free lunches, about 400 million reduced-price lunches, and about 2 billion full-price lunches. Costs for meal counting and reimbursement claiming varied considerably at the local level—school food authorities and schools combined. The costs per program dollar ranged from 2 cents to 14 cents, compared with the costs per program dollar for the other processes, which were much more consistent—from about half a cent to 3 cents for the application process and from less than 1 cent to 1 cent for the verification process. Various factors may contribute to this range of costs at the local level. For example, larger enrollments may allow economies of scale that lower the cost of food service operations. Use of an electronic meal counting system, as opposed to a manual system, has the potential to affect meal counting costs, since electronic systems require the purchase of equipment, software, and ongoing maintenance. Food service procedures may also have a bearing on costs, such as the number and pay levels of cashiers and other staff performing meal counting and reimbursement claiming tasks. The interaction of these factors and our limited number of selected sites prevents a clear explanation for the differences in estimated costs per program dollar incurred at the selected locations reviewed, except in a few instances. For example, at the local level, the school with the highest combined meal counting cost per program dollar (14 cents, including its share of the school food authority costs) had an enrollment of 636 children, relatively few of its children approved for free and reduced-price meals (14 percent), and a manual meal counting system. The school with the lowest combined meal counting cost (2 cents per program dollar) had about twice the enrollment, 96 percent of its children approved for free and reduced-price meals, and an electronic meal counting system. Both schools were elementary schools in mid-size city locales. For the remaining 18 schools in the sample, we saw no distinct relationship between cost and these factors. We provided a draft of this report to USDA’s Food and Nutrition Service for review and comment. We met with agency officials to discuss the report. These officials stated that written comments would not be provided. However, they provided technical comments that we incorporated where appropriate. We are sending copies of this report to the Secretary of Agriculture, appropriate congressional committees, and other interested parties. We will also make copies available to others upon request. 
In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have questions concerning this report, please call me on (202) 512-7215. Key contacts and staff acknowledgments are listed in appendix III. This appendix discusses cost estimates for the application, verification, and meal counting and reimbursement claiming processes. The scope of our review included the National School Lunch Program and the School Breakfast Program as they relate to public schools. To the extent that we could, we excluded from our analyses other federal child nutrition programs and nonprofit private schools and residential child care institutions, which also participate in the school meals programs. Our review included the paper application process and did not include the direct certification of children for free and reduced-price school meals. Our focus was school year 2000-01, the most recent year for which data were available. The data we collected relate to that year. National data on the costs of the application, verification, and meal counting and reimbursement claiming processes are not available for the federal, state, or local levels, since these costs are not tracked separately. Therefore, we developed estimates of these costs on the basis of cost information provided by program managers and staff. To obtain data on the costs related to applying for free and reduced-price school meals, verifying approved applications, and counting meals and claiming reimbursements, we visited selected locations, including 5 state agencies, 10 school food authorities in public school districts, and 2 schools in each district. We chose sites that would provide a range of characteristics, such as geographical location, the size of student enrollment, the rate of children approved for free and reduced-price meals, and the type of meal counting system. We selected districts with schools that were located in rural areas, small towns, mid-size central cities, urban fringe areas of mid-size and large cities, and large central cities based on locale categories assigned to their respective districts by the National Center for Education Statistics. To include districts of various sizes in our study, we selected 2 districts in each selected state—1 with enrollment over 10,000 and 1 with enrollment under 10,000, except in Ohio. In Ohio, we selected 2 districts with enrollments of less than 5,000, since almost 90 percent of the public school districts nationwide have enrollments under that amount. We also selected districts with rates of children approved for free and reduced-price meals that ranged from 16.7 to 74.5 percent and schools with rates that ranged from 10.5 to 96.5 percent. We worked with state and school food authority officials at our selected districts to select a mix of schools that had either manual or electronic meal counting systems. Electronic meal counting systems were used at 9 selected schools. We also obtained information from officials at the Food and Nutrition Service’s (FNS) headquarters and one regional office. We selected one regional office that, according to FNS officials, had the best data available to develop estimates for the application, verification, and meal counting processes. We developed interview guides to use at selected sites. We also met with FNS and professional association officials to obtain their comments on these interview guides, and we revised them where appropriate. 
Using these guides, we interviewed program managers and staff at the selected locations to obtain information on tasks associated with the application, verification, and meal counting and reimbursement claiming processes for the 2000-01 school year. We obtained estimated labor and benefit costs associated with these tasks. We also obtained other estimated nonlabor costs, such as those for translating, copying, printing, mailing, data processing, travel, hardware, software, and automated systems development. On the basis of this information, we calculated estimated costs associated with each process, that is, application, verification, and meal counting and reimbursement claiming. Using our cost estimates, we calculated costs relative to program dollars. Program dollars at the federal level for both FNS headquarters and the one region included the value of reimbursements for school meals and commodities, both entitlement and bonus, for public and nonprofit private schools and residential child care institutions because FNS was not able to provide program dollars specific to public schools. However, according to FNS officials, reimbursements and commodities provided to public schools make up the vast majority of these dollars. Program dollars at the state level included this federal funding specific to public school districts for school meals and state school meal funding. Information specific to public school districts is available at the state level because claims are made separately by each school food authority. At the local level, program dollars included the amounts children paid for the meals as well as federal and state funding. Since some school food authorities could not provide the dollar value of commodities used at selected schools, we assigned a dollar value of commodities to each of these schools based on their proportion of federal reimbursements. We included federal and state program funding and the amounts children paid for the meals because these are the revenues related to the sale of reimbursable meals. Because the definition of program dollars differed by level, we were unable to total the costs for the three levels—federal, state, and local. However, since the definition of program dollars was the same for school food authorities and schools, we were able to calculate the cost per program dollar at the local level for each school. To calculate these costs, we (1) divided the school program dollars by the school food authority program dollars; (2) multiplied the resulting amount by the total school food authority costs for each process—application, verification, and meal counting and reimbursement claiming—to determine the portion of the costs for each process at the school food authority that was attributable to each selected school; (3) added these costs to the total costs for each of the schools; and (4) divided the resulting total amount by the program dollars for each selected school to arrive at the cost per program dollar at the local level for each school. We calculated a median cost per program dollar for school food authorities and schools separately for each process—application, verification, and meal counting and reimbursement claiming. We also calculated a median cost for each process for school food authorities and schools combined to arrive at local level medians for each process. The cost estimates do not include indirect costs. 
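Written out as arithmetic, the four-step allocation above is compact. The sketch below is a minimal rendering of those steps; the function name and every input figure are hypothetical, chosen only to produce a rate in the reported range.

```python
# Minimal rendering of the four-step local cost allocation described above.
# All names and figures are hypothetical.

def local_cost_per_program_dollar(school_program_dollars: float,
                                  sfa_program_dollars: float,
                                  sfa_process_cost: float,
                                  school_process_cost: float) -> float:
    # (1) The school's proportion of the school food authority's program dollars.
    share = school_program_dollars / sfa_program_dollars
    # (2) The portion of the authority's process cost attributable to the school.
    allocated_sfa_cost = share * sfa_process_cost
    # (3) Add the school's own costs for the process.
    total_cost = allocated_sfa_cost + school_process_cost
    # (4) Divide by the school's program dollars.
    return total_cost / school_program_dollars

# A school with $250,000 of its authority's $1,000,000 in program dollars:
rate = local_cost_per_program_dollar(250_000, 1_000_000, 20_000, 12_500)
print(f"{rate * 100:.1f} cents per program dollar")  # 7.0, near the local median
```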
For 2 of the 10 school food authorities, indirect rates were not available, and in other cases the rates varied significantly because of differing financial management and accounting policies. Also, for 2 of the 10 school food authorities, including indirect rate calculations could have resulted in some costs being double counted because, during our interviews, staff provided estimates for many of the tasks that would have been included in the indirect rates. Depreciation costs for equipment, such as computer hardware and software, were generally not calculated or maintained by states and school food authorities. Therefore, we obtained the costs for equipment purchased in the year under review. We did not obtain costs for equipment at the federal level because these costs could not be reasonably estimated, since equipment was used for purposes beyond the processes under review. We obtained information on the verification pilot projects from FNS officials. We also obtained information from the American School Food Service Association, the Center on Budget and Policy Priorities, and the Food Research and Action Center on several options related to the program, one of which was the same as one of the pilot projects. We did not verify the information collected for this study. However, we made follow-up calls in cases where data were missing or appeared unusual. The results of our study cannot be generalized to schools, school food authorities, or states nationwide. Program dollars include cash reimbursements and commodities (bonus and entitlement) at the federal level, the amounts provided to school food authorities for these programs at the state level, and the amounts students paid for their meals at the local level. In addition to the individuals named above, Peter M. Bramble, Robert Miller, Sheila Nicholson, Thomas E. Slomba, Luann Moy, and Stanley G. Stenersen made key contributions to this report. | Each school day, millions of children receive meals and snacks provided through the National School Lunch and National School Breakfast Programs. Any child at a participating school may purchase a meal through these school meal programs, and children from households that apply and meet established income guidelines can receive these meals free or at a reduced price. The federal government reimburses the states, which in turn reimburse school food authorities for each meal served. During fiscal year 2001, the federal government spent $8 billion in reimbursements for school meals. The Department of Agriculture's Food and Nutrition Service, state agencies, and school food authorities all play a role in these school meal programs. GAO reported that costs for the application, verification, and meal counting and reimbursement processes for the school meal programs were incurred mainly at the local level. Estimated federal and state-level costs during school year 2000-2001 for these three processes were generally much less than 1 cent per program dollar administered. At the local level--selected schools and the related school food authorities--the median estimated cost for these processes was 8 cents per program dollar and ranged from 3 cents to 16 cents per program dollar. The largest costs at the local level were for counting meals and submitting claims for reimbursement. Estimated costs related to the application process were the next largest, and estimated verification process costs were the lowest of the three. |
SSA’s disability programs provide cash benefits to people with long-term disabilities. The DI program provides monthly cash benefits and Medicare eligibility to severely disabled workers; SSI is an income assistance program for blind and disabled people. The law defines disability for both programs as the inability to engage in substantial gainful activity because of a severe physical or mental impairment that is expected to last at least 1 year or result in death. Both DI and SSI are administered by SSA and state disability determination services (DDS). SSA field offices determine whether applicants meet the nonmedical criteria for eligibility; at the DDSs, a disability examiner and a medical consultant (physician or psychologist) make the initial determination of whether the applicant meets the definition of disability. Denied claimants may ask the DDS to reconsider its finding and, if denied again, may appeal to an ALJ within SSA’s Office of Hearings and Appeals (OHA). The ALJ usually conducts a hearing at which applicants and medical or vocational experts may testify and submit new evidence. Applicants whose appeals are denied may request review by SSA’s Appeals Council and may further appeal the Council’s decision in federal court. Between fiscal years 1986 and 1996, the increasing number of appealed cases caused workload pressures and processing delays; appealed cases increased more than 120 percent over that period. In the last 3 years alone, average processing time for appealed cases rose from 305 days in fiscal year 1994 to 378 days in fiscal year 1996 and remained essentially the same for the first quarter of fiscal year 1997. In addition, “aged” cases (those taking 270 days or more for a decision) increased from 32 percent to almost 43 percent of the backlog. In addition to the backlog, high ALJ allowances (in effect, “reversals” of DDS decisions to deny benefits) have been a subject of concern for many years. Although the current ALJ allowance rate has dropped from 75 percent in fiscal year 1994, ALJs still allow about two-thirds of all disability claims they decide. Because chances for award at the appeals level are so favorable, there is an incentive for claimants to appeal. For several years, about three-quarters of all claimants denied at the DDS reconsideration level have appealed their claims to the ALJ level. In 1994, SSA adopted a long-term plan to redesign the disability decision-making process to improve its efficiency and timeliness. As a key part of this plan, SSA developed initiatives to achieve similar decisions on similar cases regardless of whether the decisions are made at the DDS or the ALJ level. In July 1996, several of these initiatives, called “process unification,” were approved for implementation by SSA’s Commissioner. SSA expects that process unification will result in correct decisions being made at the earliest point possible, substantially reducing the proportion of appealed cases and ALJ allowance rates as well. Because SSA expects that implementation of its redesigned disability decision-making process will not be completed until after the year 2000, SSA developed a Short Term Disability Project Plan (STDP) to reduce the existing backlog by introducing new procedures and reallocating staff. STDP is designed to expedite processing of claims in a way that will support redesign and achieve some near-term results in reducing the backlog. 
SSA expects that STDP’s major effect will come primarily from two initiatives—regional screening units and prehearing conferencing. In the screening units, DDS staff and OHA attorneys work together to identify claims that could be allowed earlier in the appeals process. Prehearing conferencing shortens processing time for appealed cases by assigning OHA attorneys to perform limited case development and review cases to identify those that could potentially be allowed without a formal hearing. The plan called for reducing the backlog to 375,000 appealed cases by December 31, 1996. Despite SSA’s attempts to reduce the backlog through its STDP initiatives, the agency did not reach that goal. SSA attributes its difficulties in meeting its backlog target to start-up delays, overly optimistic projections of the number of appealed cases that would be processed, and an unexpected increase in the number of appealed cases. The actual backlog in December was about 486,000 cases and has risen in the last few months to 491,000 cases, still about 116,000 over the goal. Although SSA did not reach its backlog goal, about 98,000 more cases may have been added to the backlog if STDP steps had not been undertaken. The contribution made by STDP underscores the need for SSA to continue its short-term effort while moving ahead to address the disability determination process in a more fundamental way in the long term. In addition to the backlog problem, SSA’s decision-making process has produced a high degree of inconsistency between DDS and ALJ awards, as shown in table 1. Although award rates representing DDS decision-making vary by impairment, ALJ award rates are high regardless of the type of impairment. For example, sample data showed that DDS award rates ranged from 11 percent for back impairments to 54 percent for mental retardation. In contrast, ALJ award rates averaged 77 percent for all impairment types, with far less variation among impairment types. SSA’s process requires adjudicators to use a five-step sequential evaluation process in making their disability decisions (see table 2). Although this process provides a standard approach to decision-making, determining disability often requires that a number of complex judgments be made by adjudicators at both the DDS and ALJ levels. The questions asked in the sequential process are: (1) Is the claimant engaging in substantial gainful activity? (2) Does the claimant have an impairment that has more than a minimal effect on the claimant’s ability to perform basic work tasks and is expected to last at least 12 months? (3) Do the medical facts alone show that the claimant’s impairment meets or equals the medical criteria for an impairment in SSA’s Listing of Impairments? (4) Comparing the claimant’s residual functional capacity with the physical and mental demands of the claimant’s past work, can the claimant perform his or her past work? (5) Based on the claimant’s residual functional capacity and any limitations that may be imposed by the claimant’s age, education, and skill level, can the claimant do work other than his or her past work? As the application proceeds through the five-step process, claimants may be denied benefits at any step, ending the process. Steps 1 and 2 ask questions about the claimant’s work activity and the severity of the claimant’s impairment. 
If the reported impairment is judged to be severe, adjudicators move to step 3. At this step, they compare the claimant’s condition with a listing of medical impairments developed by SSA. Claimants whose conditions meet or are medically equivalent to the listings are presumed by SSA to be unable to work and are awarded benefits. Claimants whose conditions do not meet or equal the listings are then assessed at steps 4 and 5, where decisions must be made about the claimant’s ability to perform prior work and any other work that exists in the national economy. To do this, adjudicators assess the claimant’s capacity to function in the workplace. These assessments are based on the evidence, including physician opinions and reported symptoms, such as pain. Mental impairment assessments include judgments about the claimant’s ability to understand, remember, and respond appropriately to supervision and normal work pressures. For physical impairments, adjudicators judge the claimant’s ability to walk, sit, stand, and lift. To facilitate this, SSA has defined five levels of physical exertion ranging from very heavy to sedentary. However, for those claimants unable to perform even sedentary activities, adjudicators may determine that a claimant can perform “less than a full range of sedentary” activities, a classification that often results in a benefit award. Our analysis found that differing functional assessments by DDSs and ALJs are the primary reason for most ALJ awards. Since most DDS decisions use all five steps of the sequential evaluation process before denying a claim, almost all DDS denial decisions appealed to ALJs included such a functional assessment. On appeal, the ALJ also follows the same sequential evaluation process as the DDS and also assesses the claimant’s functional abilities in most awards they make. Data from SSA’s ongoing ALJ study indicate that ALJs are much more likely than DDSs to find that claimants have severe limitations in functioning in the workplace (see table 3); in contrast, reviewers using the DDS approach found that less than 6 percent of the cases merited the “less than sedentary” classification. Functional assessment also played a key role in a 1982 SSA study, which controlled for differences in evidence. This study indicated that DDS and ALJ decisionmakers reached different results even when presented with the same evidence. As part of the study, selected cases were reviewed by two groups of reviewers—one group reviewing the cases as ALJs would and the other reviewing the cases as DDSs would. Reviewers using the ALJ approach concluded that 48 percent of the cases should have received awards, while reviewers using the DDS approach concluded that only 13 percent of those same cases should have received awards. The use of medical expertise appears to influence the decisional differences at the DDS and ALJ levels. At the DDS level, medical consultants are responsible for making functional assessments. In contrast, ALJs have the sole authority to determine functional capacity and often rely on claimant testimony and the opinions of treating physicians. Although ALJs may call on independent medical experts to testify, our analysis shows that they do so in only 8 percent of the cases resulting in awards. To help reduce inconsistency, SSA issued nine rulings on July 2, 1996, which were written to address pain and other subjective symptoms, treating source opinions, and assessing functional capacity. 
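For reference, the five-step sequential evaluation described above can be laid out as a short decision procedure. The sketch below is schematic only: the field names are invented, and it reduces to boolean flags what the text stresses are, in practice, complex functional judgments.

```python
# Schematic of the five-step sequential evaluation described above.
# Field names are illustrative; each boolean stands in for a complex judgment.
from dataclasses import dataclass

@dataclass
class Claim:
    engaging_in_sga: bool       # step 1: substantial gainful activity?
    severe_impairment: bool     # step 2: severe, expected to last 12 months?
    meets_listings: bool        # step 3: meets/equals the Listing of Impairments?
    can_do_past_work: bool      # step 4: residual capacity allows past work?
    can_do_other_work: bool     # step 5: age/education/skills allow other work?

def adjudicate(claim: Claim) -> str:
    """Walk the claim through the five steps; a denial at any step ends it."""
    if claim.engaging_in_sga:
        return "deny at step 1"
    if not claim.severe_impairment:
        return "deny at step 2"
    if claim.meets_listings:
        return "award at step 3"
    if claim.can_do_past_work:
        return "deny at step 4"
    return "deny at step 5" if claim.can_do_other_work else "award at step 5"
```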
SSA also plans to issue a regulation to provide additional guidance on assessing functional capacity at both the DDS and ALJ levels, specifically clarifying when a “less than sedentary” classification is appropriate. In addition, based on the nine rulings, SSA completed nationwide process unification training of over 15,000 adjudicators and quality reviewers between July 10, 1996, and February 26, 1997. In the training, SSA emphasized that it expects the “less than sedentary” classification would be used rarely. In the longer term, SSA plans to develop a simplified decision-making process, which will expand the role of functional capacity assessments. Because differences in functional capacity assessments are the primary reason for inconsistent decisions, SSA should proceed cautiously with its plan to expand the use of such assessments. Procedures at the DDS and ALJ levels limit the usefulness of the DDS decision as a foundation for the ALJ decision. Often, ALJs are unable to rely on DDS decisions because they lack supporting evidence and explanations of the reasons for denial, laying a weak foundation for the ALJ decision if the case is appealed. Moreover, although SSA requires ALJs to consider the DDS medical consultant’s assessment of functional capacity, procedures at the DDS level do not ensure that such assessments are clearly explained. In a 1994 study, SSA found that written explanations of critical issues at the DDS level were inadequate in about half of the appealed cases that turned on complex issues. Without a clear explanation of the DDS decision, the ALJ could neither effectively consider it nor give it much weight. At the ALJ level, claimants are allowed to claim new impairments and submit new or additional evidence, which also affects consistency between the two levels. Moreover, in about 10 percent of cases appealed to the ALJ level, claimants switch their primary impairment from a physical claim to a mental claim. In addition, data from a 1994 SSA study show that claimants submit additional evidence to the ALJ in about three-quarters of the sampled cases and that additional evidence was an important factor in 27 percent of ALJ allowances. To address the documentation issues, SSA plans to take steps to ensure that DDS decisions are better explained and are based on a more complete record so that they are more useful if appealed. On the basis of feedback during the process unification training, SSA plans further instructions and training in May 1997 for the DDSs on how and where in the case files they should explain how they reached their decisions. SSA also plans to issue a regulation clarifying the weight given to the DDS medical consultants’ opinions at the ALJ level. To deal with the potential effect of new evidence, SSA plans to return to the DDSs about 100,000 selected cases a year for further consideration when new evidence is introduced at the ALJ level. In cases where the DDS would award benefits, the need for a more time-consuming and costly ALJ decision would be avoided. SSA plans to implement this project in May 1997. Moreover, SSA’s decision to limit such returns to about 100,000 cases may need to be reassessed in light of the potential benefits that could accrue from this initiative. Although SSA has several quality review systems to examine disability decisions, none is designed to identify and reconcile factors that contribute to differences between DDS and ALJ decisions. 
For example, although ALJs are required to consider the opinion of the DDS medical consultant when making their own assessment of a claimant’s functional capacity, such written DDS opinions are often lacking in the case files. Quality reviews at the DDS level do not focus effectively on whether or how well these opinions are explained in the record, despite the potential importance of such medical opinion evidence at the ALJ level. Moreover, SSA reviews too few ALJ awards to ensure that ALJs give appropriate consideration to the medical consultants’ opinions or to identify means to make them more useful to the ALJs. Feedback on these issues could help improve consistency by making the DDS decision a more useful part of the overall adjudication process. To improve consistency, SSA is completing work on a notice of proposed rulemaking, with a target issue date of August 1997 for a final regulation, to establish the basis for reviewing ALJ awards, which would require ALJs to take corrective action on remand orders from the Appeals Council before benefits are paid. SSA has just started conducting preliminary reviews of ALJ awards, beginning with 200 cases a month. After the regulation is issued, it plans to increase the number of cases reviewed per month. SSA has set a first-year target of 10,000 cases to be reviewed, but this reflects only about 3 percent of approximately 350,000 award decisions made by ALJs in 1996. Ultimately, SSA plans to implement quality review measures to provide consistent feedback on the application of policy. By doing this, the agency hopes to ensure that the correct decision is made at the earliest point in the process. At the same time, other mandated workloads are growing: SSA must conduct continuing disability reviews for children under age 18 who are likely to improve and for all low-birthweight babies within the first year of life. In addition, SSA is required to redetermine, using adult criteria, the eligibility of all 18-year-olds on SSI beginning on their 18th birthdays and to readjudicate 332,000 childhood disability cases by August 1997. Finally, thousands of noncitizens and drug addicts and alcoholics could appeal their benefit terminations, further increasing workload pressures. Despite SSA’s Short Term Disability Project Plan, the appealed case backlog is still high. Nevertheless, because the backlog would have been even higher without STDP, SSA will need to continue its effort to reduce the backlog to a manageable level until the agency, as a part of its long-term redesign effort, institutes a permanent process to ensure timely and expeditious disposition of appeals. In addition, SSA is beginning to move ahead with more systemwide changes in its redesign of the disability claims process. In particular, it is on the verge of implementing initiatives to redesign the process, including ones for improving decisional consistency and the timeliness of overall claims processing. However, competing workload demands could jeopardize SSA’s ability to make progress in reducing inconsistent decisions. We urge the agency to follow through on its initiatives to address the long-standing problem of decisional inconsistency with the sustained attention required for this difficult task. To do so, SSA, in consultation with this Subcommittee and others, will need to sort through its many priorities and do a better job of holding itself accountable for meeting its deadlines. Otherwise, plans and target dates will remain elusive goals and may never yield the dual benefits of helping to restore public confidence in the decision-making process and contributing to permanent reductions in backlog. Mr. 
Chairman, this concludes my prepared statement. At this time, I will be happy to answer any questions you or the other Subcommittee members may have. For more information on this testimony, please call Cynthia Bascetta, Assistant Director, at (202) 512-7207. Other major contributors are William Hutchinson, Senior Evaluator; Carol Dawn Petersen, Senior Economist; and David Fiske, Ellen Habenicht, and Carlos Evora, Senior Evaluators. | GAO discussed the Social Security Administration's (SSA) actions to reduce the current backlog of cases appealed to the agency's administrative law judges, focusing on: (1) how functional assessments, differences in procedures, and quality review contribute to inconsistent results between different decisionmakers; and (2) SSA's strategy to obtain greater decisional consistency. 
GAO noted that: (1) GAO's work shows that while SSA has developed broad-based plans to improve the management of its disability programs, many initiatives are just beginning and their effectiveness can be assessed only after a period of full-scale implementation; (2) for example, in the short term, SSA has taken action to try to deal with the backlog crisis, but it is still about 116,000 cases over its December 1996 goal of 375,000 cases; (3) in the longer term, SSA needs to come to grips with the systemic factors causing inconsistent decisions, which underlie the current high level of appealed cases and, in turn, the backlog crisis; (4) for example, GAO found that differences in assessments of functional capacity, different procedures, and weaknesses in quality reviews contribute to inconsistent decisions; and (5) although SSA is on the verge of implementing initiatives to deal with these factors, GAO is concerned that other congressionally mandated workload pressures, such as significantly increasing the number of continuing disability reviews and readjudicating childhood cases, could jeopardize the agency's ability to move ahead with its initiatives to reduce inconsistent decisions. |
The Department of Education manages the federal investment in education and leads the nation’s long-term effort to improve education. Established as a separate department in 1980, Education’s mission is to ensure that the nation’s populace has equal access to education and to promote improvements in the quality and usefulness of education. For fiscal year 1995, Education was appropriated $32.4 billion and authorized 5,131 FTE positions to administer and carry out its 240 educational assistance programs, including aid to distressed schools through the Elementary and Secondary Education Act, support for technical training through the Carl D. Perkins Vocational and Applied Technology Education Act, support for special education programs for the disabled, and support for higher education through subsidized and unsubsidized loans and grant programs. Although Education only became a department in 1980, its field structure dates back to 1940, when the Office of Education had its own representatives in federal regional offices to assist in administering federal education laws. Historically, the major function of these offices has been to help local administrators understand federal education legislation and obtain available federal funds for education purposes. The Department of Labor’s mission is to foster, promote, and develop the welfare of U.S. wage earners; improve their working conditions; and advance their opportunities for profitable employment. In carrying out this mission, Labor—established as a department in 1913—administers and enforces a variety of federal labor laws guaranteeing workers’ rights to workplaces free from safety and health hazards, a minimum hourly wage and overtime pay, unemployment insurance, workers’ compensation, and freedom from employment discrimination. Labor also protects workers’ pension rights; provides for job training programs; helps workers find jobs; and tracks changes in employment, prices, and other national economic measurements. Although Labor seeks to assist all Americans who need and want to work, special efforts are made to meet the unique job market needs of older workers, economically disadvantaged and dislocated workers, youth, women, the disabled, and other groups. In fiscal year 1995, Labor had a budget of $33.8 billion and was authorized 17,632 FTE positions to administer and carry out its activities. In fiscal year 1995, the Department of Education had 72 field offices and the Department of Labor had 1,074. These field offices were located in 438 localities across the 50 states, the District of Columbia, and two territories (see fig. 1). Offices are concentrated in the 10 federal region cities, where 279 Education and Labor field offices employ a total of 5,987 staff (see table 1). About 245 localities had a single Education or Labor field office, and 148 localities had between two and five offices (see fig. 2). Six of Education’s 17 major components maintained field offices (see table 2). Each of the six Education components with field offices had an office in all 10 federal region cities. In total, 94 percent of Education’s field staff were located in these 10 cities. The concentration of Education’s field offices in the federal region cities reflects the role of Education’s field structure, which is principally to ensure the integrity of grant and loan programs and to ensure that federal programs are equitably accessible. 
For example, the Office of Postsecondary Education (OPE) formulates policy and oversees the student loan program and other sources of federal support for postsecondary students and schools. The OPE field offices carry out technical assistance, debt collection, and monitoring activities that affect students, institutions, contractors, lenders, and guaranty agencies. The mission of OCR is somewhat different in that its responsibility is to enforce civil rights laws in the nation’s schools; its regional offices carry out these functions. Two-thirds of the Department of Education’s staff was located in headquarters in fiscal year 1995. Of Education’s 5,131 authorized FTE positions, 4,835 were actually used, and 1,501, or about 31 percent of this amount, were used to support Education’s field operations. Staff usage for three components—OCR, OIG, and OPE—taken together represented 90 percent of Education’s field strength in fiscal year 1995. OCR and OIG used the preponderance of their staff resources in their field offices—about 80 percent for OCR and 68 percent for OIG (see fig. 3). OPE had about a third of Education’s total field staff positions. In fiscal year 1995, 1,074 field offices supported 17 of Labor’s 26 components (table 3). Of Labor’s total authorized staffing of 17,632 FTEs, about 63 percent (11,095) were allocated to field offices. Labor’s field offices were in a total of 437 localities across the nation. About 21 percent (229 offices) of Labor’s field offices and 42 percent of on-board field staff were located in the 10 federal region cities; together these offices were supported by 4,486 staff. Most of Labor’s components with field offices had more than half of their staff resources assigned to the field (see fig. 4). MSHA had the highest proportion of its staff positions in the field, 91 percent, reflecting its mission to inspect mines and protect the life and health of the nation’s miners. Similarly, the Occupational Safety and Health Administration (OSHA) had about 82 percent of its staff positions allocated to its field offices. ESA had 84 percent of its 3,677 staff resources allocated to its 396 field offices. The concentration of Labor’s staff in its field offices reflects these components’ primary missions. For example, ESA, MSHA, OSHA, and the Pension and Welfare Benefits Administration are all focused on ensuring workers’ rights to safe, healthful, and fair workplaces through their enforcement and inspection activities. The occupational series that predominated in both Departments varied by component and were related to the mission of the component. For example, half the field staff of Education’s Office of Special Education and Rehabilitative Services were rehabilitation services program specialists, about half the staff of OCR were equal opportunity specialists, and about 60 percent of OIG’s field staff were auditors (see table 4). Similarly, Labor’s field staff occupational series were related to a component’s primary functions. For example, in fiscal year 1995, ESA had three major subcomponents, each with a different mission; thus, a third of its staff were wage and hour compliance specialists, a quarter were workers’ compensation claims examiners, and about 20 percent were equal opportunity specialists (see table 5). Two-thirds of OSHA’s staff were safety/health specialists or industrial hygienists. 
Field office staff at both Departments were composed primarily of employees in General Schedule (GS) or General Management (GM) grades 11 through 13, representing about 60 percent of both Education and Labor field staff (see fig. 5). Seven percent of both Education and Labor field staff were senior managers (GS-14 and GS-15). Together, Education and Labor spent about 1.3 percent ($867 million) of their combined budget of approximately $66 billion in support of their field operations; more than three-quarters of this amount was for staff salaries and benefits. According to GSA, Education’s 72 field offices occupied about 495,000 square feet of space. Approximately 357,000 square feet (about 72 percent) of Education’s field office space was leased from private entities; the remaining 28 percent was federally owned. In fiscal year 1995, Education spent about $112 million on field office costs such as rent and utilities, staff salaries and benefits, and other administrative costs (see fig. 6). According to GSA, Labor occupied a total of 3 million square feet of space, 2.1 million square feet of which was leased. Labor spent a total of $755 million on its field operations, mostly for staff salaries. Both Education and Labor have eliminated or consolidated a few field offices within the last 5 years to improve service delivery or office operations. Within Education, such restructuring activities occurred in OIG and OCR, while at Labor, ESA, the Office of the American Workplace (OAW), and the Office of the Solicitor reported that they are reorganizing their field offices and functions, along with the Employment and Training Administration (ETA), MSHA, OIG, the Office of the Assistant Secretary for Administration and Management (OASAM), and the Veterans’ Employment and Training Service (VETS). In fiscal year 1995, Education’s OIG restructured its 10 regional and 11 field offices into four areas: the Northeast Area includes Boston, New York, Philadelphia, and the Division of Headquarters Operations; the Capital Area includes the Headquarters Audit Region and Accounting and Financial Management staff; the Central Southern Area includes Atlanta and Chicago; and the Western Area includes Dallas, Kansas City, Denver, San Francisco, and Seattle. The OIG reduced the amount of rented space in 10 locations to lower its leasing costs and eliminated the Baton Rouge field office and the Denver regional office as of June 30, 1996. Education’s OCR is in the process of reorganizing its headquarters division and 1 field and 10 regional offices into four mega-regions called enforcement divisions. These enforcement divisions will be (1) Enforcement Division A—New York, Philadelphia, and Boston; (2) Enforcement Division B—Atlanta, Dallas, and the new Washington, D.C./Metro office; (3) Enforcement Division C—Kansas City, Chicago, and Cleveland; and (4) Enforcement Division D—Seattle, San Francisco, and Denver. (For a more complete discussion of Education field office changes, see the component profiles in app. II.) In fiscal year 1995, Labor’s Office of the Solicitor examined its regional office structure in light of agencywide streamlining and reinvention initiatives. The analysis led to the decision to close the Solicitor’s branch office in Ft. Lauderdale, Florida. By fiscal year 1999, Labor plans to have completed the reorganization of ESA’s Wage and Hour Division and its Office of Federal Contract Compliance Programs (OFCCP) field operations. 
Wage and Hour’s eight regional offices will be reduced to five through the consolidation of its current (1) Philadelphia, New York, and Boston regional offices into a northeast regional office and (2) Chicago and Kansas City regional offices into a single office. Labor also plans to reduce the number of Wage and Hour district offices and increase its area offices. This will essentially involve redefining the duties of about 10 district offices to provide more frontline services and fewer management-related activities. Also, through employee attrition, management/supervisory staff buyouts, and selective staff hiring, Labor plans to reduce the number of its Wage and Hour staff and its management-to-staff ratios to increase the proportion of frontline employees to better serve its many customers. Four of OFCCP’s regional offices will be combined into two. Its current Chicago and Kansas City regional offices will be merged to form one new office, and its Dallas and Denver regional offices will be combined to form the other. Also, Labor plans to eliminate at least two OFCCP district offices. OAW is in the process of reorganizing to streamline field office management and operations. The target field structure would consist of 20 field offices and 13 resident investigator offices divided into five geographic regions. The reorganization is expected to eliminate two and, in some instances, three layers of program review, significantly expand supervisory span of control, and increase the number of resident investigator offices. ETA has begun to reassess its field structure and is considering realigning or consolidating certain programs, functions, services, and field offices. ETA is currently reevaluating its operations in the 10 federal region cities with a view to locating them in the same area or building where feasible. ETA has reduced its total staff by 20 percent, well above its streamlining goal of a 12 percent reduction in total staffing by fiscal year 1999. Four other Labor components—MSHA, OIG, OASAM, and VETS—have also been involved in restructuring efforts. In fiscal year 1995, MSHA eliminated several of its coal mine safety and health subdistrict offices as a way to eliminate a managerial layer. Plans to restructure the OIG’s entire field structure were in process in fiscal year 1995, resulting in the elimination of eight field offices in fiscal year 1996, a realignment of management functions, and fewer GS-15 positions. The OIG is currently evaluating its Washington, D.C., field offices. OASAM, while maintaining a physical presence in each of its regions, reduced its number of regional administrators from 10 to 6. VETS is awaiting congressional approval to reduce the number of field offices that support its operations. (For a more complete discussion of Labor field office changes, see the component profiles in app. III.) The Department of Education provided us with technical comments on a draft of this report, which we have incorporated as appropriate. Education’s letter is printed in appendix VI. The Department of Labor also provided us with comments on a draft of this report and made two specific comments. First, it questioned our definition of a field office and was concerned that using the same term to refer to all types of offices implied they were all of the same value, which would be misleading to the reader. The list of field offices we used in this report was provided to us by Labor. 
In addition, the definition of field office used in this report is consistent with the information contained in our June 1995 report, Federal Reorganization: Congressional Proposal to Merge Education, Labor, and EEOC (GAO/HEHS-95-140, June 7, 1995), upon which this report follows up. The definition we used separately counts offices that had different functions or were part of different components, even if they were at the same location. The information contained in appendix III of this report explains the roles, functions, and differences among the various types of field offices associated with each of Labor’s components. Second, Labor questioned the utility of using fiscal year 1995 data, noting that the Department was making changes in its field operations that the use of fiscal year 1995 information would not capture. We used fiscal year 1995 data because they were the most recent, comprehensive, and consistent data available on Education’s and Labor’s headquarters and field operations. The detailed discussion of Labor’s components, their staffing, costs, and field office functions contained in appendix III was designed to provide a current and up-to-date picture of the Department’s field operations. It also contains a separate discussion of field office and organizational changes that have occurred since September 30, 1995, and notes future changes that Labor told us were planned. Labor also provided us with technical comments, which we incorporated as appropriate. Labor’s comments are printed in appendix VII. We are sending copies of this report to the Secretaries of Education and Labor; the Director, Office of Management and Budget; and other interested parties. Please contact me on (202) 512-7014 or Sigurd R. Nilsen, Assistant Director, on (202) 512-7003 if you have any questions about this report. GAO contacts and staff acknowledgments are listed in appendix VIII. We designed our study to gather information on the Departments of Education and Labor field office structures. Specifically, we gathered data on the location, staffing, square footage, and operating cost for each Department in total and its field offices. For purposes of our review, we defined a field office as any type of office other than a headquarters office—for example, a regional office, district office, or area office—established by an Education or Labor component. To perform our work, we obtained and analyzed General Services Administration (GSA) facility data and the Departments’ staffing, cost, and location data. We did our work between January and July 1996 in accordance with generally accepted government auditing standards. Data were obtained from a variety of sources because no single source maintained all the information we sought. GSA provided data on the amount of space occupied, space usage, and rent and utilities costs for each of Labor’s components by city and state. GSA also provided total space and rent and utility cost information for Education, without component breakouts. Education provided information on the square footage occupied by its field offices and their rent and utility costs. Education also provided information on full-time equivalent (FTE) staff positions; on-board staff; personnel costs (salaries and benefits); other operating costs, such as travel and supplies; and office locations by field office. All information received from Labor was obtained through the Office of the Assistant Secretary for Administration and Management (OASAM). 
Labor provided data on FTEs by component. To calculate on-board staff counts, we obtained an extract of Labor’s personnel management information system showing personnel by component by city and state location. These data were augmented with information from Labor’s components. Additionally, Labor provided departmentwide and field information on personnel and other costs by component—but not by field office. To analyze field office space and rent and utility cost data, we obtained an extract of GSA’s Public Building Service Information Systems (PBS/IS) and National Electronic Accounting Report System (NEARS) databases covering all Labor and Education space rented or owned by GSA as of September 30, 1995. The PBS/IS database contained square footage allocations and information on space usage and the status and duration of the lease or rental. The NEARS database contained rent and utilities cost information. Both files were organized by GSA assignment number—that is, the unit used by GSA for billing the Departments. The file contained 1,056 unique assignment numbers for Labor and 62 for Education. These assignment numbers do not necessarily indicate different locations or individual field offices. The focus for this review was on field office rather than headquarters function and space. The GSA files used for our square footage, space usage, and rent and utility cost analyses did not contain information linking square footage with the organizational level—for example, area, district, regional, or headquarters—of the specific office. This created a special problem for identifying Washington, D.C., field offices. Thus, because we were unable to separate Washington, D.C., field offices from headquarters, for the purposes of identifying square footage and rent and utility costs, we treated all offices located in Washington, D.C., as headquarters. Eliminating the D.C. offices from this analysis resulted in the exclusion of 18 cases for Education and 17 for Labor, giving us 44 assignment numbers for Education and 1,039 for Labor in our analytic file. Because the level of detail of GSA’s information on Education’s space was not equivalent to that provided for Labor—that is, for Education we could not identify organizational level, or component, associated with square footage or cost, nor could we identify square footage by use category—we augmented the data for Education with information directly from the Department. In presenting detailed square footage estimates for Labor in appendix III, we used GSA’s four use categories—total square footage; office space; storage; and special square footage, which includes training areas, laboratories and clinics, automated data processing, and food service space. Discussions of square footage for Education in appendix II are in the three categories as forwarded to us by the Department—office, parking, and storage. Total agency square footage estimates presented in the body of the report for both Labor and Education—including rent and utilities costs—were provided to us by GSA. To determine the number of Education and Labor field offices and their locations, we used data prepared for us by the Departments. This information was in the form of listings, organized by component, linking organizational level—such as regional office or district office—with the relevant city and state where an office was located in fiscal year 1995. These listings identified 72 Education (as of April 20, 1995) and 1,037 Labor field offices (as of August 1, 1995). 
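The Washington, D.C., exclusion described above is, mechanically, a one-line filter over the assignment-number extract. The sketch below assumes a pandas DataFrame and invented file and column names; only the expected record counts in the comment come from the text.

```python
# Sketch of the D.C. exclusion described above; file and column names are assumed.
import pandas as pd

# Hypothetical extract combining the PBS/IS and NEARS data by GSA assignment number.
records = pd.read_csv("gsa_assignments.csv")   # columns: agency, city, state, ...

analytic = records[~((records["city"] == "Washington") &
                     (records["state"] == "DC"))]

# Expected counts after excluding D.C. assignment numbers:
#   Education: 62 - 18 = 44; Labor: 1,056 - 17 = 1,039
print(analytic.groupby("agency").size())
```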
Additional Labor field offices were identified in other documents provided by the Department. As a result, our field office database increased to 1,056 Labor field offices. We based our analyses on this count of Labor offices along with the 72 Education field offices. After Education and Labor reviewed a draft of this report, Labor revised its count of field offices, amending its previous list of field offices operational in fiscal year 1995 as provided to us on August 1, 1995. Our final field office database contained 1,074 Labor and 72 Education field offices. The Departments differed in their ability to provide FTE data. We obtained from Education the number of FTEs used—not authorized—by component and field office because Education does not allocate authorized FTEs below the component level. We obtained from Labor authorized and used FTEs by component, but not by field office, because Labor does not track either authorized or used FTEs at this level. Both Departments provided us with agencywide FTE data. For on-board staff, the Departments provided nonidentifying data on the grade, occupational series, and organizational and geographic location of each employee as of September 30, 1995. Our analysis of Labor field office on-board staff was based on information extracted from the Department’s personnel management information system, which indicated 10,632 on-board staff as of September 30, 1995. After reviewing a draft of this report, Labor revised its count of on-board staff to 10,654 on the basis of input by its components. Personnel cost data (salary and benefits), along with other cost information for items such as supplies, materials, and travel, were provided by the Departments in summary form by component at the national level. For both location and staffing information, we aggregated the data and prepared summary statistics by component, city, and state. Similarly, we developed summary statistics of city and state localities for field offices and field staff. Some individuals were employed at locations other than an official field office. Therefore, the total number of localities for field staff is greater than the number of localities for field offices. Unlike Education, Labor does not centrally maintain information on its components’ field office locations, staffing, and costs. Instead, each component maintains such information itself and provides OASAM with information as requested. Thus, much of the information we requested from Labor for the individual components had to be obtained from the components by OASAM. Although each component was asked to give the same information, there is no assurance that all the information provided used consistent definitions and collection methods. Thus, some variation in data quality and consistency is possible. We were unable to report data for those Labor field offices that were housed in state-owned buildings because our analysis of field office space and costs was limited to available GSA data. Additionally, because we could not directly identify square footage and rent and utility costs associated with field office functions located in headquarters space, we eliminated all Washington, D.C., locations from our field office analysis of space and rent and utility costs. As a result, the estimates of costs and space for field locations are understated by the amounts attributable to field offices within the District of Columbia. Actual total field office space and rent and utility costs, therefore, may be somewhat higher than reported here.
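As an illustration of the locality tallies described above, the following sketch (again with hypothetical file and column names) counts distinct city/state localities separately for field offices and for field staff; the staff count can exceed the office count because some employees worked at locations other than an official field office:

    import pandas as pd

    offices = pd.read_csv("field_offices.csv")  # component, office_type, city, state
    staff = pd.read_csv("onboard_staff.csv")    # component, grade, series, city, state

    def locality_count(df: pd.DataFrame) -> int:
        # A locality is a unique city/state pair.
        return len(df[["city", "state"]].drop_duplicates())

    print("field office localities:", locality_count(offices))
    print("field staff localities:", locality_count(staff))

    # Summary statistics by component, city, and state.
    office_counts = offices.groupby(["component", "city", "state"]).size()
    staff_counts = staff.groupby(["component", "city", "state"]).size()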
Additionally, square footage use categories reported for Labor were provided by GSA, while Education provided the information itself. Because these data were obtained from two different sources, the resultant calculations cannot be directly compared. We did not visit the field offices and could not evaluate the adequacy of the space reported, nor could we determine whether the number and skill levels of the staff were sufficient to perform field office activities. In addition, we did not verify any of the data provided on field office location or staffing by the Departments, nor did we independently verify the accuracy of the data provided by GSA. This appendix provides a snapshot of the Department of Education’s field offices as of September 30, 1995. Each profile shows the locations of and describes the mission and activities performed by the field offices supporting six Education components in fiscal year 1995. In addition, each profile provides the following information about the field offices: (1) staffing, (2) space occupied, (3) costs to operate, and (4) field office restructuring activities or plans. (See table II.1 for a summary of staffing, space, and cost data for all six components.) In these profiles, regional, area, district, state, and other types of offices are referred to generically as field offices. Unless otherwise noted, we used Education data to estimate the amount and cost of field office space by component because GSA does not provide square footage totals and rent/utility costs for units within Education. We also used Education data to identify the locations of official field offices; the FTE usage and on-board personnel strength of each component; salary, benefit, and other field office costs; and information about field office restructuring activities within the Department. (Table II.1 reports space in square feet and costs in dollars in millions. Notes to the table indicate that space for the Office of Management is provided by the Office of Intergovernmental and Interagency Affairs; that its space rental costs are included with rental costs for the Office of Intergovernmental and Interagency Affairs; and that its staff salaries and benefits and other costs are not available.) The primary mission of the Office for Civil Rights (OCR) is to enforce civil rights laws in America’s schools, colleges, and universities. OCR focuses on preventing discrimination before it occurs. Staff in OCR’s 11 field offices (see fig. II.1) investigate and resolve individual and class complaints of discrimination filed by members of the public and initiate compliance reviews of local and state educational agencies or higher education institutions. Field office staff provide targeted technical assistance in priority areas and respond to individual requests for information and assistance. According to OCR officials, field offices are maintained because compliance activities often require on-site investigations at educational agencies and institutions throughout the country. When conducting compliance activities, it is beneficial for OCR field staff to have the support of state and local educational institutions. Table II.2 provides key information about the 10 regional offices and 1 field office that compose OCR’s field office structure.
OCR has maintained a field office presence in all 10 federal region cities (Boston, New York, Philadelphia, Atlanta, Chicago, Dallas, Kansas City, Denver, San Francisco, and Seattle), in addition to an office in Cleveland, Ohio, since before the establishment of the Department of Education in 1980. OCR’s field offices in the regions are collocated with all other Education field offices in the regions. As of September 30, 1995, more than half of OCR’s field employees were equal opportunity specialists, attorneys, and investigators. Most of the remaining staff performed administrative and managerial duties, such as program manager, management assistant, and administrative officer (see fig. II.2). Two-thirds of the employees were in grades GS-11 through GS-13 (see fig. II.3). Ten of the 11 OCR field offices were regional offices. The Atlanta regional office (Region IV) had the most on-board staff (102), and the Cleveland field office in Region V had the fewest staff (27) (see table II.3). (Table II.3 lists regional offices in Boston, New York, Philadelphia, Atlanta, Chicago, Dallas, Kansas City, Denver, San Francisco, and Seattle and a field office in Cleveland.) OCR occupied about 154,848 square feet of Education’s total field office space. Of that space, OCR leased 99,806 square feet (64 percent) in privately owned buildings, and 55,042 square feet (36 percent) was in GSA-owned buildings. OCR used about 99 percent of this space for offices and the remainder for storage (see fig. II.4). OCR’s total field office costs were $43.7 million in fiscal year 1995. Field office costs included rent and utilities; staff salaries and benefits; and other costs, such as travel, equipment, supplies, and materials. Rent and utility costs were $3.2 million, staff salaries and benefits totaled $35.7 million, and other costs totaled $4.8 million. Currently, OCR is reorganizing its headquarters division and field offices into four mega-regions, called enforcement divisions, consisting of 12 sites. The enforcement divisions will be split into Enforcement Division A, which includes New York, Philadelphia, and Boston; Enforcement Division B, which includes Atlanta, Dallas, and the new Washington, D.C./Metro office; Enforcement Division C, which includes Kansas City, Chicago, and Cleveland; and Enforcement Division D, which includes Seattle, San Francisco, and Denver. The redesign of OCR’s field management structure is intended to increase efficiency in complaint resolution, provide for better resource coordination and allocation, and reassign a significant percentage of headquarters staff to case-specific duties. According to Education, the change will also reduce administrative layers and supervisory staff to address the goals of the Vice President’s National Performance Review. The primary mission of the Office of Inspector General (OIG) is to (1) increase the economy, efficiency, and effectiveness of Education programs and operations and (2) detect and prevent fraud, waste, and abuse in them. Staff in 21 field offices are responsible for auditing and investigating activities related to Education’s programs and operations in their respective geographic locations (see fig. II.5). Staff perform program audits to determine compliance with applicable laws and regulations, economy and efficiency of operations, and effectiveness in achieving program goals.
Auditors and investigators inspect entities about which there are indications of abuse significant enough to warrant a recommendation to curtail federal funding. Staff also investigate allegations of fraud by recipients of program funds and employee misconduct involving Education’s programs or operations. Because program effectiveness audits require on-site work to accurately assess program results, according to Education, field offices help to save travel dollars. A field presence also encourages the development of strong working relationships with state and local officials. The information gleaned from these officials increases the OIG’s effectiveness. Table II.4 provides key information about the 10 regional offices and 11 suboffices (known within Education as field offices) that compose OIG’s field office structure. OIG maintained a field office presence in many of its regions prior to the establishment of the Department of Education in 1980. In fiscal year 1995, OIG operated more field office locations than any other Education component. Only two (OIG and OCR) of Education’s six components maintained field offices other than regional offices. OIG staff were located in nine federal region cities (Boston, New York, Philadelphia, Atlanta, Chicago, Dallas, Kansas City, Denver, and San Francisco); at the Washington, D.C., headquarters office; and in 11 field locations (Seattle; Puerto Rico; Pittsburgh; the District of Columbia; Nashville; Plantation, Florida; St. Paul; Austin; Baton Rouge; Long Beach; and Sacramento). OIG field offices in the federal regions are located with all Education field offices. As of September 30, 1995, auditors and criminal investigators made up approximately 92 percent of OIG’s field office staff. The remaining staff performed managerial and administrative duties, such as management services specialist, investigative assistant, administrative officer, and clerk (see fig. II.6). Seventy-two percent of the employees were in grades ranging from GS-11 to –13 (see fig. II.7). The Chicago regional office had the most on-board staff (28), and two offices—Nashville and Seattle—had the fewest staff (4 persons each) (see table II.5). (Table II.5 lists regional offices in Boston, New York, Philadelphia, Atlanta, Chicago, Dallas, Kansas City, Denver, San Francisco, and Washington, D.C., and field offices in Puerto Rico; Pittsburgh; Washington, D.C.; Plantation, Fla.; Nashville; St. Paul; Austin; Baton Rouge; Long Beach; Sacramento; and Seattle.) OIG field offices occupied 74,594 square feet of Education’s total field office space. Of that space, OIG leased 45,050 square feet (60 percent) in privately owned buildings, and 29,544 square feet (40 percent) was in GSA-owned buildings. OIG used about 84 percent of this space for offices and the remainder for parking and storage (see fig. II.8). OIG’s total field office costs were $18.3 million in fiscal year 1995. Field office costs included rent and utilities; staff salaries and benefits; and other costs, such as travel, equipment, supplies, and materials. Rent and utility costs were $1.3 million, staff salaries and benefits totaled $14.2 million, and other costs totaled $2.8 million.
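The same three cost components recur in each component profile. As a minimal worked check (a Python sketch using the OIG figures above), each element’s share of total field office costs can be computed directly:

    # Worked check of the OIG cost composition reported above
    # (amounts in millions of dollars, taken from the text).
    rent_utilities = 1.3
    salaries_benefits = 14.2
    other = 2.8

    total = rent_utilities + salaries_benefits + other  # 18.3
    for label, amount in [("rent and utilities", rent_utilities),
                          ("salaries and benefits", salaries_benefits),
                          ("other costs", other)]:
        print(f"{label}: ${amount:.1f} million ({amount / total:.0%} of total)")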
As of July 1995, OIG restructured its 10 regional and 11 field offices into four areas: the Northeast Area (includes Boston, New York, Philadelphia, and the Division of Headquarters Operations); the Capital Area (includes the Headquarters Audit Region and Accounting and Financial Management staff); the Central Southern Area (includes Atlanta and Chicago); and the Western Area (includes Dallas, Kansas City, Denver, San Francisco, and Seattle). As of June 1996, OIG had completed the following cost-cutting initiatives: it reduced space in selected areas to minimize leasing costs, identifying four nonheadquarters sites (Austin, Nashville, Seattle, and St. Paul) for possible rent savings thus far, and it eliminated one field office (Baton Rouge) and one regional office (Denver) where the amount of work no longer justified an on-site presence. A number of auditor and investigative positions will be filled at other locations where the workload warrants additional staff. The primary mission of the Office of Intergovernmental and Interagency Affairs (OIIA) is to provide intergovernmental and public representation of the Secretary and the Department except in matters where Assistant Secretaries or their equivalents manage regional operations. OIIA is responsible for providing overall leadership in coordinating regional and field activities. OIIA has a Secretary’s regional representative in each of its 10 regional offices who serves as the Secretary’s field office representative. (See fig. II.9.) The primary mission of the Office of Management (OM) is to provide the administrative services required to assist field office staff. According to Education, regional staff (1) administer the Federal Real Property Assistance Program to ensure maximum utilization of surplus federal property for educational purposes and (2) provide personnel services to regional employees in other program offices. Table II.6 provides key information about the 10 regional offices that compose OIIA and OM’s field office structure. Education did not provide separate cost information for OM. Education does not maintain information on headquarters office rent by component. Rent for OM field office staff is included with OIIA rental costs. OIIA and OM had staff in the 10 federal region cities (Boston, New York, Philadelphia, Atlanta, Chicago, Dallas, Kansas City, Denver, San Francisco, and Seattle). In fiscal year 1995, the total on-board staff in OIIA’s and OM’s 10 field offices was 69 (47 for OIIA and 22 for OM). As of September 30, 1995, OIIA and OM staff performed duties in 10 job categories. OIIA had staff in six of those categories and OM had staff in five. Staff in clerical job categories supported both OIIA and OM. Three-fourths of regional OIIA staff were classified as Secretary’s regional representative, program assistant/clerk, or public affairs specialist. Approximately 73 percent of OM staff performed duties as personnel management specialists. The remaining staff performed other managerial and administrative duties, such as personnel assistant, secretary, clerk, realty specialist (OM), education program specialist (OIIA), and administrative officer (OIIA) (see figs. II.10 and II.11). OM had no staff at the GS-15 level; however, 21 percent of OIIA staff were GS-15s—representing the largest percentage of staff at any one grade level in the component. These GS-15s generally served as Secretary’s regional representatives. OIIA staff were almost evenly distributed among grades GS-1 through –13.
Most OM staff were in grades GS-11 through –13 (see figs. II.12 and II.13). All 20 of the OIIA and OM field offices were regional offices (see table II.7). In fiscal year 1995, OIIA occupied 46,315 square feet of Education’s total field office space. Of that space, OIIA leased 28,561 square feet (62 percent) in privately owned buildings, and 17,754 square feet (38 percent) was in GSA-owned buildings. OIIA used 99 percent of this space for offices and the remainder for storage (see fig. II.14). OIIA’s total field office costs were $4.6 million in fiscal year 1995. Field office costs included rent and utilities; staff salaries and benefits; and other costs, such as travel, equipment, supplies, and materials. Rent and utility costs were $948,000, staff salaries and benefits totaled $2.8 million, and other costs totaled $915,000. OM cost information for field office staff salaries and benefits and other costs was unavailable. None. The primary mission of the Office of Postsecondary Education (OPE) is to administer postsecondary education and student financial assistance programs. Programs of student financial assistance include Pell grants, supplemental educational opportunity grants, grants to states for state student incentives, direct loans to students in institutions of higher education, work-study, and the guaranteed student loan program. OPE programs also provide assistance for increasing access to postsecondary education programs, improving and expanding American educational studies and services, improving instruction in crucial academic subjects, and supporting international education. OPE maintains 10 field offices to perform activities associated with (1) training, technical assistance, and oversight of student aid programs, (2) loan servicing and debt collection, and (3) oversight of specific higher education projects (see fig. II.15). Field staff conduct program reviews of institutions to determine compliance with Title IV requirements, provide training and technical assistance for financial aid and business officers at institutions, and monitor operations at guaranty agencies. Staff also collect defaulted loans and other debts, contract with servicers, monitor collection contracts, and help in the preparation of legal actions. Regional staff also serve as focal points and as experts assisting with field readings for OPE’s higher education programs. Staff may also be called on to work on school-to-work initiatives. According to Education, field office staff gain in-depth knowledge of the institutions in their regions, which increases effectiveness. Regional training facilities provide hands-on use of the computer programs needed to administer student aid and determine student eligibility. They are also a place that institutions, lenders, and guaranty agencies can call upon for technical assistance and specific help on an individual basis. In addition, several oversight activities are supported by information gathered from on-site reviews. Table II.8 provides key information about the 10 regional offices that constitute OPE’s field office structure. In fiscal year 1995, OPE’s Field Operations Service and Division of Project Services had staff in all 10 federal region cities, and Debt Collection Service had staff in three region cities—Atlanta, Chicago, and San Francisco.
Half of all OPE employees were specialists in one of the following job categories: lender review specialist, institutional review specialist, contract monitor specialist, training specialist, paralegal specialist, education program specialist, computer specialist, or accounts resolution specialist/clerk. The remaining staff included management analysts, student financial accounts examiners, program managers, data transcribers, administrative officers, and clerks (see fig. II.16). About half of the employees were in grades ranging between GS-11 and –13. Most of the remaining employees were in grades ranging from GS-7 through –10 (see fig. II.17). The Chicago regional office had the most on-board staff, and Boston had the fewest staff (see table II.9). In fiscal year 1995, OPE occupied about 125,456 square feet of Education’s total field office space. Of that space, OPE leased 82,587 square feet (66 percent) in privately owned buildings and 42,869 square feet (34 percent) in GSA-owned buildings. OPE used about 99 percent of this space for offices and the remainder for parking (see fig. II.18). OPE’s total field office costs were $38.5 million in fiscal year 1995. Field office costs included rent and utilities; staff salaries and benefits; and other costs, such as travel, equipment, supplies, and materials. Rent and utility costs were $2.5 million, staff salaries and benefits totaled $28.4 million, and other costs totaled $7.6 million. None. The Office of Special Education and Rehabilitative Services (OSERS) administers comprehensive coordinated programs of vocational rehabilitation and independent living for individuals with disabilities. OSERS programs include support for the training of teachers and other professional personnel; grants for research; financial aid to help states to initiate, expand, and improve their resources; and media services and captioned films for people who are hearing-impaired. The Rehabilitation Services Administration (RSA) is the only OSERS unit with field offices. RSA coordinates vocational rehabilitation services programs that help individuals with physical or mental disabilities to obtain employment through the provision of such supports as counseling, medical and psychological services, job training, and other individualized services. In addition, RSA coordinates and funds a wide range of formula and discretionary programs in areas such as training of rehabilitation personnel, rehabilitation research and demonstration projects, Independent Living, Supported Employment, and others. The 10 OSERS field offices (see fig. II.19) that support RSA activities provide leadership, technical assistance, monitoring, consultation, and evaluation services and coordinate RSA and other resources used in providing services to disabled individuals through state-federal administered programs and through grantees receiving discretionary project funds. These offices are also responsible for helping colleges, universities, and other organizations and agencies to develop, implement, improve, and expand training programs designed to prepare a wide variety of rehabilitation workers who provide services to disabled individuals. According to Education officials, an OSERS regional presence encourages interactions with states and providers of services and provides unique insights into the issues involved in the rehabilitation of people with disabilities.
It enables federal-state interactions closer to the point of service delivery, where the unique circumstances and considerations of each state and grantee are best understood. Regional office staff have more frequent and extended contacts with state agency staff and other grantees, resulting in long-term, customer-oriented relationships and trust. Table II.10 provides key information about the 10 regional offices that make up OSERS’ field office structure. OSERS had staff in all 10 federal region cities (Boston, New York, Philadelphia, Atlanta, Chicago, Dallas, Kansas City, Denver, San Francisco, and Seattle). OSERS’ field offices in the regions are located with all other Education regional offices. As of September 30, 1995, almost half of all OSERS on-board staff were classified as rehabilitation services program specialists. Almost one-third were employed as financial management specialists and grant management specialists. The remaining staff were classified as clerks, staff assistants, and secretaries (see fig. II.20). Most employees were in grades ranging from GS-11 through –13 (see fig. II.21). All 10 of the RSA field offices were regional offices. The Seattle regional office had the fewest on-board staff (4), and the remaining offices had between 5 and 10 employees (see table II.11). On September 30, 1995, OSERS occupied 28,632 square feet of Education’s total field office space. OSERS leased 17,735 square feet (62 percent) in privately owned buildings and 10,897 square feet (38 percent) in GSA-owned buildings. OSERS used 97 percent of this space for offices and the remainder for storage and parking (see fig. II.22). OSERS’ total field office costs were $6.4 million in fiscal year 1995. Field office costs included rent and utilities; staff salaries and benefits; and other costs, such as travel, equipment, supplies, and materials. Rent and utility costs were $553,000, salaries and benefits were $4.8 million, and other costs were $1.1 million. None. This appendix provides a snapshot of the Department of Labor’s field offices as of September 30, 1995. Each profile shows the locations of and describes the mission and activities performed by the field offices supporting 10 Labor components in fiscal year 1995. In addition, each profile provides the following information about the field offices: (1) staffing, (2) space occupied, (3) costs to operate, and (4) field office restructuring activities or plans. (See table III.1 for a summary of staffing, space (square feet), and cost (dollars in millions) data for all 10 components.) In these profiles, regional, area, district, state, and other types of offices are referred to generically as field offices. Because neither GSA nor Labor maintains information about field offices located in state-owned buildings, we were unable to identify the exact amount and cost of all space that Labor field staff occupied in fiscal year 1995. (Labor is not billed for the use of space in state-owned buildings.) Unless otherwise noted, we used (1) GSA data to estimate the amount and cost of Labor field office space and (2) Labor information to identify the locations of official field offices; the numbers of FTEs and on-board personnel for each component; and salary, benefit, and other field office costs. Labor also provided information about field office restructuring activities. Many small organizations within the Department are consolidated for administrative purposes in a Departmental Management (DM) account.
This account consolidates a wide range of agencywide managerial, administrative, technical, and support activities carried out by approximately 20 different units. Our discussion of Labor’s DM function includes only the following units that were supported by field offices in fiscal year 1995: (1) the Office of the Assistant Secretary for Administration and Management (OASAM), (2) the Office of the Solicitor (SOL), (3) the Office of Administrative Law Judges (ALJ), (4) the Office of Public Affairs (OPA), (5) the Office of Congressional and Intergovernmental Affairs (OCIA), and (6) the Women’s Bureau (WB). Figure III.1 shows the locations of the 62 field offices that supported Labor’s DM function in fiscal year 1995. Table III.2 provides key information about DM’s 47 regional, 8 field, and 7 branch offices. As shown in table III.3, field offices in the 10 federal region cities and 11 other localities supported DM in fiscal year 1995; the other localities included Camden, N.J.; Newport News, Va.; Metairie, La.; Long Beach, Calif.; Nashville, Tenn.; Birmingham, Ala.; and Arlington, Va. The field offices that support the DM function generally perform the following activities: Office of the Assistant Secretary for Administration and Management. OASAM staff are responsible for providing a centralized source of administrative, technical, and managerial support services. Each of OASAM’s 10 regional offices—located in the federal region cities—provides a full range of services to all Labor components in their field offices in the following areas: financial management, including payroll, travel, accounting, and voucher payment services; personnel classification, recruitment, training, and position management services; general administrative support, including procurement, property and space management, communications, and mail services; automatic data processing management, including programming support; and safety and health services, including safety inspections of regional Job Corps Centers and support for wellness and fitness programs for Labor field office employees. In addition, staff in OASAM’s regional offices helped to manage and direct affirmative action and equal employment opportunity programs within Labor, ensuring full compliance with title VII of the Civil Rights Act of 1964; title IX of the Education Amendments of 1972, as amended; title I of the Civil Rights Act of 1991; Section 504 of the Rehabilitation Act of 1973, as amended; and the Age Discrimination Act of 1975, as amended. These staff also investigated certain complaints alleging discrimination on the basis of disability arising under the Americans With Disabilities Act. According to Labor, OASAM’s field presence in all of these areas allows the personal contact with program managers and employees that enhances the Department’s ability to provide effective and efficient support services. OASAM’s staff work in localities with the greatest concentrations of Labor managers and employees. Office of the Solicitor. SOL is responsible for providing the Secretary of Labor and other Department officials with the legal services required to accomplish Labor’s mission and the priority goals set by the Secretary. SOL devotes over two-thirds of its resources to Labor’s major enforcement programs (for example, OSHA and MSHA). Its eight regional offices and seven branch offices provide legal services and guidance to each Labor component’s regional administrators.
Within a specific geographic area, each regional or branch office primarily performs trial litigation support for Labor’s enforcement programs and provides legal support and services to selected Labor components that perform work in the area. Office of Administrative Law Judges. Judges at the eight field offices primarily preside over cases related to Labor’s Black Lung and Longshore programs. These programs provide income support for workers disabled in coal mining and longshore operations. Federal regulations require that hearings be held within 75 miles of a Black Lung claimant’s residence. Labor also applies this standard to Longshore cases. Approximately 60 percent of all Black Lung cases each year are handled by the three ALJ field offices in Camden, New Jersey; Cincinnati, Ohio; and Pittsburgh, Pennsylvania. Four other field offices handle 75 percent of Labor’s Longshore cases annually. According to Labor, ALJ’s field presence allows the judges to establish better working relationships with local attorneys. As a result, compliance with Labor laws is achieved more readily because the local bar is more familiar with case law in specific localities. Office of Public Affairs. Staff at OPA’s 10 regional offices, located in the federal region cities, provide, for example, (1) media relations services, such as issuing press releases and arranging media coverage of Labor programs and law enforcement actions; (2) public information services designed to educate and inform workers, employers, and the general public about their rights and responsibilities under the laws and programs administered by Labor; and (3) publicity services that advertise public meetings, conferences, and special projects sponsored by Labor’s components. According to Labor, OPA’s field offices allow staff to identify local news media and reporters that have an interest in particular Labor programs or events. Field staff are then able to alert reporters to news releases and respond to questions in a timely manner. Office of Congressional and Intergovernmental Affairs. OCIA’s function is generally performed by one person—the Secretary’s representative. These representatives (1) serve as the ongoing liaison in the region with governors, mayors, state officials, congressional offices, organized labor, and the business community; (2) represent Labor at educational forums, meetings, and regional conferences; (3) educate public officials and constituents about the policies, programs, and initiatives of the Secretary of Labor and the agency; (4) provide regional perspective and feedback to headquarters on policies and programs; and (5) carry out special projects in the regions for the Secretary. Women’s Bureau. WB’s 10 regional offices play a key role in administering programs under two federal laws: the Nontraditional Employment for Women Act (P.L. 102-235) and the Women in Apprenticeship and Nontraditional Occupations Act (P.L. 102-530). In addition, regional office staff (1) make presentations to the public and the media on a variety of issues, such as women’s job rights, labor force participation, job training activities, and work place safety and health issues; (2) work with federal, state, and local government officials on behalf of working women; (3) provide technical assistance and education services to women in the workforce; and (4) organize public meetings on working women’s issues. DM staff represented over 40 different professional and administrative job categories.
Attorneys and judges made up approximately 30 percent of the DM field office workforce (see fig. III.2). The remaining staff were paralegal specialists, personnel management specialists, personnel classification clerks, fiscal clerks, and accountants. Approximately 34 percent of DM field office staff were at grades GS-11, –12, and –13. Staff at the GS-5 and –7 grade levels constituted 22 percent of its field office workforce (see fig. III.3). In fiscal year 1995, DM field offices occupied space in 59 buildings throughout the United States, totaling 482,648 square feet. According to GSA data, 207,813 square feet of this space was owned by GSA and 274,835 square feet was leased from privately owned sources. Most of the space used by the DM function was for offices, and the remainder was for storage and other uses, such as training, conferences, and data processing (see fig. III.4). DM field costs totaled $47.2 million in fiscal year 1995. These costs included rent and utilities; staff salaries and benefits; and other costs, such as equipment, supplies, and materials. Rent and utility costs were $8.7 million, which was 18 percent of the function’s total field office costs. Costs for staff salaries and benefits totaled $32.9 million and other costs totaled $5.6 million, which were about 70 and 12 percent, respectively, of the total field office costs for this function. In fiscal year 1995, SOL examined its regional office structure in light of agencywide streamlining and reinvention initiatives. This analysis led to the decision to close the SOL branch office in Ft. Lauderdale, Florida. Effective in fiscal year 1996, while maintaining a physical presence in each of its regions, OASAM will have reduced its number of regional administrators from 10 to 6. The primary mission of the Bureau of Labor Statistics (BLS) is to collect, process, analyze, and disseminate data relating to employment, unemployment, and other characteristics of the labor force; prices and consumer expenditures; wages, other worker compensation, and industrial relations; productivity and technological change; economic growth and employment projections; and occupational safety and health. These basic data—practically all supplied voluntarily by business establishments and members of private households—are issued in monthly, quarterly, and annual news releases; bulletins, reports, and special publications; and periodicals. Statistical data are also made available to the general public through electronic news services, magnetic tape, diskettes, and microfiche, as well as through the Internet. BLS conducts many of its mission-related activities through its eight field offices (see fig. III.5). According to Labor, BLS’ field structure maximizes the effectiveness of BLS’ data collection activities, saves travel expenditures, and accommodates workload requirements. Table III.4 provides key information about BLS’ eight regional offices. In fiscal year 1995, BLS maintained regional offices in the following cities: Boston, New York, Philadelphia, Atlanta, Chicago, Dallas, Kansas City, and San Francisco. BLS regional offices (1) issue reports and releases usually presenting locality or regional issues and (2) assist business, labor, academic, and community groups in using the economic statistical data BLS produces.
Regional office staff also supervise the work of part-time field staff who (1) collect data for the Consumer Price Index and occupational compensation surveys and (2) survey firms for the Producer Price and Export and Import Price programs. These “outstationed” staff performed their BLS duties in over 70 locations throughout the United States. BLS employed only about 9 percent of all Labor on-board field office staff in fiscal year 1995 but had the largest proportion of part-time staff among Labor components with field offices—34 percent of BLS staff worked part time. Part-time staff in the other components represented less than 10 percent of these components’ on-board staffs. BLS staff represented over 15 different professional and administrative job categories. Economists and economic assistants made up approximately 80 percent of BLS’ field office workforce (see fig. III.6). The remaining staff included statisticians, computer specialists, public affairs assistants, and clerical support staff. Approximately 46 percent of BLS’ field office staff were GS-11s, –12s, and –13s. Staff at the GS-5 and –6 pay levels made up about 23 percent of BLS’ field office workforce (see fig. III.7). From one to five BLS staff persons worked in 84 percent of the U.S. localities with BLS staff. Nine localities had over 30 BLS employees. Generally, economic assistants in grades GS-5 through –7 provided the BLS presence in those localities with only one staff person. In several cases, a GS-11 or –12 economist represented BLS in the locality. In fiscal year 1995, BLS field offices occupied space in 84 buildings throughout the United States, totaling 219,324 square feet. Over 83,600 square feet was owned by GSA and 135,659 square feet was leased from private sources. (We were unable to determine how much space, if any, BLS occupied in state-owned buildings.) BLS used 195,663 square feet—or about 89 percent—of this space for offices and the remainder for storage and other uses (see fig. III.8). At 50 of the 84 buildings BLS occupied in fiscal year 1995, other Labor components were also located at the same address. Field costs for BLS totaled $51.1 million in fiscal year 1995. These costs included rent and utilities; staff salaries and benefits; and other costs, such as equipment, supplies, and materials. Rent and utility costs were $4.8 million, which was 9 percent of BLS’ total field office costs. Costs for staff salaries and benefits totaled $36.5 million and other costs totaled $7.9 million, which were about 71 and 15 percent, respectively, of BLS’ total field office costs. None. The Employment Standards Administration (ESA) is responsible for administering and directing programs dealing with minimum wage and overtime standards; registration of farm labor contractors; determination of prevailing wage rates to be paid on federal government contracts and subcontracts; family and medical leave; nondiscrimination and affirmative action for minorities, women, veterans, and government contract and subcontract workers with disabilities; and workers’ compensation programs for federal and certain private sector employers and employees. The field structure for ESA—a total of 396 field offices—supports three program areas—the Wage and Hour Division, the Office of Federal Contract Compliance Programs, and the Office of Workers’ Compensation Programs (see fig. III.9). The largest division within ESA is the Wage and Hour Division (WHD), with its 8 regional offices, 54 district offices, 45 area offices, and 192 field offices.
According to Labor, to enforce federal standards for working conditions and wages, WHD focuses its investigative efforts mainly on industries that employ large numbers of workers in low-wage jobs because this is where wage, overtime, and child labor violations often occur. WHD field staff respond to complaints alleging violations and target their enforcement efforts at employers with a high likelihood of repeated and egregious violations. WHD field staff also detect and remedy violations of overtime, child labor, and other labor standards. With over 280 offices nationwide, WHD supports its mission by providing a local presence in most of the metropolitan areas of the country. According to Labor, WHD’s streamlining plan will make its mission more challenging because having fewer offices will increase travel costs and possibly impede access to some geographic areas. The Office of Federal Contract Compliance Programs (OFCCP), with its 10 regional offices, 45 district offices, and 10 area offices, conducts compliance reviews of supply, service, and construction companies with federal contracts and federally assisted programs for construction, alteration, and repair of public works. OFCCP ensures that prevailing wages are paid and overtime standards are met in accordance with the provisions of the Davis-Bacon Act (40 U.S.C. 276a) as well as the Service Contract Act (41 U.S.C. 351), Public Contracts Act, and Contract Work Hours and Safety Standards Act. According to Labor, OFCCP’s field structure provides a local contact for representatives of federal contractors to obtain information and technical assistance when establishing their affirmative action programs. It also provides local contacts and local offices that help provide women and minorities with more employment opportunities as well as a place to file complaints against federal contractors. Labor maintains that these local offices decrease travel costs because OFCCP staff make less frequent overnight trips. The Office of Workers’ Compensation Programs (OWCP) is supported by 10 regional offices, 34 district offices, and 7 field offices that are staffed on a part-time basis. OWCP’s primary responsibilities are to administer compensation programs that pay federal employees, miners, longshore workers, and other workers for work-related injuries, disease, or death. These compensation programs are authorized by the Federal Employees’ Compensation Act; the Longshore and Harbor Workers’ Compensation Act and its various extensions; and the Black Lung Benefits Act. OWCP also administers the Black Lung Disability Trust Fund and provides budget, automated data processing, and program technical support for the compensation programs. OWCP’s field structure, according to Labor, gives claimants and employers easier access to assistance when processing claims and provides faster and more efficient service. Field offices need to be located near the homes and work places of the parties involved in claims to ensure timely resolution of claims and to minimize staff travel costs. Table III.5 provides key information about the 28 regional offices, 133 district offices, 199 field offices, and 55 area offices that make up ESA’s field office structure. ESA’s various field offices generally perform the following functions: Regional offices. WHD, OFCCP, and OWCP regional offices generally provide the executive direction and administrative support for all other respective field offices operating in a particular region. District offices.
A WHD district office provides the day-to-day management and supervision of selected area and field offices. WHD district office staff provide education outreach and investigate alleged violations of the Fair Labor Standards Act (29 U.S.C. 201) and other labor standards laws. OFCCP district offices supervise and manage selected area offices. Within OWCP, district office staff process Longshore and Harbor Workers’, Coal Mine Workers’, or Federal Employees’ Compensation Act claims. OWCP district offices work with all parties involved in a claim to secure the information needed to disallow or accept the claim. OWCP district offices serve as information repositories for employers and employees about the various disability compensation programs that Labor administers. Area offices. WHD area office staff investigate alleged violations of the Fair Labor Standards Act and other labor standards laws. Labor considers WHD area office staff “frontline” employees because they inspect work sites and interview employers and employees as part of their investigatory and enforcement activities. WHD area offices also make available to employers and workers information about the Fair Labor Standards Act, other laws, and their rights and responsibilities under the law. Staff at OFCCP area offices investigate allegations of unfair bidding and hiring practices involving minority construction contractors and suppliers. OFCCP area offices also work with employers to ensure compliance with applicable federal contract laws and procedures. Field offices. WHD field offices are usually staffed by one or two compliance specialists, who are also considered frontline workers by Labor. They perform the same investigatory and enforcement activities as the WHD area offices but in many more locations. OWCP’s field offices are maintained on a part-time basis by the Black Lung program and provide a local point of contact for claimants and other interested parties. ESA employed about 28 percent of all Labor on-board field office staff in fiscal year 1995. ESA staff represented over 30 different professional and administrative job categories. Wage/hour compliance specialists, workers’ compensation claims examiners, and equal opportunity specialists made up the largest proportion of ESA’s field office workforce (see fig. III.10). The remaining staff included wage analysts, management and program analysts, and clerical and other support staff. Less than 2 percent of ESA’s staff worked part time. Approximately 64 percent of ESA’s field office staff were at the GS-11, –12, and –13 grade levels. Staff at the GS-5 and –6 pay levels constituted about 12 percent of ESA’s field office workforce (see fig. III.11). From one to five ESA staff worked in almost 70 percent of the 280 U.S. localities with ESA staff (see table III.6). GS-11 and –12 wage/hour compliance specialists primarily represented ESA in those localities with only one ESA staff person. Seventeen localities had over 30 ESA employees; these localities generally were associated with an ESA regional office. In fiscal year 1995, ESA field offices occupied space in 335 buildings throughout the United States, totaling 769,237 square feet. About 272,200 square feet was owned by GSA and about 497,000 square feet was leased from privately owned sources. ESA used about 671,000 square feet of this space for offices and the remainder for storage and other activities (see fig. III.12).
At 138 of the 335 buildings ESA occupied in fiscal year 1995, other Labor components were also located at the same address. Field costs for ESA totaled $179.2 million in fiscal year 1995. These costs included rent and utilities; staff salaries and benefits; and other costs, such as equipment, supplies, and materials. Rent and utility costs were $14.9 million, which was about 8 percent of ESA’s total field office costs. Costs for staff salaries and benefits totaled $156 million and other costs totaled $8.3 million, which were about 87 and 5 percent, respectively, of ESA’s total field office costs. By fiscal year 1999, Labor plans to have completed the reorganization of ESA’s WHD and OFCCP field operations. WHD’s eight regional offices will be reduced to five through the consolidation of its current (1) Philadelphia, New York, and Boston regional offices into a northeast regional office and (2) Chicago and Kansas City regional offices into a single office. Labor also plans to reduce the number of WHD district offices and increase its area offices. This will essentially involve redefining the duties of about 10 district offices to provide more frontline services and fewer management-related activities. Also, through employee attrition, management/supervisory staff buyouts, and the conversion of supervisory positions to senior technical positions, Labor plans to reduce its WHD staffing and management-to-staff ratios, increasing the proportion of frontline WHD employees to better serve its many customers. Four of OFCCP’s regional offices will be combined into two. Its current Chicago and Kansas City regional offices will be merged to form one new office, and its Dallas and Denver regional offices will be combined to form the other. Also, Labor plans to eliminate at least two OFCCP district offices. OFCCP will continue to review additional district offices to determine whether more can be converted into area offices by fiscal year 1999. The Employment and Training Administration (ETA) fulfills responsibilities assigned to Labor that relate to employment services, job training, and unemployment insurance. ETA administers, among other programs, the following: the Federal Unemployment Insurance System; the U.S. Employment Service; federal activities under the National Apprenticeship Act; Adult and Youth Training Programs (title II of the Job Training Partnership Act); the dislocated worker program under the Economic Dislocation and Worker Adjustment Assistance Act (title III of the Job Training Partnership Act); Job Corps (title IV of the Job Training Partnership Act); federal activities under the Worker Adjustment and Retraining Notification Act; the Trade Adjustment Assistance Program; and the Senior Community Service Employment Program (title V of the Older Americans Act). ETA’s 146 field offices (see fig. III.13) help to administer the nation’s federal-state employment security system; fund and oversee programs to provide job training for groups having difficulty entering or returning to the workforce; formulate and promote apprenticeship training standards and programs; promote school-to-work initiatives, one-stop career centers, and labor market information; and conduct continuing programs of research, development, and evaluation. According to Labor, several reasons exist for ETA’s field structure. To reduce overhead and administrative costs, many of ETA’s regional and field offices are located in the same area.
Their locations facilitate direct and more frequent contact on site with states and local entities and the provision of timely information and feedback. Field office staff can provide on-site technical assistance, which would be more costly, infrequent, and less efficient if staff were more centralized. The close proximity of ETA staff to its state and local grantees and contractors is essential to the agency’s ability to oversee and maximize program integrity while minimizing travel costs. Table III.7 provides key information about the 10 regional, 50 state, 8 area, and 78 local offices that constituted ETA’s field office structure. ETA’s various field offices generally support its major program activities—training and employment services, Job Corps, unemployment insurance, and apprenticeship training through the Bureau of Apprenticeship and Training (BAT). The regional offices perform activities related to the Job Training Partnership Act and several other programs. The balance of ETA’s field offices—state, area, and local offices—are part of the BAT program. BAT is unique within ETA in that it provides consultant services to employers, employer groups, unions, employees, and related business and trade associations, using private-sector resources to improve the skills of the workforce. The staff develop voluntary standards and agreements between the parties and work to ensure that the standards for work, training, and pay are met for both apprentices and their sponsors. ETA’s field offices perform the following functions: Regional offices. Regional office staff ensure the efficient administration of the training and employment services operated by state grantees under the Job Training Partnership Act, Wagner-Peyser Act, Trade Act, and North American Free Trade Agreement; support state and local one-stop career center and school-to-work system building efforts; and provide consultation and guidance to state grantees for the planning and operation of state and federal unemployment insurance and related wage-loss compensation programs. The BAT regional offices are responsible for directing, planning, and administering effective BAT programs and for ensuring that ETA’s school-to-work initiatives are incorporated in training programs when feasible. Job Corps regional offices ensure that centers are safe learning and living environments for students, implement program policies, and coordinate with schools and training programs to support Job Corps programs. State offices. State office staff develop, coordinate, promote, and implement apprenticeship and allied employment and training programs in industry on a statewide basis. They also provide technical assistance to industry, management, labor, education, and other groups concerned with economic development within a state. Area and local offices. Staff in these offices perform the same basic functions done by state office staff, except on a less-than-statewide basis. ETA staff represented 24 different professional and administrative job categories. Most of ETA’s field office workforce was composed of manpower development specialists, apprenticeship training representatives, unemployment insurance program specialists, and secretaries (see fig. III.14). The remaining staff included job categories such as alien certification clerk, apprenticeship training assistant, computer specialist, executive assistant, and program analyst. Approximately 62 percent of ETA’s field office staff were GS-11s, –12s, and –13s.
Staff at the GS-5 and –6 pay levels constituted about 15 percent of ETA’s field office workforce (see fig. III.15). From one to five ETA staff persons worked in 87 of the 98 localities with ETA staff (see table III.8). Ten localities—representing the locations of ETA’s regional offices—had over 30 ETA employees. Generally, apprenticeship training representatives in grades GS-11, –12, and –13 provided the ETA presence in those localities with only one staff person. In fiscal year 1995, ETA field offices occupied space in 127 buildings throughout the United States, totaling 226,649 square feet. About 81,600 square feet was owned by GSA and 145,046 square feet was leased from privately owned sources. ETA used about 93 percent of this space for offices and the remainder for storage and other activities (see fig. III.16). At 98 of the 127 buildings ETA occupied in fiscal year 1995, other Labor components were also located at the same address. ETA’s field office costs totaled $66.4 million in fiscal year 1995. These costs included rent and utilities; staff salaries and benefits; and other costs, such as equipment, supplies, and materials. ETA paid more for these costs than five of the other nine Labor components. Rent and utility costs were about $5 million, which was about 7 percent of ETA’s total field office costs. Costs for staff salaries and benefits totaled $51.4 million and other costs totaled $10.1 million, which were about 77 and 15 percent, respectively, of ETA’s total field office costs. ETA has begun to reassess its field structure and is considering realigning and/or consolidating certain programs, functions, services, and field offices. ETA is currently reevaluating its operations in the 10 federal region cities with a view to locating them in the same area or building where feasible. ETA has reduced its total staff by 20 percent, well above its streamlining goal of a 12-percent reduction in total staffing by fiscal year 1999. The primary mission of the Mine Safety and Health Administration (MSHA) is to protect the safety and health of the nation’s miners, who work in coal, metal, and nonmetal mines. MSHA’s 155 field offices (see fig. III.17) develop and enforce mandatory safety and health standards, ensure compliance with the standards, conduct inspections, assess civil penalties for violations, and investigate accidents. In addition, MSHA field offices provide assistance in the development of safety programs and improve and expand training programs in cooperation with the states and the mining industry. In conjunction with the Department of the Interior, MSHA contributes to the expansion and improvement of mine safety and health research and development. MSHA primarily performs its enforcement and assessment functions through a complement of offices known within the component as district, subdistrict, and field offices, not regional offices. According to MSHA, the mine community as well as Labor benefits from these offices. The geographical distribution of MSHA’s field offices facilitates the efficient and effective operation of MSHA’s safety and health programs. The distribution of the field offices minimizes the travel time and costs of the inspection and technical staff, which increases the time available for inspection and compliance assistance activities. Also, the proximity of the field offices to the nation’s mines allows MSHA to be more accessible to the mining community and to respond quickly to mine emergencies.
Table III.9 provides key information about the 16 district offices, 17 subdistrict offices, 108 field offices, 11 field duty stations, and one training center that compose MSHA's field structure. MSHA's various offices generally perform the following functions:

District offices. A district office is responsible for monitoring all active mining in its jurisdiction. One set of MSHA district offices monitors coal mines, while the other oversees the activities of mines that produce metals and nonmetals. A district office provides the managerial oversight and administrative support for the subdistrict and field offices.

Subdistrict offices. These offices provide the direct technical supervision of the field offices and field duty stations.

Field offices. A field office is under the direct supervision of a subdistrict office. Field office staff generally inspect coal or metal/nonmetal mines or supervise those who do.

Field duty stations. These offices generally perform the same functions as field offices, except no supervisors are on site. One or two mine inspectors staff a field duty station and are supervised by a field office.

Training center. The National Mine Health and Safety Academy in Beckley, West Virginia, is responsible for providing training services and training programs for miners and MSHA employees.

Other offices. The Safety and Health Technology Center in Bruceton, Pennsylvania, provides engineering and scientific capability to assist MSHA, states, and the mining industry in identifying and solving technological mine safety and health problems. MSHA's Approval and Certification Center in Triadelphia, West Virginia, approves, certifies, and accepts machinery, instruments, materials, and explosives for underground and surface mines. Both centers report to MSHA headquarters.

Because most of the nation's coal mines are located in the Appalachian area, 8 of the 10 district offices for Coal Enforcement were located in Pennsylvania, Virginia, West Virginia, and Kentucky in fiscal year 1995. The district offices for coal mines west of the Mississippi and in the north central part of the nation were in Colorado and Indiana. However, the district offices for metal/nonmetal mines were more widely distributed because these mines are more widely dispersed throughout the country. According to MSHA, it continually assesses its field structure to best ensure the safety and health of U.S. mine workers and, when necessary, adjusts its office locations to match shifts in mining activity. According to Labor, district offices are generally staffed by district managers, technical staff and assistants, and administrative workers, while field offices are generally staffed by inspectors. Larger field offices have a supervisory inspector as well as a clerk. MSHA employed nearly 20 percent of all Labor on-board field office staff in fiscal year 1995. MSHA staff represented 50 different professional and administrative job categories. Mine safety and health inspectors and engineers made up over 60 percent of MSHA's field office workforce (see fig. III.18). The remaining staff supported these professionals and included job categories such as mine assessment/health clerk, office automation clerk, engineer technician, computer specialist, and financial management specialist. Approximately 71 percent of MSHA's field office staff were at the GS-11, –12, and –13 levels, with half of all MSHA field office staff at the GS-12 level.
Staff at the GS-5 and –6 pay levels composed about 14 percent of MSHA's field office workforce (see fig. III.19). From 6 to 20 staff persons worked in 60 percent of the U.S. localities with MSHA staff (see table III.10). The 15 localities with over 30 staff generally supported MSHA's coal and metal/nonmetal district offices. GS-11 and –12 coal mine safety and health inspectors primarily provided the MSHA presence in the seven localities with one person each. In fiscal year 1995, MSHA field offices occupied space in 123 buildings throughout the United States, totaling 575,865 square feet. About 78,900 square feet was owned by GSA, and about 496,919 square feet was leased from privately owned sources. MSHA used 429,938 square feet for offices and the remainder for storage and other uses such as training, laboratory testing, and conferences (see fig. III.20). At 20 of the 123 buildings MSHA occupied in fiscal year 1995, other Labor components were also located at the same address.

MSHA field office costs totaled $173.3 million in fiscal year 1995. These costs included rent and utilities; staff salaries and benefits; and other costs, such as equipment, supplies, and materials. Rent and utility costs were about $8.8 million, which was 5 percent of MSHA's total field office costs. Costs for staff salaries and benefits totaled $135.3 million and other costs totaled $29.2 million, which were about 78 and 17 percent, respectively, of MSHA's total field office costs.

During fiscal year 1995, MSHA began eliminating coal mine safety and health subdistrict offices as part of a multiyear effort to remove a managerial level from its field structure. Elimination of the metal and nonmetal subdistrict offices was completed in previous years.

In July 1993, Labor Secretary Reich created the Office of the American Workplace (OAW) to provide a national focal point for encouraging the creation of high-performance work place practices and policies. During fiscal year 1995, OAW's mission was implemented by three major subunits: the Office of Work and Technology Policy, the Office of Labor-Management Programs, and the Office of Labor-Management Standards (OLMS). Of these three subunits, OLMS is the only one supported by field offices (see fig. III.21). OAW's 34 field offices help to administer and enforce provisions of the Labor-Management Reporting and Disclosure Act of 1959 (LMRDA), as amended, that establish standards for labor union democracy and financial integrity and require reporting and public disclosure of union reports. They also help to administer related laws, which affect labor organizations composed of employees of most agencies of the federal executive branch and certain other federal agencies subject to similar standards of conduct. To protect the rights of members in approximately 48,000 unions nationwide, OAW provides for public disclosure of reports required by the LMRDA, particularly labor organization annual financial reports; conducts compliance audits to ensure union compliance with applicable standards; conducts civil and criminal investigations, particularly in regard to union officer elections and union funds embezzlement; and provides compliance assistance to union officials and union members to promote knowledge of and conformity with the law. According to Labor, several factors affected its decision to establish OLMS field offices, such as the number and size of labor unions located in a geographic area and the level of statutorily mandated work historically performed in the area.
Field offices allow staff to be within close proximity to the work and generally reduce travel costs. Table III.11 provides key information about the 10 regional offices, 18 district offices, and 5 resident investigator offices. OAW's various field offices generally perform the following functions:

Regional offices. A regional office directly supervises the operations of specific district and/or resident offices. A regional office also is staffed with investigators who conduct (1) civil and criminal investigations, particularly with regard to union officer elections and union funds embezzlement, and (2) investigative audits of unions.

District offices. A district office is responsible for conducting OLMS' investigative work; providing public disclosure of reports in accordance with statutory requirements; and providing guidance and assistance to labor organizations and others to promote compliance with the LMRDA and the agency's requirements.

Resident investigative offices. Investigators in these one- to two-person offices carry out, in selected locations, the same activities performed at the regional and district offices. The offices typically have no on-site manager or clerical support person.

OAW employed 2.3 percent of all Labor on-board field office staff in fiscal year 1995. OAW staff represented six different professional and administrative job categories. Investigations analysts made up over 80 percent of OAW's field office workforce (see fig. III.22). The remaining staff included auditors, computer clerks, and management assistants. Almost 80 percent of OAW's field office staff were frontline workers: GS-11s, –12s, and –13s. Staff at the GS-5 and –6 pay levels made up about 11 percent of OAW's field office workforce (see fig. III.23). About 2 percent of OAW's field staff were part-time employees. From 6 to 10 staff worked in 39 percent of the 33 U.S. localities with OAW staff (see table III.12). Generally, GS-12 investigations analysts provided the OAW presence in those localities with only one staff person. According to GSA, OAW field offices occupied space in 38 buildings throughout the United States, totaling 67,465 square feet in fiscal year 1995. Of this total, 28,953 square feet was owned by GSA, and 38,512 square feet was leased from privately owned sources. OAW used 78 percent of this space for offices and the remainder for storage and other activities (see fig. III.24). At 31 of the 38 buildings OAW occupied in fiscal year 1995, other Labor components were also located at the same address.

OAW field costs totaled $18.6 million in fiscal year 1995. These costs included rent and utilities; staff salaries and benefits; and other costs, such as equipment, supplies, and materials. Rent and utility costs were $1.3 million, which was 7 percent of OAW's total field office costs. Costs for staff salaries and benefits totaled $14.1 million and other costs totaled $3.2 million, which were about 76 and 17 percent, respectively, of OAW's total field office costs.

OAW is in the process of reorganizing to streamline field office management and operations. The target field structure consists of 20 field offices, some with resident investigative offices, divided into five geographic regions. The reorganization is expected to eliminate two and, in some instances, three layers of program review, significantly expand supervisory spans of control, and increase the number of resident investigative offices.
A GM-15 regional manager with redefined responsibilities will oversee each region. Consolidation and restructuring will eliminate 5 GM-15 regional director, all 10 GM-14 deputy regional director, and 22 GM-13 supervisory investigator or district director positions. District offices will be headed by a single manager, a GM-13 or GM-14 office director, except that the Washington, D.C., and New York offices will have two office managers—a district director and a supervisory investigator—because of the large numbers of international unions in those office jurisdictions and the resulting level of complex casework, including International Compliance Audit program cases. All but those two GM-13 supervisory investigator positions will be eliminated. Most GM-13 supervisory investigator positions and GM-13 district director positions in small offices will be converted to GS-13 senior investigator positions, and a number of additional such positions will be established. Senior investigators primarily will have case-related duties and will serve as team leaders and resource persons to other investigators. In offices without on-site managers, senior investigators will also serve as the local OAW representative. No senior investigator will have managerial functions. On-site manager positions will be eliminated in the Minneapolis district office and the Kansas City regional office. The Puerto Rico and Honolulu offices will retain small investigator staffs without supervisory or clerical staff, but because of their relative geographic isolation, will continue to maintain statutorily required reports for public disclosure. Without eliminating OAW's presence in areas where offices now exist, including all Labor regional cities, the number of full-service regional and district offices will be reduced by converting a number of small offices to resident status without public report disclosure responsibilities. OAW will convert full-service offices in Houston, New Haven, Tampa, Miami, and Newark to resident investigative offices. OAW will continue to consider whether additional resident investigative offices are needed on the basis of workload, customer service needs, and travel cost reductions. These types of offices will be staffed with one or two investigators and will have no on-site managers or clerical support, as is typical now among investigative resident offices.

The Office of Inspector General (OIG) is responsible for providing comprehensive, independent, and objective audits and investigations to identify and report program deficiencies and improve the economy, efficiency, and effectiveness of Labor operations. The OIG is also responsible for ensuring employee and program integrity through prevention and detection of criminal activity, unethical conduct, and program fraud and abuse. The OIG provides Labor participation in investigations under the Department of Justice's Organized Crime Strike Force Program. The OIG fulfills its responsibilities through two major offices—Audit and Investigation—that are supported by 44 field offices (see fig. III.25). The primary mission of the Office of Audit is to conduct and supervise audits of (1) programs administered by Labor and (2) internal operations and activities. Two divisions within the Office of Investigation—Program Fraud and Labor Racketeering—carry out the mission of this office.
The primary responsibility of the Division of Program Fraud is to investigate allegations of fraud, waste, and abuse reported by any citizen or Labor program participant or employee. The Division of Labor Racketeering conducts investigations regarding employee benefit plans, labor-management relations, and internal union affairs. The OIG conducts many of its mission-related activities at its field offices for several reasons. According to Labor, the Office of Audit's field structure provides the greatest oversight of Labor programs because it mirrors the Department's decentralized structure and minimizes travel costs. The field structure of the Division of Program Fraud was set up to be compatible with Labor's regional cities so that Program Fraud staff could have immediate access to Labor program managers. Because travel is substantial for Program Fraud staff due to the large geographic areas covered by Labor's many field offices and programs, Labor believes that establishing central field office locations in major cities provides the most economic travel possible. The Division of Labor Racketeering has offices in those cities that have historically had serious organized crime problems. Labor Racketeering agents, therefore, travel little because most of their work is in the cities where offices have been established. Table III.13 provides key information about the 9 operating offices, 23 resident offices, and 11 field offices that support the OIG's operations. OIG's various field offices generally perform the following functions:

Operating offices (Office of Audit). These offices lead and conduct economy and efficiency audits of Labor programs and assess the (1) financial management and performance measures of Labor programs, (2) program and financial results, and (3) organizations and operations of Labor grantees and contractors.

Resident offices. Resident office staff examine fraud complaints reported on the hotline or in person. These offices are also staffed with labor racketeering investigators.

Field offices. Field office staff develop and investigate labor racketeering cases in the largest organized crime centers in the United States and supervise the activities of investigators in selected resident offices.

OIG staff represented 11 different professional and administrative job categories. Criminal investigators made up almost half of OIG's field office workforce (see fig. III.26). The remaining staff were auditors and other clerical and support staff. GS-11s, –12s, and –13s represented almost 66 percent of the OIG's field office workforce. Staff at the GS-5 and –6 pay levels constituted less than 6 percent of the OIG's field staff (see fig. III.27). Less than 2 percent of the OIG's total on-board staff worked part time. From 1 to 10 Labor staff represented the OIG in over 75 percent of the 28 U.S. localities with OIG staff (see table III.14). Generally, a GS-12 or –13 criminal investigator or a GS-7 investigator assistant provided the OIG presence in the four localities with only one staff person. Four localities had over 30 OIG employees—these localities generally corresponded with the locations of the OIG's Office of Audit operating offices. In fiscal year 1995, the OIG maintained five field offices each in Washington, D.C., and New York. According to GSA data, OIG field offices occupied space in 32 buildings throughout the United States in fiscal year 1995, totaling 79,977 square feet.
About 36,500 square feet of this space was owned by GSA and 42,522 square feet was leased from privately owned sources. OIG used 67,867 square feet for offices and the remainder for storage and other uses (see fig. III.28). At 24 of the 32 buildings OIG occupied in fiscal year 1995, other Labor components were also located at the same address.

Field office costs for the OIG totaled $28.9 million in fiscal year 1995. These costs included rent and utilities; staff salaries and benefits; and other costs, such as equipment, supplies, and materials. Rent and utility costs were $1.8 million, which was 6 percent of total field office costs for the OIG. Costs for staff salaries and benefits totaled $23.8 million and other costs totaled $3.1 million, which were about 82 and 11 percent, respectively, of the OIG's total field office costs.

Plans to restructure the OIG's entire field structure were in process in fiscal year 1995; they resulted in the elimination of eight field offices in fiscal year 1996, a realignment of management functions, and fewer GM-15 positions. OIG will evaluate its Washington, D.C., field offices. In fiscal year 1996, OIG reorganized the five New York field offices and has not replaced any losses at one-person offices.

The primary mission of the Occupational Safety and Health Administration (OSHA) is to ensure a work environment for American workers that is free from safety and health hazards. Staff at the 107 field offices that support OSHA (1) inspect work places to ensure compliance with health and safety standards and (2) provide advice, assistance, and services to employers and employees to prevent work place injuries and illnesses. OSHA field offices also provide technical assistance as needed to the 25 states with their own—yet federally approved—occupational safety and health programs. The field offices also monitor work place activities not covered by the state plans. Figure III.29 shows the locations of OSHA field offices. Among OSHA's field offices are a training facility, two laboratories, and five resource centers.

[Figure III.29: Locations of OSHA Field Offices, Fiscal Year 1995. The figure also identifies states with their own occupational safety and health programs; the programs in New York and Connecticut cover only state and local government employees.]

OSHA conducts most of its mission-related activities at its field offices for several reasons. According to OSHA officials, the field offices provide greater visibility and access to employers and employees and allow OSHA to locate staff with the necessary expertise near specific industries (such as the petrochemical companies in Houston, Texas). As part of its responsibility to monitor state occupational safety and health programs, OSHA maintains area offices in the state capitals of the 25 states with their own programs. In those states with no state occupational safety and health programs, OSHA attempts to establish field offices that are centrally located near large concentrations of industrial and other work sites. The location of OSHA area offices near industrial concentrations not only permits OSHA to effectively schedule and use staff and travel resources but also enables its staff to respond rapidly to accidents and imminent danger notifications. Finally, federal policy and other considerations have dictated that field offices be placed in certain central city locations.
Table III.15 provides key information about the 10 regional offices, 83 area offices, 6 district offices, 5 resource centers, 2 technical centers, and 1 training facility that compose OSHA's field office structure. OSHA's various field offices generally perform the following functions:

Regional offices. A regional office provides the guidance and administrative support for all of the other OSHA field offices operating in a particular region.

Area offices and resource centers. An area office is organized geographically to serve as OSHA's primary link to employers and employees at local work sites. Staff stationed at these types of offices perform safety and health activities, such as routine work place inspections, and provide technical assistance to employers. They also document complaints about unsafe work place practices and respond to accidents and imminent danger notifications. Offices in OSHA's San Francisco region serve the same purpose but are identified as "resource centers" because they are organized functionally rather than geographically.

District offices. A district office is a small outstation reporting to an area office. District offices provide safety and health services in geographic areas that are remote from an area office but have a concentration of work places.

Technical centers. OSHA maintains these centers in Salt Lake City, Utah, and Cincinnati, Ohio. Their primary function is to analyze air and substance samples taken during work place inspections and to calibrate the equipment that the inspectors use.

Training institute. This is a centrally located facility in Des Plaines, Illinois, used to train occupational safety and health personnel from OSHA, its state counterparts, and other federal safety and health professionals, as well as the public on a space-available basis.

In fiscal year 1995, every state and territory had at least one OSHA field office except South Dakota, Vermont, Wyoming, and Guam (see fig. III.29). OSHA's field offices with the largest numbers of staff were in the federal region cities of Boston, New York, Philadelphia, Atlanta, Chicago, Dallas, Kansas City, Denver, San Francisco, and Seattle. OSHA employed about 17 percent of all Labor on-board field office staff in fiscal year 1995. OSHA staff represented almost 50 different professional and administrative job categories. Occupational safety and health managers/specialists and industrial hygienists made up approximately 66 percent of OSHA's field office workforce (see fig. III.30). The remaining staff included safety engineers; chemists; computer specialists; program analysts; accountants; and clerical workers, such as safety/health assistants, clerks, and secretaries. Approximately 70 percent of OSHA's field office staff were at the GS-11, –12, and –13 grade levels. Staff at the GS-5 and –6 pay levels constituted about 13 percent of OSHA's field office workforce (see fig. III.31). Less than 1 percent of OSHA's on-board staff in fiscal year 1995 worked part time. From 11 to 30 staff persons worked in 59 percent of the 97 U.S. localities with an OSHA presence (see table III.16). Thirteen localities—which generally represented the locations of OSHA's regional offices—had over 30 OSHA employees. In fiscal year 1995, OSHA field offices occupied space in 115 buildings throughout the United States, totaling 550,535 square feet.
About a fifth of the space (115,804 square feet) was owned by GSA, and almost 80 percent (434,731 square feet) was leased from privately owned sources. OSHA used about 72 percent of this space for offices and the remainder for storage and other activities (see fig. III.32). At 61 of the 115 buildings OSHA occupied in fiscal year 1995, other Labor components were located at the same address.

Field office costs for OSHA totaled $146 million in fiscal year 1995. These costs included rent and utilities; staff salaries and benefits; and other costs, such as equipment, supplies, and materials. Rent and utility costs were $10.7 million, which was 7 percent of OSHA's total field office costs. Costs for staff salaries and benefits totaled $104.4 million and other costs totaled $31 million, which were about 72 and 21 percent, respectively, of OSHA's total field office costs. OSHA reported no proposed changes to its field office structure.

The primary mission of the Pension and Welfare Benefits Administration (PWBA) is to help protect the retirement and benefit security of America's workers as required under the Employee Retirement Income Security Act of 1974 (ERISA) (29 U.S.C. 1000 note) and the Federal Employees' Retirement System Act. PWBA is charged with ensuring the responsible management of nearly 1 million pension plans and 4.5 million health and welfare plans. It also oversees a vast private retirement and welfare benefit system. PWBA's major activities include evaluating and monitoring the operations of private sector pensions. PWBA conducts many of its mission-related activities through its 15 field offices (see fig. III.33). PWBA's field structure facilitates customer assistance to pension plan participants and beneficiaries in major metropolitan areas. Decisions about the number and location of PWBA field offices are based on several factors: the number of employee benefit plans in a locality, the locations of major financial centers, and the existing Labor administrative support structure. Table III.17 provides key information about PWBA's 10 regional offices and 5 district offices. PWBA's field offices generally perform the following functions:

Regional offices. These offices conduct investigations of employee benefit plans. When civil violations of title I of ERISA are found, the regional office staff seek voluntary corrections and/or recommend and support litigation by SOL. Criminal investigations are conducted by staff at the direction of U.S. Attorneys' offices, which litigate the criminal cases. Regional staff also provide assistance to employee benefit plan participants and professionals who contact the office with questions or complaints.

District offices. A district office carries out the same enforcement and customer service functions as a regional office. District office staff are directly supervised by an affiliated regional office. District offices, which have smaller staffs, provide a physical presence in select regions that may be larger geographically. According to Labor, this minimizes the travel time of investigators who conduct on-site investigations and provides a presence in additional metropolitan areas.

PWBA staff represented 11 different professional and administrative job categories. Over 80 percent of PWBA's field office workforce was composed of investment/pension specialists and auditors (see fig. III.34). The remaining staff were in job categories that included employee benefit plan clerk or assistant, secretary, and computer specialist.
Sixty-two percent of PWBA's field office staff were in grades GS-11 through –13. Staff at the GS-5 and –6 pay levels constituted about 10 percent of PWBA's field office workforce (see fig. III.35). Less than 3 percent of PWBA's total on-board staff worked part time. Table III.18 shows that six or more staff persons provided a PWBA presence in 15 U.S. localities. Localities with 21 or more PWBA staff generally represented the component's regional offices in these areas. In fiscal year 1995, PWBA field offices occupied space in 17 buildings throughout the United States, totaling 75,129 square feet. GSA owned 9,068 square feet of this space, and 66,061 square feet were leased from private sources. According to GSA, PWBA used 65,321 square feet of its space in the field for offices and the remainder for storage and other purposes—such as conference and training activities and food service (see fig. III.36). At 12 of the 17 buildings PWBA occupied in fiscal year 1995, other Labor components were also located at the same address.

PWBA field office costs totaled $27.5 million in fiscal year 1995. These costs included rent and utilities; staff salaries and benefits; and other costs, such as equipment, supplies, and materials. Rent and utility costs were $1.6 million, which was about 6 percent of total field office costs for PWBA. Costs for staff salaries and benefits totaled $21.8 million, and other costs totaled $4.1 million, which were about 79 and 15 percent, respectively, of PWBA's total field office costs. PWBA reported no proposed changes to its field office structure.

The Veterans' Employment and Training Service (VETS) is responsible for administering veterans' employment and training programs and activities to ensure that legislative and regulatory mandates are accomplished. Its primary mission is to help veterans, reservists, and National Guard members to secure employment and their associated rights and benefits through existing programs and the coordination and implementation of new programs. VETS strives to ensure that these programs are consistent with the changing needs of employees and the eligible veteran population. VETS conducts much of its mission-related activities from 108 field offices (see fig. III.37) for several reasons. According to Labor, the field offices are strategically located to minimize travel costs as well as to facilitate interagency liaisons and communications. With VETS' field offices located in 80 percent of America's 100 largest cities, field staff are close to employers, which helps to prevent reemployment rights claims and, when claims are made, facilitates their resolution. Field offices also allow VETS staff to perform monitoring and technical assistance activities more effectively and efficiently with reduced travel costs. Table III.19 provides key information about the 10 regional offices and 98 state offices that compose VETS' field structure. In fiscal year 1995, VETS maintained regional offices in each of the federal region cities: Boston, New York, Philadelphia, Atlanta, Chicago, Dallas, Kansas City, Denver, San Francisco, and Seattle. In addition, VETS had a field office presence in every state—sometimes with as many as seven offices per state, as in Texas. VETS' field offices generally perform the following functions:
Regional offices. Regional office staff primarily (1) resolve claims made by veterans, reservists, and National Guard members when their reemployment rights have been denied by their civilian employers; (2) evaluate, through on-site visits, compliance by state employment security agency offices with veterans' services requirements as dictated by federal regulations; and (3) monitor the performance of VETS' grantees.

State offices. State office staff work closely with and provide technical assistance to state employment security agencies and Job Training Partnership Act grant recipients to ensure that veterans are provided the priority services required by law. They also coordinate with employers, labor unions, veterans service organizations, and community organizations through planned public information and outreach activities. In addition, they give federal contractors management assistance in complying with their veterans affirmative action and reporting obligations.

VETS staff represented five different professional and administrative job categories. Veterans employment representatives and program specialists made up approximately 70 percent of VETS' field office workforce (see fig. III.38). The remaining staff included veterans reemployment rights compensation specialists, clerks, and other support staff. Approximately 42 and 25 percent of VETS' field office staff were GS-12s and –13s, respectively. Staff at the GS-5 and –6 pay levels constituted about 24 percent of VETS' field office workforce (see fig. III.39). Less than 1 percent of VETS' on-board staff worked part time. From one to five VETS staff were located in 83 localities, and about 38 percent of these locations were staffed by one person. Generally, GS-12 veterans employment representatives provided the VETS presence in the localities with only one person. No single locality had more than 10 VETS staff stationed there (see table III.20). In fiscal year 1995, VETS field offices occupied space in 13 buildings throughout the United States, totaling 12,811 square feet. GSA owned 5,634 square feet of VETS field office space, and 7,177 square feet were leased from private sources. VETS used 12,423 square feet of its total field space for offices and the remainder for other uses (see fig. III.40). At 11 of the 13 buildings VETS occupied in fiscal year 1995, other Labor components were also located at the same address.

Field office costs for VETS totaled $16.7 million in fiscal year 1995. These costs included rent and utilities; staff salaries and benefits; and other costs, such as equipment, supplies, and materials. Rent and utility costs were $289,839, which was about 2 percent of VETS' total field office costs. Costs for staff salaries and benefits totaled $13.4 million, and other costs totaled $3 million, which were 80 and 18 percent, respectively, of VETS' total field office costs.

VETS is awaiting congressional approval to reduce the number of field offices that support its operations. VETS has also reduced staff through attrition.

[Table V.1: Total Labor Field Offices and Staff by Federal Region. The table's row-by-row data did not survive extraction and is omitted; its notes read "No official field office. Employee supervised out of another office." and "Outstationed staff working out of home."]
| Pursuant to a congressional request, GAO provided information about the field offices supporting the Departments of Education and Labor, focusing on the field offices': (1) locations; (2) functions, staffing, space and operating costs; and (3) proposed structural changes. GAO found that: (1) in fiscal year 1995, the Department of Education had 72 field offices and the Department of Labor had 1,074 field offices; (2) Labor and Education spent a total of $867 million in support of their field office operations; (3) about 94 percent of Education's field staff and 42 percent of Labor's field staff were located in ten regional cities; (4) Labor had a high concentration of staff in its field offices, reflecting the agency's general responsibilities; (5) the majority of the amount spent in supporting field office operations was for staff salaries and benefits; and (6) Labor and Education are planning to make changes in their field office structures to improve efficiency and contain administrative costs.
MPP Gilles Bisson (Timmins-James Bay) wants the Ministry of Transportation to look into its use of Agent Orange to clear roadside brush across the province. TORONTO STAR FILE PHOTO
Tanya Talaga and Diana Zlomislic Staff Reporters
Premier Dalton McGuinty chastised past Conservative governments for failing to warn people about the province’s widespread use of Agent Orange as controversy surrounding the cancer-causing herbicide spread to the Ministry of Transportation.
“Why is it they didn’t disclose that during those years in terms of danger and who it was might have been affected?” the Liberal leader asked Thursday in Barrie.
McGuinty said his government will make every effort to discover why the issue was buried for decades. The Progressive Conservatives, meanwhile, say blaming them for something that occurred years ago is “offensive.”
A Toronto Star investigation revealed the poisonous chemicals were widely used by the Ontario government’s lands and forests department and by timber companies to clear massive plots of Crown land during the 1950s, ’60s and ’70s.
The Ministry of Natural Resources is now leading a cross-government probe.
McGuinty pointed the finger squarely at former Tory governments for allowing spraying to occur in the first place.
“We are going to make efforts to ensure we uncover what happened during those years of Conservative government, when there was the use of this harmful chemical,” McGuinty said.
Also Thursday, Transportation Minister Kathleen Wynne admitted her ministry also used Agent Orange for decades until 1980 to clear brush and trees near roadsides.
“We’d like to uncover some more information about that as well,” McGuinty said.
The premier’s comments outraged Tories.
“At the end of the day, I don’t believe any government would attempt to cover something like this up,” said PC MPP Frank Klees (Newmarket-Aurora).
“This issue shouldn’t be used as a political or partisan opportunity,” he told the Star.
“I do think the premier, in commenting on this, is stooping to a partisan level which is highly inappropriate and offensive.”
At least one high-profile member of the Progressive Conservatives knew about the use of Agent Orange in Northern Ontario prior to the Star’s exposé.
Progressive Conservative Leader Tim Hudak received an email last October from a former supervisor in charge of an aerial herbicide spraying program in Kapuskasing, Ont., alerting him to the issue.
Don Romanowich, 63, worked for Spruce Falls Power and Paper Company during the 1960s and 1970s, which operated on more than 4 million acres of Crown land in Northern Ontario and routinely used Agent Orange — with the government’s blessing — to kill unwanted “weed trees.”
Recently diagnosed with a form of non-Hodgkin lymphoma, which has been associated with herbicide exposure, Romanowich felt a duty to inform his former co-workers, many of whom were just high school students or junior rangers at the time. He hoped Hudak might help.
“Any effort that you could make to see that full disclosure is made to those who may have been exposed would be appreciated,” he wrote in his email dated Oct. 29, 2010.
Hudak’s office never responded. A spokesperson for the Conservative MPP confirmed Thursday night the email was received but not acted upon because the email “did not request any follow-up from our office.”
New Democratic Party MPP Gilles Bisson (Timmins-James Bay) has pounced on the issue since the Star’s investigation.
Bisson told the Legislature that former Ministry of Transportation workers contacted him after the Star's investigation to say Agent Orange was used to clear roadsides until 1980, prompting the Liberal minister's admission.
Wynne told reporters she didn’t have “specific information” on how much of the toxin was used by the Ministry of Transportation, but an independent panel will be created by the Ministry of Natural Resources to investigate fully.
NDP Leader Andrea Horwath wants to know why a politician in her own party is breaking this news instead of the ministry, which, she said, is supposed to be leading a “transparent” investigation.
“Where else is there documentation or information buried that the government hasn’t come clean on?” Horwath said.
“This is a shameful commentary of the government’s handling of the situation,” she added. “They are the ones that said, ‘We are going to dig into this, that we will be transparent and provide all information that we have.’”
Agent Orange — an equal parts mixture of herbicides 2,4-D and 2,4,5-T — was the most widely used toxin during the Vietnam War. It was employed by the U.S. military to strip the country’s triple-canopy jungles, exposing Viet Cong troops.
Exposure to this chemical mix has been associated with more than 50 medical conditions by the U.S. Department of Veterans Affairs.
Until now, most Ontarians had no idea these same cancer-causing chemicals were being used in a big way much closer to home.
The Star obtained historic spraying reports on file with the provincial archive showing that the government authorized and endorsed the use of Agent Orange to clear broad-leaf trees like maple, birch and alder from government land across the province, leaving more sunlight and soil nutrients for the more profitable spruce trees to flourish.
A timber company in Northern Ontario that operated on more than 4 million acres of Crown land during these decades employed high school students and junior rangers to act as “balloon men” holding red, helium-filled balloons on fishing lines to guide the low-flying spraying planes above them. The planes sprayed hundreds of gallons of the chemical mixture on these workers for weeks at a time.
The Star so far has received hundreds of emails and calls from former balloon men and other forestry workers who participated in the aerial spraying program.
Many wonder if their exposure to the chemicals has contributed to their health problems, such as low-sperm count, various cancers and curious skin conditions. ||||| This is the second page of a 1964 aerial herbicide spraying report from Spruce Falls Power and Paper Company, which was copied to the Department of Lands and Forests at the time. The report shows the company used a combined mixture of 2,4-D and 2,4,5-T in equal parts, which is more commonly known as Agent Orange. The spray covered more than 3,000 acres in Kapuskasing District. The report identifies the forestry workers involved with the spray campaign.
This is a copy of an aerial herbicide spraying report from Spruce Falls Power and Paper Company, which was copied to the Department of Lands and Forests at the time. The report shows the company used a combined mixture of 2,4-D and 2,4,5-T in equal parts, which is more commonly known as Agent Orange. The spray covered more than 3,000 acres in Kapuskasing District. (Archives of Ontario)
Don Romanowich has been diagnosed with a type of cancer common in people exposed to harmful herbicides. (Glenn Lowson/TORONTO STAR)
Diana Zlomislic
Staff Reporter
Cancer-causing toxins used to strip the jungles of Vietnam were also employed to clear massive plots of Crown land in Northern Ontario, government documents obtained by the Toronto Star reveal.
Records from the 1950s, '60s and '70s show forestry workers, often students and junior rangers, spent weeks at a time as human markers holding red, helium-filled balloons on fishing lines while low-flying planes sprayed toxic herbicides, including an infamous chemical mixture known as Agent Orange, on the brush and the boys below.
“We were saturated in chemicals,” said Don Romanowich, 63, a former supervisor of an aerial spraying program in Kapuskasing, Ont., who was recently diagnosed with a slow-growing cancer that can be caused by herbicide exposure. “We were told not to drink the stuff but we had no idea.”
A Star investigation examined hundreds of boxes of forestry documents and found the provincial government began experimenting with a powerful hormone-based chemical called 2,4,5-T — the dioxin-laced component of Agent Orange — in Hearst, Ont., in 1957.
The documents, filed at the Archives of Ontario, describe how WWII-era Stearman biplanes were fitted with 140-gallon tanks containing the chemicals, which were usually diluted in a mix of fuel oil and water.
Less than 10 years later, the Department of Lands and Forests (now the Ministry of Natural Resources) authorized the use of a more potent mixture of 2,4-D and 2,4,5-T for aerial spraying. The combination of those two herbicides in equal parts comprised Agent Orange — the most widely used chemical in the Vietnam War.
Over the years, spraying was done by both the province and timber companies. Hundreds of forestry workers were involved, but the documents do not give an exact number.
After the Star presented its findings to the natural resources ministry — including copies of the government’s own records and research based on interviews with ailing forestry workers now scattered across Canada — a spokesperson said the government is investigating and has notified Ontario’s Chief Medical Officer of Health.
“We can acknowledge that a mixture of 2,4-D and 2,4,5-T under various brand names were used in Ontario,” ministry spokesman Greg MacNeil wrote the Star in an email. Though he confirmed the use of a mixture known commonly as Agent Orange, MacNeil said the government never used a “product” called “Agent Orange.”
Dr. Wayne Dwernychuk, a world-renowned expert on Agent Orange, said the government is “throwing up a smokescreen.”
“There was no categorical brand called Agent Orange,” said Dwernychuk, who for more than 15 years conducted extensive research on the impact of toxic defoliants in Vietnam. “There was nothing coming out of any of the chemical companies in a barrel that had Agent Orange written on it. That’s laughable.
“If it’s got 2,4,5-T and 2,4-D as a mixture, it’s Agent Orange and it has dioxin — I guarantee it,” said Dwernychuk, who recently retired as chief scientist from Vancouver-based Hatfield Consultants.
Medical studies have determined the type of dioxin found in Agent Orange latches on to fat cells and can remain in the body for decades. Exposure may lead to skin disorders, liver problems, certain types of cancers and impaired immune, endocrine and reproductive functions.
Agent Orange may have been employed earlier than 1964 in Northern Ontario but the Star was told access to additional records is guarded by privacy legislation. The ministry said it does not have centralized spraying records prior to 1977 and suggested the newspaper “follow the procedures set up in the freedom of information act” to get a “complete picture of the data.”
The Star’s investigation exposes the first widespread use of these chemicals in Canada outside of a military spraying operation.
The Ministry of Natural Resources said it is working with the ministries of Health, Labour and Environment “to ensure this matter is thoroughly investigated and that worker health and safety is protected.”
The only other case on record of Agent Orange and other toxic defoliants being used en masse in Canada occurred in New Brunswick.
The U.S. military tested defoliants including Agent Orange at Canadian Forces Base Gagetown in 1966 and 1967, according to a federal government inquiry that occurred 40 years later.
As of Dec. 22, 2010, the Canadian government had issued 3,137 tax-free compensation payments of $20,000 each to people who lived or worked at CFB Gagetown during the years when spraying occurred and were diagnosed with one of 12 medical conditions associated with exposure, as identified by the Institute of Medicine. The federal government expects to approve thousands of additional applications for compensation before the June 30 deadline.
The U.S. military began spraying “hormone herbicides” like Agent Orange in South Vietnam in 1961.
Agent Orange was one of a rainbow of poisonous warfare chemicals that got its name from a band of colour painted on the barrels it was shipped in. The mixture itself was colourless.
“The U.S. military called it orange herbicide,” Dwernychuk said. “It was the American press that labelled it ‘Agent Orange’ because it was more sexy.”
The mixture ate through vast swaths of jungle, exposing Viet Cong strongholds.
Nearly 20,000 kilometres away in Northern Ontario, toxic herbicides were employed to disable a different kind of enemy.
The chemicals targeted what forestry reports described as “weed trees” — including birch, maple, poplar and shrubs — which stole sunlight and soil nutrients from young, profitable spruce species. The hormones in the defoliants caused the broad leaves on these weed trees to grow so quickly they starved to death.
In 1956, with the government’s blessing, Spruce Falls Power and Paper Company in Kapuskasing pioneered the aerial spraying of herbicides in Northern Ontario. The New York Times, which co-owned Spruce Falls with Kimberly-Clark and the Washington Star, printed its Sunday edition on black spruce, renowned for its tough fibres. (Tembec, a company that purchased Spruce Falls in 1991, did not respond to interview requests).
Aerial spraying programs were considered a cheap, fast and effective way to alter the landscape of Ontario’s forests for maximum profit. Timber companies and the government worked together to increase the output of money-making trees like white and black spruce while culling nearly everything else that got in their way.
In the mid-1960s, Spruce Falls held about 4 million acres of forest land under lease from the Ontario government and owned an additional 180,000 acres. The incomplete documents don’t provide a total number of acres sprayed.
After a bone marrow test confirmed he had non-Hodgkin lymphoma, Romanowich, who worked for Spruce Falls during the 1960s and 1970s, said his first thought was to track down former colleagues.
“My oncologist asked me about heavy exposure to herbicides before I mentioned my work at Spruce Falls,” said the retired maintenance manager who lives in the Niagara region. “There is no absolute confirmation of this type of exposure being the cause but a very strong correlation that should be taken seriously. I am fortunate in that I will now be monitored on a regular basis with CAT scans and blood tests to watch for the inevitable flare-ups that can be treated with chemotherapy.”
He wants others who worked on these spraying programs to have the same chance to receive thorough medical exams based on their exposure.
He contacted the Ministry of Natural Resources in October but received no response until late last month, nearly four weeks after the Star began its own investigation.
The government records list the names of five supervisors who worked on spraying programs in Northern Ontario during the 1950s and 1960s. Four of the five have either been diagnosed with or died of cancer. Their job included mixing chemicals and standing in the fields supervising spray campaigns. Teenaged workers are also listed in the records and the Star is working to track them down.
One name on the list, David Buchanan, always wondered what was inside the 45-gallon oil drums he worked with as a 15-year-old at Spruce Falls Power and Paper Company in 1964.
“Even then, it didn’t seem right,” said Buchanan, now a 61-year-old dentist in Sackville, N.S., who has suffered from a series of illnesses doctors couldn’t diagnose. Body-covering hives. Persistent bouts of dizziness. A sperm count so low he couldn’t have children.
“I have had every test known to mankind,” he said.
“I often wondered if some of my symptoms were related to something that happened in my childhood.”
His job as a summer student was to hand-pump vats of brush-and-tree-killing chemicals into the airplane sprayer.
“We got soaked,” Buchanan said. “I can’t remember what we did with our clothes but we stayed in the bush camp during spraying for weeks on end.” He does recall wearing a black rubber apron, brown rubber gloves and rubber boots while mixing and pumping the chemicals.
One document from 1962 recommended keeping an extra supply of rubber balloons handy because “the balloons do deteriorate from the spray mixture.”
As a college student, Paul Fawcett, now 62, also worked on Spruce Falls’ aerial spraying program. He was a 21-year-old “balloon man” during the summer of 1969. His father Don worked for the ministry as a district forester in Kapuskasing.
There was no uniform, Fawcett said, just jeans and a shirt — usually long-sleeves because of mosquitoes and flies. He recalls being covered in a fine mist or droplets from the spray plane.
“It was a lot of fun,” he said. “We would walk from station to station with red helium-filled balloons on fishing lines and the planes would swoop down.”
He recalled researchers from the University of Toronto dropping in on his camp to survey how much spray was getting to the ground.
“They had us lay down ridged, filter papers on the ground or brush while the plane sprayed. We laid them down in a row covering four or five feet.”
Fawcett, now a welder in Hamilton, said he never heard about the results of that study.
Government forestry documents refer to extensive studies that were being conducted on spraying programs at a research facility in Sault Ste. Marie, Ont., but these reports are either missing or misfiled.
Fawcett, whose doctor recently ordered an ultrasound to look into bladder problems, said he had no idea he was working with anything toxic. Aside from the bladder issues, Fawcett said he feels fine.
“It did a good job — what we wanted it to do,” said Clifford Emblin, a former government forestry manager who oversaw chemical spraying programs. “They were using those chemicals in Vietnam, too, for defoliation. Yeah, it was the same stuff. I don’t think anybody knew about the long-term effects.”
The U.S. military stopped using Agent Orange in 1970 after a study for the National Institutes of Health showed that the dioxin-tainted 2,4,5-T caused birth defects in laboratory animals. The U.S. Department of Veterans Affairs now recognizes more than 50 diseases and medical conditions associated with exposure.
Emblin, a former district manager for the Hearst and Hornepayne areas during the 1960s, recalled one of his forestry employees throwing a fit after his truck got caught directly beneath a spray plane’s flight line.
“The truck got sprayed and the paint came off the truck,” Emblin said, chuckling.
Emblin said his ministry didn’t know it was using Agent Orange until “four or five years after we quit using it, I guess, in the 70s.
“We had five sawmills that were depending on the growth of the (spruce) forest in Hearst to make a living,” he said. “That’s why we were doing it. We managed the land and they paid.”
Diana Zlomislic can be reached by email at dzlomislic@thestar.ca or by phone at 416-869-4472 | Canada used Agent Orange, the Vietnam War-era chemical linked to genetic defects, as a means of clearing roadside brush until about 1980. A Toronto Star investigation found that Canada’s forestry workers faced exposure to the chemical, poured from planes, starting in the 1950s; the government promises an inquiry, amid allegations that successive administrations covered up the use, notes the Star. “We were saturated in chemicals,” says one former supervisor. “We were told not to drink the stuff but we had no idea.” |
Our computers and smartphones might seem clean, but the digital economy uses a tenth of the world's electricity — and that share will only increase, with serious consequences for the economy and the environment
A server room at a data center. One data center can use enough electricity to power 180,000 homes.
Which uses more electricity: the iPhone in your pocket, or the refrigerator humming in your kitchen? Hard as it might be to believe, the answer is probably the iPhone. According to a new report by Mark Mills — the CEO of the Digital Power Group, a tech- and investment-advisory firm — a medium-size refrigerator that qualifies for the Environmental Protection Agency’s Energy Star rating will use about 322 kWh a year. The average iPhone, according to Mills’ calculations, uses about 361 kWh a year once the wireless connections, data usage and battery charging are tallied up. And the iPhone — even the latest iteration — doesn’t even keep your beer cold. (Hat tip to the Breakthrough Institute for noting the report first.)
[UPDATE: You can see the calculations behind the specific iPhone comparison, which was done by Max Luke of the Breakthrough Institute, at the bottom of the post. It’s important to note that the amount of energy used by any smartphone will vary widely depending on how much wireless data the device is using, as well as the amount of power consumed in making those wireless connections—estimates for which vary. The above example assumes a relatively heavy use of 1.58 GB a month—a figure taken from a survey of Verizon iPhone users last year. (Details at bottom of post.) That accounts for the high-end estimate of the total power the phone would be consuming over the course of a year. NPD Connected Intelligence, by contrast, estimates that the average smartphone is using about 1 GB of cellular data a month, and in the same survey that reported high data use from Verizon iPhone users, T-Mobile iPhone users reported just 0.19 GB of data use a month—though that’s much lower than any other service. Beyond the amount of wireless data being streamed, total energy consumption also depends on estimates of how much energy is consumed per GB of data. The top example assumes that every GB burns through 19 kWh of electricity. That would be close to a worst-case model. The Centre for Energy-Efficient Telecommunications (CEET) in Melbourne assumes a much lower estimate of 2 kWh per GB of wireless data, which would lead to a much lower electricity consumption estimate as well—as little as 4.6 kWh a year with the low T-Mobile data use. In the original version of the post, I should have noted that there is a significant range in estimates of power use by wireless networks, and that this study goes with the very high end.]
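To make the spread in these estimates concrete, here is a minimal back-of-the-envelope sketch in Python (written for this post; the data-use and per-GB figures are simply the competing assumptions cited above, not independent measurements):

    # Annual smartphone energy = monthly data (GB) x 12 x network energy per GB (kWh/GB)
    def annual_kwh(gb_per_month, kwh_per_gb):
        return gb_per_month * 12 * kwh_per_gb

    high = annual_kwh(1.58, 19)  # Verizon-survey data use with the worst-case 19 kWh/GB
    low = annual_kwh(0.19, 2)    # T-Mobile-survey data use with CEET's 2 kWh/GB
    print(round(high), round(low, 1))  # roughly 360 vs 4.6 kWh a year

The two scenarios differ by almost a factor of 80, which is why the choice of assumptions dominates any single headline number.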
The iPhone is just one reason why the information-communications-technologies (ICT) ecosystem, otherwise known as the digital economy, demands such a large and growing amount of energy. The global ICT system includes everything from smartphones to laptops to digital TVs to — especially — the vast and electron-thirsty computer-server farms that make up the backbone of what we call “the cloud.” In his report, Mills estimates that the ICT system now uses 1,500 terawatt-hours of electricity per year. That’s about 10% of the world’s total electricity generation, or roughly the combined power production of Germany and Japan. It’s the same amount of electricity that was used to light the entire planet in 1985. We already use 50% more energy to move bytes than we do to move planes in global aviation. No wonder your smartphone’s battery juice constantly seems on the verge of running out.
As our lives migrate to the digital cloud — and as more and more wireless devices of all sorts become part of our lives — the electrons will follow. And that shift underscores how challenging it will be to reduce electricity use and carbon emissions even as we become more efficient.
Here’s an example: the New Republic recently ran a story arguing that the greenest building in New York City — the Bank of America Tower, which earned the Leadership in Energy and Environmental Design’s (LEED) highest Platinum rating — was actually one of the city’s biggest energy hogs. Author Sam Roudman argued that all the skyscraper’s environmentally friendly add-ons — the waterless urinals, the daylight dimming controls, the rainwater harvesting — were outweighed by the fact that the building used “more energy per square foot than any comparably sized office building in Manhattan,” consuming more than twice as much energy per square foot as the 80-year-old (though recently renovated) Empire State Building.
Why did an ultra-green tower need so much electricity? The major culprit was the building’s trading floors, with their fields of energy-thirsty workstations, five computers to a desk:
Assuming no one turns these computers off, in a year one of these desks uses roughly the energy it takes a 25-mile-per-gallon car engine to travel more than 4,500 miles. The servers supporting all those desks also require enormous energy, as do the systems that heat, cool and light the massive trading floors beyond normal business hours. These spaces take up nearly a third of the Bank of America Tower’s 2.2 million total square feet, yet the building’s developer and architect had no control over how much energy would be required to keep them operational.
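As a rough check, that car comparison is at least internally plausible. In the sketch below, the 33.7 kWh-per-gallon figure is the EPA's standard gasoline energy equivalence; the 140-watt draw per machine is my own illustrative assumption, not a number from the article:

    # Energy to drive 4,500 miles at 25 mpg, expressed in kWh
    gallons = 4500 / 25               # 180 gallons of gasoline
    car_kwh = gallons * 33.7          # EPA equivalence of 33.7 kWh per gallon: ~6,066 kWh
    # Five always-on workstations at an assumed 140 W each
    desk_kwh = 5 * 0.140 * 24 * 365   # ~6,132 kWh a year
    print(round(car_kwh), round(desk_kwh))  # same ballpark, as the quote suggests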
I think — and others agree — that the TNR article was unfair. There’s lots of silliness in the LEED ratings system — see this Treehugger post for evidence — but it’s not the Bank of America building itself that’s responsible for that massive carbon footprint. It’s what’s being done inside the building, as those hardworking computers suck electricity 24 hours a day, seven days a week. The fact that a skyscraper with so many cutting-edge, energy-efficient features can still use so much energy because it needs to play a full-time role in the cloud underscores just how electricity-intensive the digital economy can be.
That’s because the cloud uses energy differently than other sectors of the economy. Lighting, heating, cooling, transportation — these are all power uses that have rough limits. As your air conditioner or lightbulb becomes more efficient, you might then decide to use it more often — in energy efficiency, that is what’s known as the rebound effect. But you can only heat your home so much, or drive so far, before you reach a point of clearly diminishing returns. Just because my Chevy Volt can get 100 miles per gallon doesn’t mean I’m going to drive back and forth to Washington each day. So it stands to reason that as these appliances become more efficient, we can potentially limit and even reduce energy consumption without losing value — which is indeed what’s happened in recent years in the U.S. and other developed nations.
But the ICT system derives its value from the fact that it’s on all the time. From computer trading floors or massive data centers to your own iPhone, there is no break time, no off period. (I can’t be the only person who keeps his iPhone on at night for emergency calls because I no longer have a home phone.) That means a constant demand for reliable electricity. According to Mills, efficiency improvements in the global ICT system began to slow around 2005, even as global data traffic began to spike thanks to the emergence of wireless broadband for smartphones and tablets. As anyone who has ever tried to husband the battery of a dying smartphone knows, transmitting wireless data — whether via 3G or wi-fi — adds significantly to power use. As the cloud grows bigger and bigger, and we put more and more of our devices on wireless networks, we’ll need more and more electricity. How much? Mills calculates that it takes more electricity to stream a high-definition movie over a wireless network than it would have taken to manufacture and ship a DVD of that same movie.
Look at our smartphones: as they become more powerful, they also use more power. Slate’s Farhad Manjoo called this the “smartphone conundrum” in a piece earlier this year:
Over the next few years, at least until someone develops better battery technology, we’re going to have to choose between smartphone performance and battery life. Don’t worry — phones will keep getting faster. Chip designers will still manage to increase the speed of their chips while conserving a device’s power. The annual doubling in phone performance we’ve seen recently isn’t sustainable, though. Our phones are either going to drain their batteries at ever increasing rates while continuing to get faster — or they’re going to maintain their current, not-great-but-acceptable battery life while sacrificing huge increases in speed. It won’t be possible to do both.
And that’s just our phones. What’s unique about the ICT system is that companies keep introducing entirely new product lines. In 1995, you might have had a desktop computer and perhaps a game system. In 2000, maybe you had a laptop and a basic cell phone. By 2009, you had a laptop and a wireless-connected smartphone. Today you may well have a laptop, a smartphone, a tablet and a streaming device for your digital TV. The even more connected might be wearing a Fitbit tracker, writing notes with a wi-fi-enabled Livescribe pen and tracking their runs with a GPS watch. And there will certainly be more to come, as the best minds of our generation design new devices for us to buy. In a piece yesterday, Manjoo reviewed the Pebble, the first — but almost certainly not the last — major “smartwatch.” At a moment when young people are buying fewer cars and living in smaller spaces — reducing energy needs for transportation and heating/cooling — they’re buying more and more connected devices. Of course the electricity bill is going to go up.
None of this is to argue that energy efficiency isn’t important in the ICT sector. Just as the Bank of America Tower’s green features keep its gigantic electricity demand from ballooning even more, efficient smartphones and laptops can slow the growth of the cloud’s carbon footprint. But grow it will. Energy efficiency has never been a big part of the sales strategy for digital devices, probably because electricity is still cheap in the U.S. and it’s something we pay for in bulk at the end of the month. Compare the feeling of paying your utility bill to the irritation of forking out $3.50 a gallon to fill up your car. The costs of electricity are hidden in our society.
That includes the environmental costs. The full title of Mills’ report is The Cloud Begins With Coal: Big Data, Big Networks, Big Infrastructure and Big Power, and it’s sponsored by the National Mining Association and the American Coalition for Clean Coal Electricity. Unsurprisingly, the report argues that coal — still the single biggest source of electricity in the U.S. — essentially powers our wonderful cloud. (And it is wonderful! The cloud generates a lot of value for all the electricity it uses.) Coal is hardly the only source of electricity that can keep the ICT system going — cleaner natural gas is already gaining, nuclear provides carbon-free base-load power, and renewables are growing fast. Certain aspects of the ICT system will also help reduce energy use, as smart grids and smart meters promote conservation. But users of the wireless cloud are likely to grow from 42.8 million people in 2008 to nearly 1 billion in 2014 — and that’s just the beginning, as smartphones spread from the developed to the developing world. We already have a gigantic digital cloud, and it’s only going to get bigger. What we need is a cleaner one.
[Update: Along those lines, digital companies have been taking steps to clean the cloud by procuring more of their energy from low-carbon sources. Apple’s data centers, for instance, are 100% powered by renewable energy, and the company is working to increase renewable energy use overall. Google gets 34% of its energy for operations from renewable sources. Smart companies are looking to site power-hungry data centers near reliable sources of renewable energy: large hydro plants, like the ones near the new data center Facebook recently opened in Sweden, or utility-scale wind farms. Ultimately, though, it’s less the responsibility of the companies themselves than of the economy as a whole to make the shift to cleaner energy. As more and more people buy more and more cloud-connected devices—and as electric cars and other forms of electrified transport replace petroleum-powered vehicles—the demand for electricity will grow. It’s up to us to push to make it cleaner.]
*A note on the calculations of smartphone energy use. This comes from an email by Max Luke, a policy associate at the Breakthrough Institute, which wrote about Mills’ study:
Last year the average iPhone customer used 1.58 GB of data a month, which times 12 is 19 GB per year. The most recent data put out by ATKearney for the mobile industry association GSMA (p. 69) says that each GB requires 19 kWh. That means the average iPhone uses (19 kWh x 19 GB) 361 kWh of electricity per year. In addition, ATKearney calculates each connection at 23.4 kWh. That brings the total to 384.4 kWh. The electricity used annually to charge the iPhone is 3.5 kWh, raising the total to 388 kWh per year. EPA’s Energy Star shows refrigerators with efficiency as low as 322 kWh annually.
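The arithmetic in that email adds up as described. A quick sketch in Python, using only the figures Luke cites:

    # Reproducing the Breakthrough Institute tally quoted above
    data_kwh = 19 * 19         # wireless data: 19 GB a year at 19 kWh per GB = 361 kWh
    connection_kwh = 23.4      # per-connection overhead reported by ATKearney
    charging_kwh = 3.5         # annual battery charging
    print(round(data_kwh + connection_kwh + charging_kwh, 1))  # 387.9, the ~388 kWh cited, vs 322 for the fridge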
Breakthrough ran the numbers on the iPhone specifically—Mills’ endnotes (see page 44 in the report) refer to smartphones and tablets more generally—but Luke notes that Mills confirmed the calculations.
As I noted in the update at the top of the post, these estimates are at the very high end—other researchers have argued that power use by smartphones is much lower. And the Mills study itself has come in for strong criticism from other experts, as this MSN post notes:
Gernot Heiser, a professor at the University of New South Wales in Sydney and co-author of a 2010 study on power consumption in smartphones, echoed Koomey’s sentiments that Mills’ work was flawed. Writing to MSN News, Heiser said Mills’ work “seems blatantly wrong.” He said Mills overestimates the amount of power used by a modern smartphone, in this case a Galaxy S III, by more than four times. “I’d have to have a quick look to see how they arrive at this figure, but it certainly looks like baloney to me,” Heiser said. Gang Zhou, an associate professor of computer science at the College of William and Mary, was less direct in attacking Mills’ claims, but nonetheless said Mills’ measurements for the power consumption of smartphones were at least “one or two magnitude” higher than they should be. Still, Zhou said the subject of data center electricity usage is an important issue and it “should raise concern.”
Still, I think the takeaway from this isn’t about the energy use of individual brands or even whole classes of devices. The point is that as our always-on digital economy grows more extensive—and it will—we need to be more aware of the energy demands that will follow. The study from CEET in Melbourne that I noted in the update at the top of the post assumes much lower power consumption by individual devices than Mills’ work, but it still raises the alarm about the growing energy demand from cloud services.
As I write above, the nature of a smartphone or a tablet makes it hard to realize how much energy it may be using—especially given the fact that the electricity is often produced at plants far away from our outlets. At a gas station, for instance, the immediate cost and the smell of petrol is a potent reminder that we’re consuming energy. The digital economy is built on the sensation of seamlessness—but it still comes with a utility bill. ||||| They weigh less than five ounces, but according to recent data, when you count everything that matters, the average iPhone consumed more energy last year than a medium-sized refrigerator. By the numbers, a refrigerator from the Environmental Protection Agency’s Energy Star ratings list uses about 322 kWh per year. In contrast, the average iPhone used 361 kWh of electricity when you add up its wireless connections, data usage, and battery charging. Considering that a smart phone represents just one device in the ocean of the world’s Information-Communications-Technologies (ICT) ecosystem, it seems superfluous to say that the digital economy is poised to consume massive amounts of energy.
The argument bears repeating, however. Recent media coverage of the cloud suggests that improvements in energy efficiency will curb energy consumption. In his new report titled The Cloud Begins With Coal: Big Data, Big Networks, Big Infrastructure, and Big Power, August 2013, Mark Mills, CEO of Digital Power Group, argues that, despite suggestions otherwise, such efficiency gains will do little to curb overall ICT power consumption. The study, which was sponsored by the National Mining Association and the American Coalition for Clean Coal Electricity, concurs (perhaps unexpectedly) with earlier studies undertaken by Greenpeace, and further illustrates that the rapid growth of the global digital era is transforming our energy ecosystem in unprecedented ways.
According to Mills, the global ICT system is now approaching 10 percent of the world’s electricity generation. By current calculations, the cloud uses about 1,500 TWh of electricity annually, which is equal to the combined electrical generation of Japan and Germany. In the near future, hourly Internet traffic will exceed the Internet’s annual traffic in the year 2000. The global ICT ecosystem now also consumes as much electricity as global lighting did circa 1985 (seen below).
Graph taken from The Cloud Begins With Coal
To ascertain how much energy the global ICT system will require is difficult, largely because the information age constitutes a ‘blue-whale’ economy in which energy use is largely invisible to the public. In fact, according to Mills, current estimates probably understate global ICT energy use by as much as 1,000 TWh since up-to-date data remains undisclosed, and as he asserts, many recent trends (such as wireless broadband) have yet to be added to the energy accounting. What we do know, however, is that much of the future electric demand growth – the EIA forecasts a 15 percent aggregate rise in US electric demand over the next two decades – will come from new ICT.
According to Mills, the rapid growth of ICT will mean that the type of electricity demanded over the next few decades will be significantly different than in the past. Unlike many of the types of energy services that have driven past growth – lighting, heating and cooling, transportation – the ICT ecosystem consists of always-on electricity-consuming devices and infrastructure. In his Forbes column earlier this year, Mills argued: “Demand for resilient and reliable delivery is rising far faster than the absolute demand for power. The future will not be dominated by finding ways to add more renewables to a grid, but by ways to add more resiliency and reliability.” The fundamentally different nature of ICT is demonstrated by comparing its energy density to conventional services. As Mills writes:
The average square foot of a [cloud] data center uses 100 to 200 times more electricity than does a square foot of a modern office building. Put another way, a tiny few thousand square foot data room uses more electricity than lighting up a 100,000-square-foot shopping mall.
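As a rough plausibility check on that claim (every intensity figure below is an illustrative assumption of mine, not a number from Mills' report):

    # Illustrative check of the data-room vs. shopping-mall comparison
    office_kwh_per_sqft = 17         # assumed all-purpose intensity of a modern office
    multiplier = 150                 # midpoint of the 100-to-200x range quoted above
    mall_lighting_kwh_per_sqft = 8   # assumed lighting-only intensity for a mall
    data_room_kwh = 3000 * office_kwh_per_sqft * multiplier  # ~7.7 million kWh a year
    mall_kwh = 100000 * mall_lighting_kwh_per_sqft           # ~0.8 million kWh a year
    print(data_room_kwh > mall_kwh)  # True: the small data room dominates by roughly 10x

Under those assumptions, a 3,000-square-foot data room uses nearly ten times the electricity needed to light the mall, consistent with the quote.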
Previous studies have looked into different aspects of the digital universe. A 2012 report by Greenpeace International called How Clean Is Your Cloud argued that data centers are a primary driver of electricity demand growth. Researchers estimated that one data center could require the amount of electricity used to power nearly 180,000 homes. Many more data centers (the largest one the size of seven football fields) are popping up across the globe in remote, suburban towns, and, combined, are expected to need upwards of 1,000 TWh – more than the total used for all purposes by Japan and Germany.
Graph taken from The Cloud Begins With Coal; Source: Microsoft Global Foundation Services
But data centers alone are not responsible for the surge in ICT electricity use. A 2013 study by the Centre for Energy-Efficient Telecommunications (CEET) argued that much of the growth comes from wireless networks, such as Wi-Fi and 3G, used to access cloud services. According to the authors’ calculations, by 2015 the “wireless cloud” will consume up to 43 TWh, compared to only 9.2 TWh in 2012, representing a 460 percent increase. Wireless cloud users worldwide will grow from 42.8 million in 2008 to just over 998 million in 2014, representing a 69 percent annual growth rate. And Mills’ study extends the ICT energy accounting to include the much broader universe of wireless network connectivity beyond just the cloud.
Graph taken from The Cloud Begins With Coal; Data Source: Ericsson Mobility Report, June 2013
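Both growth figures can be verified with a few lines of arithmetic (Python, written for this post):

    # Checking the CEET growth figures quoted above
    ratio = 43 / 9.2                                 # wireless-cloud energy, 2012 to 2015
    users_2008, users_2014 = 42.8e6, 998e6
    cagr = (users_2014 / users_2008) ** (1 / 6) - 1  # six years of compound growth
    print(round(ratio, 1), round(cagr * 100))        # 4.7 and 69 (percent per year)

Note that the quoted '460 percent increase' tracks the ratio of the two consumption figures (about 4.7 times the 2012 level) rather than the net increase, which would be closer to 370 percent; the 69 percent annual user growth rate checks out exactly.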
These growth trends can be seen at the individual level as well. Take the iPhone example given at the outset. Based on the most recent data from NPD Connected Intelligence, the average Verizon Wireless iPhone user consumed about 1.58 GB of data per month in 2012, which equals about 19 GB per year. Multiply 19 GB by 19.1 kWh, which is the amount of energy ATKearney reports is needed to deliver one GB, and you find that the average iPhone uses 361 kWh of electricity per year. Add to this the amount of electricity used to charge your phone annually (3.5 kWh) and the amount of electricity needed for each connection (23.4 kWh) and you have a grand total of 388 kWh per year. By Mills’ calculations, watching an hour of video weekly on your smart phone or tablet consumes more electricity annually in the remote networks than two new refrigerators use in a year.
As Mills’ analysis notes, the information sector is now the fastest-growing sector in the US economy: over $1 trillion of our economy is associated with information and data, more than twice the share of GDP related to transportation (including vehicle manufacturing). The rapid rise in digital traffic is the driving force for the enormous growth in global investment in ICT infrastructure (up $8 trillion within a decade). And, as Mills points out – both in the analysis and in the report’s title – coal has been, and is forecast to continue to be, the dominant source of global electricity supply.
“In every credible forecast – including from the EIA, IEA, BP, Exxon – coal continues to be the largest single source of electricity for the world,” says Mills. “Coal’s dominance arises from the importance of keeping costs down while providing ever-greater quantities of energy to the growing economies, and as the IEA recently noted, the absence of cost-effective alternatives at the scales the world needs.”
Google said as much in 2011 in its white paper “Google’s Green PPAs: What, How, and Why”:
Neither the wind nor the sun are constantly available resources. They come and go with the weather, while Google’s data centers operate 24x7. No matter what, we’d need to be connected to the grid to access “conventional” power to accommodate our constant load. The plain truth is that the electric grid, with its mix of renewable and fossil generation, is an extremely useful and important tool for a data center operator, and with current technologies, renewable energy alone is not sufficiently reliable to power a data center.
The company scorecard produced by Greenpeace in 2012 further demonstrates coal’s dominance. Of the 14 companies studied, Apple, Amazon, and Microsoft were given the poorest ratings in terms of their lack of clean energy sources and for not being transparent about their cloud infrastructure.
Scorecard taken from Greenpeace report How Clean Is Your Cloud?
If Mills is right that ICT will fundamentally change the way we use electricity -- by putting a premium on reliable, round-the-clock power generation -- we need to be thinking seriously about how we can power the information sector with cheaper, cleaner alternatives to coal. This will require making technologies that can provide reliable, baseload power cheaper and more readily available.
| Is your iPhone running? Better shut it off, because that device is using more energy than your refrigerator. A new report says that a fridge uses just 322 kWh per year, compared with the 361 kWh for an iPhone, if you include its wireless connections, data usage, and battery charges, the Breakthrough Institute reports. But that's nothing compared to information and communications technology worldwide, which uses 10% of global electricity—and that's a low estimate. New trends like wireless broadband could make the figure even higher. The information sector relies heavily on coal power, and differs from other energy leeches because the cloud is never turned off, making it hard to reduce electricity use and carbon emissions. The study, sponsored by the coal and mining industry, notes that change is unlikely in the near future. But the Breakthrough Institute notes we badly need cleaner alternatives, and Bryan Walsh at Time agrees: "We already have a gigantic digital cloud, and it's only going to get bigger," he writes. "What we need is a cleaner one."
NEW YORK— One of the great unresolved questions of Barack Obama’s presidency is whether he can peacefully resolve America’s conflict with Iran over its nuclear weapons program. An encounter between Obama and Iran’s new president at the United Nations on Tuesday would be the most important—or at least the most analyzed—handshake since the historic grip between Rabin and Arafat (or, if you prefer, Nixon and Elvis). It would only be a symbolic act, to be sure. But when it comes to international diplomacy, symbolism can go a long way.
“Everyone understands that this week in New York is all about stagecraft and setting the tone for future interactions,” says one administration official.
Obama came to New York on Monday to push agenda items ranging from civil society to Syria’s chemical weapons. But those topics have been overshadowed by the possibility—discussed endlessly on Monday here in Manhattan by everyone from diplomats to reporters to senior government officials—that he might make the first contact between a U.S. president and an Iranian leader since Iran’s 1979 revolution. On Monday, White House officials would only say that Obama was open to such a meeting, but that none was planned.
The very idea would have been ludicrous as recently as last year’s U.N. confab, when Iran was still led by its last president, the combative Mahmoud Ahmadinejad. He turned his visits to the U.N.’s annual gatherings in New York into opportunities for noxious Holocaust denialism and anti-American broadsides that sent western diplomats marching out of his speeches in protest. More importantly, the Iranian regime—whose agenda is set by its Supreme Leader, Ali Khamenei—showed little interest in striking a deal with the west over its steady progress towards nuclear weapons capability.
Since Iran’s June presidential election, however, Tehran’s tone has changed. The country’s new president, Hassan Rouhani, has not-so-implicitly contrasted himself with Ahmadinejad, and speaks of “a path for negotiations and moderation” that might relieve international sanctions that are crippling Iran’s economy. He has tweeted kind words about Jews at a time of fear that Iran’s nuclear program—which its leaders deny is meant for military purposes—poses a mortal threat to Israel. And his government has released dozens of political prisoners in an apparent signal of reform and openness. Even more important, Khamenei, successor to the infamous Ayatollah Ruhollah Khomeini, has shown clear support for a new diplomatic push; last week he endorsed “heroic leniency” in Iran’s diplomacy.
“There’s something really interesting going on here,” says Kenneth Pollack of the Brookings Institution, a former CIA Middle East analyst and author of a new book on Iran. “We shouldn’t dismiss this as just words.”
Pollack notes that Bill Clinton came close to an encounter with then-Iranian president Mohammed Khatami at the United Nations in 2000. At the time, Khatami was seeking better relations with the west, and Clinton was open to the meeting. Elaborate scheduling changes allowed the two men to wind up in the same room, but an actual encounter never happened—thanks to hardliners in Iran who opposed it. If Rouhani is able to pull off some direct contact with Obama, it would be a sign not only of his own thinking but of the domestic political climate in Iran.
The world may never see it, however. Even if Obama presses the flesh with Rouhani, it could happen out of the view of reporters and photographers, limiting its symbolic impact. Even good intentions could be foiled by the complex logistics of the mass diplomatic gathering. (And a substantive sit-down between the two men appears highly unlikely.)
So the main event may wind up being the speeches that will be delivered by Obama on Tuesday morning and by Rouhani late in the afternoon. Obama officials will parse Rouhani’s address for conciliatory statements—perhaps some equivalent to then-Secretary of State Madeleine Albright’s 2000 apology to Iran for America’s participation in a 1953 coup there. “Does he go beyond all the statements that he’s made already?” Pollack asks.
Also significant will be a meeting later this week involving foreign ministers involved in international talks with Iran about its nuclear program, which will include Secretary of State John Kerry and Iran’s new foreign minister and chief nuclear negotiator, Mohammad Zarif. That encounter is more likely to produce the photo-op that could drive media coverage of a U.S.-Iranian detente.
Ultimately, however, the theatrics in Turtle Bay will only tell us so much. Skeptics like Israeli Prime Minister Benjamin Netanyahu warn that Rouhani is a wolf in sheep’s clothing—hoping that soothing words will relieve sanctions and buy more time for his country’s nuclear program. Handshake or no, Iran will soon have to demonstrate that it’s willing to halt, or at least slow down, its nuclear program in return for an easing of international sanctions.
“The acid test remains whether the Iranian government is prepared to take actions that constrain, limit, and roll back its existing nuclear program in a manner that provides confidence in its peaceful nature,” says the Obama official. One fleeting encounter won’t answer that question. But it could be a promising start.
||||| Iran to hold key nuclear talks at UN
Iran insists its nuclear programme is solely for peaceful purposes
Iran's foreign minister will meet six major world powers at the UN this week to discuss Tehran's nuclear programme, US and EU officials say.
The talks with Iranian FM Mohammed Javad Zarif will include US Secretary of State John Kerry - the highest level US-Iran meeting for more than 30 years.
Talks will take place on the sidelines of the UN General Assembly in New York.
Iran's new President Hassan Rouhani has said he is ready to restart stalled nuclear talks without preconditions.
Analysis: The meeting between John Kerry and Mohammed Zarif will be the highest level face-to-face contact between the Americans and the Iranians since the 1979 Islamic Revolution. This in itself is a sign of the potential significance of the opening signalled by the new Iranian President Hassan Rouhani, whose charm offensive towards the West will be tested out by the ministers meeting in the margins of the UN General Assembly. They clearly want to see an Iranian willingness to make concessions on its nuclear programme if there is to be any lifting or lightening of sanctions. But Iran too comes to this meeting with concerns. It wants a clear indication that the US is willing to treat Iran with the respect it believes it deserves as a significant regional player. Indeed it is only the US that can really address some of Iran's fundamental strategic concerns and that too makes this meeting so interesting.
In a separate development, Tehran said it had pardoned 80 prisoners.
Among them are a number of those who were arrested over protests following the disputed presidential election in 2009.
The move comes a few days after a pardon was issued to 11 inmates.
'Charm offensive'
EU foreign policy chief Catherine Ashton said Mr Zarif, who is also Iran's chief nuclear negotiator, would this week meet foreign ministers from the five permanent UN Security Council members - Britain, China, France, Russia and the US - and also Germany (the P5+1 group).
"We had a good and constructive discussion," Baroness Ashton said after her talks with Mr Zarif at the UN on Monday.
She said she had been struck by the "energy and determination" that she had seen from the Iranians ahead of this week's UN General Assembly.
And the EU official added that her team would hold talks with Mr Zarif again in October in Geneva to assess the progress.
A US State Department official quoted by AFP news agency cautioned that "no-one should have the expectation that we're going to resolve this decades-long issue during the P5+1 meeting later this week".
Historic US-Iran meetings: Direct contact between Mr Obama and Mr Rouhani would be the first between US and Iranian leaders for 36 years
Jimmy Carter and Shah Mohammad Reza Pahlavi were the last US and Iranian leaders to meet, in 1977
US Secretary of State Condoleezza Rice and Iranian Foreign Minister Manouchehr Mottaki exchanged "hellos" at Sharm el-Sheikh in 2007 but did not hold talks
The official said the "ball was firmly in Iran's court".
The meetings at the sidelines of the UN General Assembly are part of a charm offensive by Mr Rouhani, the BBC's Bridget Kendall in New York reports.
In 2007, then-Secretary of State Condoleezza Rice was supposed to be seated next to then-Foreign Minister Manouchehr Mottaki at a summit dinner in Egypt, but Mr Mottaki stayed away from the dinner.
Last week, Mr Rouhani said that his country would never build nuclear weapons.
In an interview with the US broadcaster NBC, the president stressed that he had full authority to negotiate with the West over Tehran's uranium enrichment programme.
And he described a recent letter sent to him by US President Barack Obama as "positive and constructive".
President Hassan Rouhani has promised to introduce reforms
A White House spokesman said Mr Obama's letter "indicated that the US is ready to resolve the nuclear issue in a way that allows Iran to demonstrate that its nuclear programme is for exclusively peaceful purposes".
"The letter also conveyed the need to act with a sense of urgency to address this issue because, as we have long said, the window of opportunity for resolving this diplomatically is open, but it will not remain open indefinitely," the spokesman added.
The latest moves come amid suggestions that Mr Obama and Mr Rouhani may meet on the sidelines of the General Assembly and shake hands.
In his election campaign earlier this year, Mr Rouhani pledged a more moderate and open approach in international affairs.
Iran is under UN and Western sanctions over its controversial nuclear programme.
Tehran says it is enriching uranium for peaceful purposes but the US and its allies suspect Iran's leaders of trying to build a nuclear weapon. ||||| Of the 34 world leaders set to address the United Nations in New York Tuesday, all eyes are on President Obama and newly elected Iranian President Hassan Rouhani. U.S. and Iranian leaders have not met in more than three decades.
It would be the handshake watched around the world.
President Barack Obama and his Iranian counterpart, Hassan Rouhani, will be at the United Nations together on Tuesday — their speeches to the General Assembly book-ending a luncheon for heads of state.
If both attend the luncheon — reports that Rouhani may skip it circulated Monday night — they may break bread in the same room. But any gesture beyond that would be historic for two countries whose leaders have not met in three decades.
"It would be unprecedented for the Iranian president to even shake hands with the U.S. president and vice versa," said Hooman Majd, an Iranian-American author and commentator.
"It's possible that will happen this time around. Somebody would have to seek out the other party."
The White House has said no meetings are planned, but hasn't ruled out a spin on the diplomatic dance floor between Obama and Rouhani. And Rouhani sounds, on Twitter at least, like he doesn't intend to be a wallflower at his first mixer.
Pres Rouhani leaving for #NYC."Ready for constructive engagement w/ world to show real image of great Iranian nation" pic.twitter.com/7XS91fXHTl — Hassan Rouhani (@HassanRouhani) September 23, 2013
Even a brief, muted interaction, after years of avoidance, would "suggest that both the White House and the Islamic Republic feel confident this is not just a charm offensive and something more substantial," said Suzanne Maloney, a fellow in the Saban Center for Middle East Policy at the Brookings Institution.
"At this point, I almost worry that on the Iranian side there will be disappointment and frustration if we don't have any direct contact," she added. "We've just not had this kind of a meeting with this much hype swirling around it."
There are risks for both sides, she noted.
If Obama, who speaks in the morning, reaches out to Rouhani at the luncheon — assuming he attends — he'll be extending himself before he finds out what the Iranian has to say to the world.
If Rouhani "gets up and gives anything other than the most forward-leaning speech," it would be a major embarrassment for Obama, Maloney said.
For the Iranians, who have traditionally refused to meet the Americans, a face-to-face with Obama would be a "major step away from one of the primary ideological pulls of the regime — this rejection of Washington's insolence," Maloney said.
And the hard-liners that Rouhani has to answer to at home won't be happy "if they come away with nothing more than a handshake, having abandoned something so central to the revolution."
Whatever takes place between the two presidents is likely to be choreographed, said Donald Ensenat, who served as the U.S. chief of protocol at the White House and the U.S. State Department from 2001 to 2007.
The seating at the luncheon is set out by the United Nations, and the heads of state and their foreign ministers begin drifting in at the appointed time to find their places.
"You could walk up to someone and start a conversation but there's a very short window to do it until they are seated," he said.
Between the various addresses, there are alcoves that could be used for what Ensenat called a "pull-aside," a quick chat between two heads of state that rarely lasts more than 15 minutes.
"Those are always pre-arranged," he said.
Veteran diplomat Dennis Ross said if something spontaneous does happen, the U.S. needs to be careful that it doesn't appear to be snubbing Iran.
"If it became an issue that Rouhani was prepared to to shake the president's hand but the president wasn't prepared to shake his hand, it would look like the United States was contriving reasons not to get something done," he told "Andrea Mitchell Reports."
Mary Mel French, who was chief of protocol during the Clinton Administration and author of "United States Protocol," predicted that any exchange will be congenial and fleeting, and that no one will get the cold shoulder.
"There are a lot of nuances for both countries and everyone will be aware that people will be watching," she said.
|||||
Following a week of speculation and rising expectations, White House officials today seemed to downplay the prospect of a one-on-one meeting between President Obama and his Iranian counterpart.
"We are hoping to engage with the Iranian government at a variety of levels, provided they will follow through on their commitment to address the international community's concerns over their nuclear program," Deputy National Security Adviser Ben Rhodes told reporters on Air Force One.
The caveat regarding its nuclear program is a new twist in the administration's rhetoric after days of suggesting openness to meet without hinting at a prerequisite.
The White House has not ruled out a less formal encounter between Obama and Rouhani, including a handshake, on the sidelines of the U.N. General Assembly during the day on Tuesday. The most likely opportunity for such a greeting would be a midday luncheon hosted by Secretary General Ban Ki-moon.
"I don't think anything will happen by happenstance on a relationship on an issue that is this important," Rhodes said. No American president has met one-on-one with an Iranian head of state since 1977."
Meanwhile, the administration quietly announced that Secretary of State John Kerry will meet with Iran's Foreign Minister Javad Zarif in New York - a meeting that would be the highest-level contact between the two governments on the nuclear issue.
The State Department said the meeting will take place Thursday afternoon as part of a larger group - the so-called P5+1 - seeking to reach an agreement over Iran's contested nuclear program.
"This opportunity with the Iranian foreign minister will give our (P5+1) ministers a sense of their level of seriousness and whether they are coming with concrete new proposals and whether this charm offensive actually has substance to it," one senior State Department official said of the meeting.
"No one should go into Thursday with the expectation that we're going to resolve the decades long discussion over their nuclear program," another official said.
This post has been updated. | Today could bring what Time describes as the most historic handshake since "Nixon and Elvis" (or, more seriously, "Rabin and Arafat"): President Obama and new Iranian President Hassan Rouhani will both attend the UN General Assembly today, and everyone from government officials to the media is swirling over the possibility that leaders of the two nations could make their first "contact" since the country's 1979 Revolution. Amping up the situation: the setting, which previous president Mahmoud Ahmadinejad liked to use as a platform for questioning the Holocaust and the 9/11 attacks. If it does occur, ABC News pegs the luncheon hosted by Secretary-General Ban Ki-moon as the most likely time and place, and a former White House official tells NBC News the opportunity will be a short-lived one: "You could walk up to someone and start a conversation but there's a very short window to do it until they are seated." At Time, Michael Crowley points out that the handshake could happen "out of the view of reporters and photographers, limiting its symbolic impact." If it doesn't occur, expect the analysis to shift to Rouhani's afternoon address. In the wings: John Kerry will meet later this week with other world powers and Iran's foreign minister; the BBC describes it as "the highest level US-Iran contact for more than 30 years."
Established in 1971 at the request of the SEC, the Nasdaq stock market is an all-electronic trading facility, which, unlike traditional exchanges like the New York Stock Exchange (NYSE) and the American Stock Exchange (AMEX), has no trading floors and facilitates the trading of over-the-counter (OTC) stocks through a network of market makers connected by telephone and computer. The Nasdaq stock market was originally a wholly-owned for-profit subsidiary of the nonprofit NASD, which also served as its direct regulator or self-regulatory organization (SRO). In the mid-1990s, NASD's integrity as a self-regulator was called into question when Nasdaq market makers were accused of manipulating stock prices. After a federal investigation, the NASD Regulation (NASDR) was established in 1996 as an independent subsidiary of the NASD. The main purpose was to separate the regulation of the broker/dealer profession from the operation of the Nasdaq. The NASDR became the primary regulator of broker-dealers and of the Nasdaq. All broker-dealers who are registered with the SEC, except those doing business exclusively on a securities exchange, are required to join the NASD. The NASDR's regulatory budget is derived solely from fees and fines imposed on NASD member firms. When it began, Nasdaq was regarded as a technological innovator because it did not rely on a physical trading floor. But over the last decade, both Nasdaq and traditional exchanges have faced growing competition from two principal sources: First, global stock markets that compete with U.S. markets for multinational corporate listings have grown dramatically. Second, continuous technological change has led to automated, computer-matching trading platforms called electronic communication networks (ECNs). Indeed, Nasdaq has developed its own ECN, the SuperMontage, and has acquired another one, Brut. To remain competitive, the world's major stock markets are reexamining their governance and capital structures with an eye toward changes that would enable them to react more deftly to the rapidly changing securities marketplace. Conversion from privately-held (mutual) status to shareholder-owned status, known as demutualization, has become an increasingly attractive strategic response to the changing market dynamics. Many international and domestic stock exchanges have demutualized over the last decade or so, including the London, Tokyo, Philadelphia, and the New York Stock Exchange (in early 2006, after merging with Archipelago, the electronic communication trading network). Key reasons for demutualization have included that (1) it enables exchanges to raise capital more immediately and provides better ongoing access to capital markets; (2) it makes exchanges better able to align their interests with those of their key participants; and (3) it provides exchanges with greater flexibility and speed in adapting to changing market conditions. In the summer of 1999, the Nasdaq announced its intent to demutualize. This change raised a number of policy concerns that largely involved demutualized stock markets' ability to effectively discharge their SRO duties. Among the key questions raised by the prospect of demutualization were (1) Is there a cause for concern when a for-profit, shareholder-owned SRO regulates entities like broker-dealers who in turn have ownership stakes in competitive rivals such as electronic communication networks?
and (2) Would the altered economics of being a for-profit, shareholder-owned exchange affect an exchange's ability to effectively regulate itself? After announcing its interest in pursuing demutualization, the NYSE cited other pressing concerns and put the process on hold. In April 2000, however, the NASD membership approved spinning off the for-profit Nasdaq from the non-profit NASD and converting it into a shareholder-owned market. The process was initially envisioned to have three broad stages: (1) issuing privately placed stock; (2) converting to technical exchange status; and (3) issuing public stock. The private placement took place in two sub-stages. In the initial sub-stage, the private placement, which was completed in June 2000, the NASD sold shares and issued warrants on shares of Nasdaq that it owned, and Nasdaq also issued and sold additional shares. The NASD's ownership interest in Nasdaq was reduced from 100% to 60%. The second sub-phase of the private placement was completed on January 18, 2001, with NASD's ownership interest then falling to 40% or about 77 million Nasdaq shares. The NASD, however, retained 51% of the actual voting interest in Nasdaq. On February 21, 2002, Nasdaq acquired 13.5 million shares held by the NASD. On March 8, 2001, Nasdaq acquired 20.3 million shares from the NASD, leaving 43.2 million shares still owned by the NASD in the form of underlying warrants that had been issued during Nasdaq's private placements. Concurrently, a new series of preferred voting stock was issued to the NASD, allowing it to continue to have majority voting interest in Nasdaq. The second stage, conversion to exchange status, was a requirement for the third stage—sale of Nasdaq shares to the public. Although from a practical standpoint it has little significance, Nasdaq currently is exempt from the definition of an "exchange" under Rule 3a1-1 of the Securities Exchange Act of 1934 because it is operated by the NASD. Before the NASD could relinquish control of it, Nasdaq was required to register as a national securities exchange. With approval of Nasdaq's exchange application, the preferred shares that provide the NASD with its majority vote interest over Nasdaq will expire and the NASD will no longer have effective control over Nasdaq. The exchange's ultimate goal has been to conduct an initial public offering (IPO). On March 15, 2001, Nasdaq submitted an initial application for exchange status to the SEC, an application that the agency published for comment on June 14, 2001. Nasdaq later made several amendments to the application in late 2001 and early 2002. After the initial application, the foremost regulatory concern for the SEC and a number of securities market participants was that, as written, the application would have continued to allow Nasdaq to operate without a trade execution protocol known as intra-market price and time priority, which is required of exchanges. This protocol is described below. Nasdaq processes limit orders, orders to buy or sell a stock when it hits a specified price. The NYSE centrally posts limit orders, which permits better-priced orders to receive priority execution there or on the various other interlinked market centers that trade NYSE-listed stocks. This is known as price and time priority and all exchanges abide by it. (Both the Nasdaq and the NYSE are markets in which brokers are required to exercise their duty of best execution when they route their customers' orders.
The concept is not explicitly defined but is often interpreted to mean that an order should be sent to the market center providing the best prevailing price.) But a significant fraction of Nasdaq market makers match buyer and seller orders from their own order books. Known as internalization, this can result in well-priced limit orders outside of a market maker's book being ignored. Nasdaq officials have argued that their market permits competing dealers to add liquidity to the markets by interacting with their own order flow, but SEC officials have concerns about the formal absence of price priority. This was a major sticking point in the agency's delay in approving the exchange application, concerns that Nasdaq attempted to address through subsequent amendments to its exchange application. On January 13, 2006, the SEC approved Nasdaq's application to become a registered national securities exchange. As a registered exchange, Nasdaq will become a self-regulatory organization (SRO) with ultimate responsibility for its own and its members' compliance with the federal securities laws. Several years ago, Nasdaq entered into a Regulatory Services Agreement with the NASD to perform certain key regulatory functions for it, an arrangement that should continue. Nasdaq is now officially a registered exchange, but the SEC will not permit Nasdaq to begin operations as an exchange and to become fully independent of ongoing control by the NASD until various conditions, including the following key ones, are satisfied: Nasdaq must join the various national market system plans and the Intermarket Surveillance Group; the NASD must determine that its control of Nasdaq through its Preferred Class D share is no longer necessary because NASD can fulfill through other means its obligations with respect to non-Nasdaq exchange-listed securities under the Exchange Act; the SEC must declare certain regulatory plans to be filed by Nasdaq to be effective; and Nasdaq must file, and the Commission must approve, an agreement pursuant to Section 17d-2 of the Securities Exchange Act of 1934 that allocates to NASD regulatory responsibility with respect to certain activities of common members. Nasdaq's exchange application limits the exchange to transactions in the Nasdaq Market Center, previously known as SuperMontage and Brut, which will adhere to rules on intramarket priorities. However, orders that are internalized by NASD broker-dealers that may not adhere to intra-market priority rules would be reported through the new Trade Reporting Facility (TRF), which must go through a separate regulatory review process and which will be administered by the NASD. Nasdaq will receive revenues from TRF trades (a contentious point for a number of its rivals). | Traditionally, the Nasdaq stock market was a for-profit but wholly-owned subsidiary of the nonprofit National Association of Securities Dealers, Inc. (NASD), the largest self-regulatory organization (SRO) for the securities industry. In 2000, in a strategic response to an increasingly competitive securities trading market, the NASD membership approved spinning off the for-profit NASD-owned Nasdaq and converting it into a for-profit shareholder-owned market that later planned to issue publicly traded stock. For Nasdaq, this process has involved three basic stages: (1) issuing privately placed stock; (2) converting to technical exchange status; and (3) issuing publicly-held stock. Stage one, the private placement stage, has been completed.
In March 2001, Nasdaq submitted an application for exchange status to the Securities and Exchange Commission (SEC), an application that has been amended several times to address certain criticisms. Obtaining exchange status is necessary for Nasdaq to proceed to stage three, the issuance of publicly held stock. Realization of that stage became much closer on January 13, 2006, when, after more than a half decade, the SEC approved Nasdaq's application to become a registered national securities exchange.
Perhaps the single most important element of successful management improvement initiatives is the demonstrated commitment of top leaders to change. This commitment is most prominently shown through the personal involvement of top leaders in developing and directing reform efforts. Organizations that successfully address their long-standing management weaknesses do not “staff out” responsibility for leading change. Top leadership involvement and clear lines of accountability for making management improvements are critical to overcoming organizations’ natural resistance to change, marshalling the resources needed in many cases to improve management, and building and maintaining the organizationwide commitment to new ways to doing business. Commissioner Rossotti’s efforts at IRS provide a clear example of leadership’s commitment to change. The Commissioner has articulated a new mission for the agency, together with support for strategic goals that balance customer service and compliance with tax laws. Moreover, the Commissioner has initiated a modernization effort that touches virtually every aspect of the agency, including business practices, organizational structure, management roles and responsibilities, performance measures, and technology. Commissioner Rossotti has assigned clear executive ownership of each of IRS’ major initiatives and is using executive steering committees to provide oversight and accountability for driving the change efforts. Sustaining top leadership commitment to improvement is particularly challenging in the federal government because of the frequent turnover of senior agency political officials. As a result, sustaining improvement initiatives requires commitment and leadership by senior career executives, as well as political leaders. Career executives can help provide the long-term focus needed to institutionalize reforms that political executives’ often more limited tenure does not permit. In addition, the other elements of successful management improvement initiatives that we shall turn to shortly are important for institutionalizing reform initiatives. Traditionally, the danger to any management reform is that it can become a hollow, paper-driven exercise where management improvement initiatives are not integrated into the day-to-day activities of the organization. Thus, successful organizations recognize—and implement reform efforts on the basis of—the essential connection between sound management and the programmatic results those organizations hope to achieve. The Results Act provides a ready-made statutory mechanism for making this essential connection, engaging Congress in a discussion of how and when management problems will be addressed, and helping to pinpoint additional efforts that may be needed. We have found that annual performance plans that include precise and measurable goals for resolving mission-critical management problems are important to ensuring that agencies have the institutional capacity to achieve their more results- oriented programmatic goals. Moreover, by using annual performance plans to set goals to address management weaknesses, agencies provide themselves and Congress with a vehicle—the subsequent agency performance reports—for tracking progress in addressing management problems and considering what, if any, additional efforts are needed. Unfortunately, we found that agencies do not consistently address major management challenges and program risks in their fiscal year 2000 performance plans. 
In those cases where challenges and risks are addressed, agencies use a variety of approaches, including setting goals and measures directly linked to the management challenges and program risks, establishing goals and measures that are indirectly related to the challenges and risks, or laying out strategies to address them. Figure 1 shows the distribution of the 24 agencies covered by the Chief Financial Officers Act and their different approaches to addressing management challenges and program risks in their annual performance plans.
For example, IRS’ fiscal year 2000 performance plan does not adequately address several of its major management challenges, such as computer security weaknesses (e.g., controls do not adequately reduce vulnerability to inappropriate disclosure) and weaknesses in internal controls over taxpayer receipts. Similarly, the General Services Administration’s (GSA) fiscal year 2000 annual performance plan does not address several long-standing problems identified by the GSA Inspector General. These problems include top management’s lack of emphasis on ensuring that the internal controls are in place to deter fraud, waste, and abuse. GSA’s plan also does not fully address issues raised by the Inspector General related to developing new management information systems and ensuring that automated information systems have the proper controls and safeguards. These omissions are significant because GSA’s governmentwide oversight and service-provider role, its extensive interaction with the private sector, and the billions of taxpayer dollars involved in carrying out its activities make it especially important that GSA’s operations be adequately protected.
The magnitude of the challenges that many agencies face in addressing their management weaknesses necessitates that substantive planning be done to establish (1) clear goals and objectives for the improvement initiative, (2) the concrete management improvement steps that will be taken, (3) key milestones that will be used to track the implementation status, and (4) the cost and performance data that will be used to gauge overall progress in addressing identified weaknesses. Our work across the federal government has found that the effective use of human capital and information technology—both separately and, importantly, as they relate to one another—is an area where thoughtful and rigorous planning is needed if fundamental management improvements are to be made. For example, we have reported that a number of agencies downsized their workforces without adequate strategic workforce planning through late 1997. As a result, the agencies were struggling to achieve their efficiency and service improvement objectives.
On a more positive note, we recently reviewed the efforts of three agencies (the Postal Service, the Department of Veterans Affairs (VA), and the Park Service) to more strategically manage their facilities and assets by forming business partnerships with the private sector. In each of the six partnerships that we reviewed, the agency built the expertise to engage in the partnership and make it successful. For example, the Department of Veterans Affairs established a separate organizational unit staffed with professionals experienced in management, architecture, civil engineering, and contracting to manage its partnerships.
With regard to planning for major technology projects, IRS has historically lacked disciplined and structured processes for developing and managing information technology. We reported in February 1998 that IRS had not clearly defined system modernization phases, nor had it adequately specified organizational roles, making it unclear who was to do what.
IRS’ systems modernization challenges include completing a modernization blueprint to define, direct, and control future modernization efforts and establishing the management and engineering capability to build and acquire modernized systems. The key to effectively addressing these challenges is to ensure that long-standing modernization management and technical weaknesses are corrected before IRS invests large sums of modernization funds. As we have reported, IRS recently initiated appropriate first steps to address these weaknesses via its initial modernization expenditure plan, which represents the first step in a long-term, incremental modernization program.
For example, the Census Bureau has shifted much of its data dissemination from paper to the Internet, which has allowed the Bureau to distribute its materials, reach a wider audience, and provide its clients with information in a format that better meets their needs. The Bureau reports that its customers are responding positively to the shift, with significant growth in the number of customer hits on the Census Internet site, from about 10,000 per day in 1994 to more than 850,000 per day in 1999. The Bureau plans to use the Internet as its principal medium for releasing data from the 2000 Census.
Successful management improvement efforts require the active involvement of managers and staff throughout the organization to provide ideas for improvements and supply the energy and expertise needed to implement changes. Employees at all levels of high-performing organizations participate in--and have a stake in--improving operational and program performance to achieve results. Our work has shown that high-performing organizations use a number of strategies and techniques to effectively involve employees, including (1) fostering a performance-oriented culture, (2) working to develop a consensus with unions on goals and strategies, (3) providing the training that staff need to work effectively, and (4) devolving authority while focusing accountability on results.
Fostering a performance-oriented culture requires agency management to communicate with staff throughout the organization to involve them in the process of designing and implementing change. Setting improvement goals is an important step in getting organizations across the government to engage seriously in the difficult task of change. The central features of the Results Act—strategic planning, performance measurement, and public reporting and accountability—can serve as powerful tools to help change the basic culture of government. Involving employees in developing and implementing these goals and measures can help direct a diverse array of actions to improve performance and achieve results. However, our survey of federal managers, conducted in late 1996 and 1997, indicates there is substantial room for improvement in this area. This survey found that only one-third of non-SES managers (as opposed to nearly three-fourths of the SES managers) reported they had been involved in establishing long-term strategic goals for their agencies.
The failure to constructively involve staff in an organization’s improvement efforts means running the risk that the changes will be more difficult and protracted than necessary. For example, in the fall of 1997, the Nuclear Regulatory Commission’s (NRC) Office of Inspector General surveyed NRC staff to obtain their views on the agency’s safety culture.
In its June 1998 report, the Inspector General noted that the staff had a strong commitment to protecting public health and safety but expressed high levels of uncertainty and confusion about the new directions in regulatory practices and challenges facing the agency. Employees who are confused about the direction their agency is taking will not be able to effectively focus on results or make as full a contribution as they might otherwise.
One way high-performing organizations can enhance employee involvement and gain agreement on an organization’s goals and strategies is by developing partnerships with employee unions. The U.S. Postal Service’s long-standing challenges in labor-management relations illustrate the importance of having a shared set of long-term goals and strategies agreed upon by managers, employees, and unions. As we have reported, labor-management relations at the Postal Service have been characterized by disagreements that have, among other things, hampered efforts to automate some postal systems that could have resulted in savings and helped the Service reach its performance goals. Although there has been some progress, problems persist and continue to contribute to higher mail processing and delivery costs. To help the Postal Service resolve its problems, we have long recommended that the Service and its unions and management associations establish a framework agreement to outline common goals. We have also noted that the Results Act can provide an effective framework for union and management representatives to discuss and agree upon goals and strategies.
For example, rather than relying on temporary employees to handle return processing workload during the annual filing season, IRS plans to increase the number of permanent employees and expand their job responsibilities to include compliance work that they can do after the filing season. Those employees will have to be cross-trained so that they can handle both their return processing and compliance responsibilities. Training is expected to be a key factor in IRS’ efforts to provide top-quality customer service. Further, given the dynamic environment agencies face, employees need incentives, training, and support to help them continually learn and adapt. Our 1996/97 survey found that about 60 percent or more of the supervisors and managers reported that their agencies had not provided them with the training necessary to accomplish critical, results-oriented management tasks.
High-performing organizations also seek to involve and engage employees by devolving authority to lower levels of the organization. Employees are more likely to support changes when they have the necessary amount of authority and flexibility--along with commensurate accountability and incentives--to advance the agency’s goals and improve performance. Allowing employees to bring their expertise and judgment to bear in meeting their responsibilities can help agencies capitalize on their employees’ talents, leading to more effective and efficient operations and improved customer service. Some federal agencies, such as the Social Security Administration (SSA), are exploring new ways to involve employees by devolving decisionmaking authority. Although the efficacy of this initiative has not been fully assessed, SSA has been implementing a pilot program to establish a “single decision maker” position.
This program expands the authority of disability examiners, who currently make initial disability determinations jointly with physicians, and allows the single decision maker to make the initial disability determination and consult with physicians only as needed.
Our 1996/97 survey also found that not all of the managers surveyed reported that they were being held accountable for program results. Our work has also shown that agencies can do a better job of providing incentives to encourage employees to improve performance and achieve results. Only one-fourth of non-SES managers reported that to a great or very great extent employees received positive recognition from their agencies for efforts to help accomplish strategic goals. At the request of this Subcommittee, we are surveying federal managers again to follow up on whether there have been improvements in these critical areas.
Some agencies have explored new ways of devolving decisionmaking authority in exchange for operational flexibility and accountability for results. For example, in fiscal year 1996, the Veterans Health Administration (VHA) management structure was decentralized to form 22 Veterans Integrated Service Networks. VA gave these networks substantial operational autonomy and the ability to perform basic decisionmaking and budgetary duties. VA made the networks accountable for results such as improving patient access, increasing efficiency, and reducing costs. VA also established performance measures, such as increasing the number of outpatient surgeries, reducing the use of inpatient care, and increasing the number of high-priority veterans served, to hold network and medical center directors accountable for results.
Successful management improvement efforts often entail organizational realignment to better achieve results and clarify accountability. For example, GSA has sought to improve its efficiency and effectiveness by changing its organizational structure to separate its policymaking functions from its operations that provide services. GSA recognized that it suffered from conflicting policymaking and service-providing roles and needed to replace its outmoded methods of delivering service. To address this issue, GSA established the Office of Policy, Planning, and Evaluation in 1995, which it later renamed the Office of Governmentwide Policy, to handle policy decisions separately from functions that deliver supplies or services. GSA believes that this realignment has improved efficiency and reduced the perception of conflict of interest that existed prior to the separation of its policymaking and service-delivery roles.
However, the GSA Inspector General has expressed concerns that GSA’s organization and management structure has not kept pace with GSA’s downsizing, streamlining, and reform efforts. In addition, the Inspector General has said that GSA’s organizational structure does not seem to match the responsibility for managing programs with the authority to do so. As a result, for example, GSA has faced situations where regions (which operate independently) have taken divergent positions on similar issues, according to the Inspector General.
IRS’ ongoing efforts provide another example of the importance of aligning organizational structures. As Commissioner Rossotti has stated, IRS’ current cumbersome organizational structure and inadequate technology are the principal obstacles to delivering dramatic improvements in customer service and productivity.
The Commissioner is reorganizing IRS with the aim of building an organization designed around taxpayer groups and creating management roles with clear responsibilities. One of the first organizational realignments taking place is in the Office of the Taxpayer Advocate. This office is intended to, among other things, help taxpayers who cannot get their problems resolved through normal IRS channels. Formerly, the Advocate’s Office had to rely on functional groups within IRS, like examination and collection, to provide most of its program resources—including staff, space, and equipment. When functional needs conflicted with Advocate Office needs, there was no assurance that advocate needs would be met. In the new organization, all advocate program resources will be controlled and managed by the Taxpayer Advocate. By organizing this way, IRS hopes to improve both program efficiency and service to taxpayers.
We have identified the management of student financial aid programs, with more than $150 billion in outstanding student loans, as being at high risk of waste, fraud, abuse, and mismanagement. To address these problems, Congress established the Department of Education’s Office of Student Financial Assistance as the federal government’s first performance-based organization (PBO) in 1998. The PBO structure exemplifies new directions in accountability for the federal government because the PBO’s Chief Operating Officer, who reports to the Secretary of Education, is held directly and personally accountable, through an employment contract, for achieving measurable organizational and individual goals. The Chief Operating Officer is appointed by the Secretary of Education to a minimum 3-year and a maximum 5-year term, and may receive a bonus for meeting the performance goals or be removed for failing to meet them. The Office of Student Financial Assistance was provided with increased flexibility for procurement and personnel management, and key managers are to be held directly accountable for performance objectives that include (1) improving customer satisfaction; (2) providing high-quality, cost-effective services; and (3) providing complete, accurate, and timely data to ensure program integrity. The Chief Operating Officer is to enter into annual performance agreements containing measurable organization and individual goals with key managers, who can receive a bonus or can also be removed. An additional accountability mechanism is that the Chief Operating Officer and the Secretary of Education are required to agree on, and make public, a 5-year performance plan that establishes the Office’s goals and objectives. To further underscore accountability issues, the PBO’s Chief Operating Officer is to annually prepare and submit to Congress, through the Secretary, a report on the performance of the PBO. The report is to include an evaluation of the extent to which the Office met the goals and objectives contained in the 5-year performance plan. In addition, the annual report is to include (1) an independent financial audit, (2) applicable financial and performance requirements under the Chief Financial Officers Act and the Results Act, (3) the results achieved by the Office relative to its goals, (4) an evaluation of the Chief Operating Officer’s performance, (5) recommendations for legislative and regulatory changes to improve service and program integrity, and (6) other information as detailed by the Director of the Office of Management and Budget.
Finally, Congress plays a crucial role in management improvement efforts throughout the executive branch through its legislative and oversight capacities.
On a governmentwide basis, Congress, under the bi-partisan leadership of this Committee and the House Government Reform Committee, has established a statutory framework consisting of requirements for goal-setting and performance measurement, financial management, and information technology management, all aimed at improving the performance, management, and accountability of the federal government. Through the enactment of the framework and its efforts to foster the framework’s implementation, Congress has, in effect, served as an institutional champion for improving the management of the federal government, providing a consistent focus for oversight and reinforcement of important policies. On an agency-specific basis as well, support from the Congress has proven to be critical in instituting and sustaining management reforms, such as those taking place at IRS, GSA, and elsewhere across the federal government.
Congress, in its oversight role, can monitor management improvement initiatives and provide the continuing attention necessary for reform initiatives to be carried through to their successful completion. Information in agencies’ plans and reports produced under the Results Act, high-quality financial and program cost data, and other related information can help Congress in targeting its oversight efforts and identifying opportunities for additional improvements in agencies’ management. In this regard, we have long advocated that congressional committees of jurisdiction hold augmented oversight hearings on each of the major agencies at least once each Congress. Congress could examine, for example, the degree to which agencies are building the elements of successful management improvement initiatives that we have discussed today into their respective management reform efforts. Such hearings will further underscore for agencies the importance that Congress places on creating high-performing government organizations. Also, through the appointment and confirmation process, the Senate has an added opportunity to make clear its commitment to sound federal management and explore what prospective nominees plan to do to ensure that their agencies are well-managed and striving to be high-performing organizations.
In summary, successful management improvement efforts often contain a number of common critical elements: top leadership commitment and accountability, the integration of management improvement initiatives into programmatic decisions, planning to chart the direction the improvements will take, employee involvement in the change efforts, organizational realignment to streamline operations and clarify accountability, and congressional involvement and oversight. Experience has shown that when these elements are in place, lasting management reforms are more likely to be implemented that ultimately lead to improvements in the performance and cost-efficiency of government.
Mr. Chairman, this concludes our prepared statement. We would be pleased to respond to any questions that you or other Members of the Subcommittee may have. For further contacts regarding this testimony, please contact J. Christopher Mihm at (202) 512-8676. For information regarding GAO’s work on IRS modernization, please contact James R. White at (202) 512-9110, and for information regarding GAO’s work on GSA, please contact Bernard L. Ungar at (202) 512-4232. Individuals making key contributions to this testimony included Kelsey Bright, Deborah Junod, Susan Ragland, and William Reinsberg.
| Pursuant to a congressional request, GAO discussed efforts to improve the management and performance of the federal government. GAO noted that: (1) serious and disciplined efforts are needed to attack the management problems confronting some of the federal government's largest agencies; (2) successful management improvement efforts often contain a number of common critical elements, including top leadership commitment and accountability, the integration of management improvement initiatives into programmatic decisions, planning to chart the direction the improvements will take, employee involvement in the change efforts, organizational realignment to streamline operations and clarify accountability, and congressional involvement and oversight; and (3) experience has shown that when these elements are in place, lasting management reforms are more likely to be implemented that ultimately lead to improvements in the performance and cost-efficiency of government. |
Average Temperatures in Base Esperanza, Antarctica
The mean annual temperature at Base Esperanza, Antarctica is a cold -5.8 degrees Celsius (21.6 degrees Fahrenheit).
The range of mean monthly temperatures is 11.4 °C (20.5 °F), which is low.
January is the warmest month (very cool), with an average temperature of 0.5 degrees Celsius (32.9 degrees Fahrenheit).
June is the coldest month (very cold), with an average temperature of -10.9 degrees Celsius (12.4 degrees Fahrenheit).
Average Temperatures Table for Base Esperanza
Month    Average Temperature °C (°F)    Average Min Temperature °C (°F)
Jul      -10.8 (12.6)                   -15 (5)
Aug      -10.5 (13.1)                   -14.7 (5.5)
Sep      -7 (19.4)                      -11.1 (12)
Oct      -3.6 (25.5)                    -7.4 (18.7)
Nov      -1.7 (28.9)                    -4.6 (23.7)
Dec      0.4 (32.7)                     -2.1 (28.2)
Jan      0.5 (32.9)                     -1.8 (28.8)
Feb      -0.5 (31.1)                    -2.9 (26.8)
Mar      -3.1 (26.4)                    -6.4 (20.5)
Apr      -7.3 (18.9)                    -11.1 (12)
May      -9.6 (14.7)                    -13.4 (7.9)
Jun      -10.9 (12.4)                   -15.1 (4.8)
Annual   -5.8 (21.6)                    -8.8 (16.2)
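To double-check the figures above, here is a minimal Python sketch that recomputes the warmest and coldest months and the 11.4 °C (20.5 °F) range from the table's monthly means (values transcribed by hand; the °C-to-°F formula is the standard one). Note that a temperature difference converts with the 9/5 factor alone, without the +32 offset.

```python
# Minimal sketch: recompute the extremes and range quoted above from the
# monthly mean temperatures in the table (values transcribed by hand).
monthly_mean_c = {
    "Jul": -10.8, "Aug": -10.5, "Sep": -7.0, "Oct": -3.6,
    "Nov": -1.7, "Dec": 0.4, "Jan": 0.5, "Feb": -0.5,
    "Mar": -3.1, "Apr": -7.3, "May": -9.6, "Jun": -10.9,
}

def c_to_f(temp_c: float) -> float:
    """Convert a temperature (not a difference) from °C to °F."""
    return temp_c * 9 / 5 + 32

warmest = max(monthly_mean_c, key=monthly_mean_c.get)
coldest = min(monthly_mean_c, key=monthly_mean_c.get)
range_c = monthly_mean_c[warmest] - monthly_mean_c[coldest]

print(f"{warmest}: {monthly_mean_c[warmest]} °C = {c_to_f(monthly_mean_c[warmest]):.1f} °F")  # Jan: 0.5 °C = 32.9 °F
print(f"{coldest}: {monthly_mean_c[coldest]} °C = {c_to_f(monthly_mean_c[coldest]):.1f} °F")  # Jun: -10.9 °C = 12.4 °F
print(f"range: {range_c:.1f} °C = {range_c * 9 / 5:.1f} °F")  # range: 11.4 °C = 20.5 °F
```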
Base Esperanza Average Temperatures Chart
The average temperature charts use a fixed scale so that you can easily compare temperatures between two or more locations. Simply line up the charts in separate tabs in your browser and toggle between tabs to visualise the differences. The charts have major grid lines at intervals of 10 °C on the left axis corresponding with intervals of 18 °F on the right axis. Minor gridlines mark intervals of 2.5 °C and 4.5 °F. The charts show the relationship between the Celsius and Fahrenheit measuring scales. Locations in the northern hemisphere run from January to December and in the southern hemisphere from July to June so that the middle of the chart always corresponds with the high sun period (for the hemisphere).
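The fixed gridline pairing (10 °C with 18 °F, 2.5 °C with 4.5 °F) follows directly from the conversion formula: subtracting two temperatures cancels the +32 offset, leaving only the 9/5 scale factor. A quick sketch:

```python
# Gridline spacing: a temperature *interval* converts with the 9/5 factor
# only, because the +32 offset cancels when two temperatures are subtracted.
for spacing_c in (2.5, 10.0):  # minor and major gridline spacing in °C
    spacing_f = spacing_c * 9 / 5
    print(f"{spacing_c} °C interval = {spacing_f} °F interval")
# 2.5 °C interval = 4.5 °F interval
# 10.0 °C interval = 18.0 °F interval
```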
||||| The 20th anniversary of the discovery of the first “hole” in the ozone layer on Tuesday had many climate observers focused on the Arctic, where a study published last week found that polar bears were eating more birds’ eggs, perhaps due to lost hunting grounds with the disappearance of summer ice.
But equally significant climate news was playing out in Antarctica, where two climate stations registered ominous new potential measurements of accelerating climate change.
A weather station on the northern tip of the Antarctic peninsula recorded what may be the highest temperature ever on the continent, while a separate study published in the journal Science found that the losses of ice shelf volume in the western Antarctic had increased by 70% in the last decade.
Helen A Fricker of the Scripps Institution of Oceanography at the University of California, San Diego, a co-author of the Science report, said that there was not necessarily a correlation between recent temperature fluctuations and disappearing ice.
“While it is fair to say that we’re seeing the ice shelves responding to climate change, we don’t believe there is enough evidence to directly relate recent ice shelf losses specifically to changes in global temperature,” Fricker said in an interview with Reuters.
What was incontestable were the unprecedentedly high temperature readings on the Antarctic ice mass.
The potential Antarctica record high of 63.5F (17.5C) was recorded on 24 March at the Esperanza Base, just south of the southern tip of Argentina. The reading, first noted on the Weather Underground blog, came one day after a nearby weather station, at Marambio Base, saw a record high of its own, at 63.3F (17.4C).
By any measure, the Esperanza reading this week was unusual. The previous record high at the base, of 62.7F (17.1C), was recorded in 1961.
But whether the recent readings represent records for Antarctica depends on the judgment of the World Meteorological Organization, the keeper of official global records for extreme temperatures, rainfall and hailstorms, dry spells and wind gusts. The WMO has recorded extreme temperatures in Antarctica but not settled the question of all-time records for the continent, according to Christopher Burt of Weather Underground.
One complicating factor is debate about what constitutes “Antarctica”. Both Esperanza and Marambio lie outside the Antarctic circle, though they are attached to the mainland by the frozen archipelago that is the Antarctic peninsula.
A conservative definition of what Antarctica is would seem to award the distinction of hottest-ever temperature to a 59F (15C) reading nearer the South Pole from 1974, according to Burt. ||||| Abstract
The floating ice shelves surrounding the Antarctic Ice Sheet restrain the grounded ice-sheet flow. Thinning of an ice shelf reduces this effect, leading to an increase in ice discharge to the ocean. Using eighteen years of continuous satellite radar altimeter observations, we have computed decadal-scale changes in ice-shelf thickness around the Antarctic continent. Overall, average ice-shelf volume change accelerated from negligible loss of 25 ± 64 km³ per year for 1994-2003 to rapid loss of 310 ± 74 km³ per year for 2003-2012. West Antarctic losses increased by 70% in the last decade, and earlier volume gain by East Antarctic ice shelves ceased. In the Amundsen and Bellingshausen regions, some ice shelves have lost up to 18% of their thickness in less than two decades. ||||| Possible New Continental Heat Record for Antarctica
On March 24th Base Esperanza (under Argentinean administration), located near the northern tip of the Antarctic Peninsula, reported a temperature of 17.5°C (63.5°F). Although this is the warmest temperature ever measured since weather stations became established on the southern continent, the claim is complicated by the very definition of ‘Antarctica’. Here’s a brief review.
Argentina’s Esperanza Base on the northern tip of the Antarctic Peninsula. It is located near 63°S latitude. Image from Wikipedia.
METAR tables for Base Esperanza (top) and Base Marambio (bottom) for the days of March 23-24. The 17.5°C (63.5°F) at Esperanza on March 24th and 17.4°C (63.3°F) at Marambio on March 23rd exceed any temperatures yet measured on or very close to the Antarctic landmass. Tables from OGIMET.
The 17.5°C (63.5°F) temperature at Esperanza occurred just one day following a reading of 17.4°C (63.3°F) measured at Base Marambio (also under Argentinean administration) on March 23rd. Marambio is located about 60 miles (100 km) southeast of Esperanza. Both figures surpass any temperature yet measured at either site. Esperanza’s previous record high of 17.1°C (62.7°F) was recorded on April 24, 1961 according to Argentina's met service SMN (Servicio Meteorológico Nacional) and the previous record for Base Marambio was 16.5°C (61.7°F) on December 7, 1992.
More importantly, the temperature at Esperanza exceeds any figure yet observed on the Antarctic landmass or Peninsula. According to the WMO, the all-time warmest temperature yet observed in Antarctica was 15.0°C (59.0°F) at Vanda Station on January 5, 1974. Vanda Station is located near 77°S latitude but was occupied for only brief periods, mostly during the Austral summers, between 1967 and 1995. It now has an automated weather station and is occasionally visited by researchers. Base Esperanza’s weather records began in 1945 according to data published in The World Survey of Climatology: Vol. 14, The Polar Regions. On p. 353 there is a table of climate data for Esperanza based upon the period of record (POR) of 1945-1960. I am not sure if the POR has been continuous since 1960 up to the present. Base Marambio was founded in 1969 and is a relatively large facility with at least 55 year-round personnel, a total that swells to 200 during the summer.
Map of Antarctica showing the locations of the various sites referred to in this blog. A table of the highest observed temperature on record for each of the four sites discussed appears below the map.
Despite the fact that the temperature record from Vanda appears on the list of world weather extremes maintained by the WMO, the WMO has not yet investigated all-time weather records for Antarctica, so the Vanda reading and the recent observations at Esperanza and Marambio remain ‘unofficial’ (so far as continental world-record-temperature extremes are concerned) although the recent temperatures at Esperanza and Marambio are 'official', at least preliminarily, according to SMN.
That being said, and given the recent extraordinary temperatures observed at Esperanza and Marambio, there is a chance that the WMO may wish to launch such an investigation into Antarctica’s warmest measured temperature.
Defining 'Antarctica'
Should this happen, the first issue will be the definition of the region of ‘Antarctica’ for the purpose of weather records relating to the continent. There are perhaps three (or even four) possible scenarios.
1) The narrowest interpretation might be to include only sites that are south of the Antarctic Circle (near 66°S latitude). In that case, Esperanza would not be part of the record set and the Vanda figure might stand. See map above.
2) A more broadly accepted definition would be that adopted by the Antarctic Treaty System in 1961 which defined ‘Antarctica’ to include all land and ice shelves located south of the 60°S latitude. Should this interpretation be used, then the South Orkney Islands, which lie about 500 kilometers northeast of the northern tip of the Antarctic Peninsula, would fall inside the investigation area. That would mean that the British outpost, Signy Research Station (latitude near 60° 43’S), on Signy Island would have measured the warmest temperature on record in Antarctica with a 19.8°C (67.6°F) on January 30, 1982 according to research by Maximiliano Herrera.
3) The third possible definition would be to include only the landmass of Antarctica (although the Antarctica Peninsula is actually composed of a series of islands connected to one another by glaciers and ice sheets). In that case the recent 17.5°C at Base Esperanza would most likely stand as the warmest temperature yet measured on the continent.
4) Maximiliano adds a possible fourth consideration: areas associated with the Antarctic geological shelf. See his note on his Wiki page of continental records here. He adds the following footnote: “If we consider the geological case, Amsterdam Island, (located at 37° 49'S and 77° 33'E), which belongs to the French dependence of the French Southern and Antarctic Lands and associated with Africa, lies on the Antarctic plate and has a highest temperature of 26.4°C (79.4°F) on 30 January 2005”.
More About the Unusual Warmth at Bases Esperanza and Marambio
One surprising aspect of the temperatures measured recently at Esperanza and Marambio is that they occurred in autumn, nearly three months past the usual warmest time of the year in the Antarctic Peninsula. According to NOAA, December is typically the warmest month in Esperanza, with an average high temperature of 37.8°F (3.2°C). The March average high temperature is 31.3°F (-0.4°C), so temperatures on Tuesday, March 24th, were more than 30°F (17°C) above average. However, looking at the statistics in the World Survey of Climatology (referred to earlier), it is interesting to note that the warmest temperature observed at Esperanza during the period of 1945-1960 was a 14.6°C reading during an October, and the 2nd warmest was 14.2°C during an April (also tied in January). So we can see that record high temperatures for Esperanza are not confined to just the summer months.
Departure of temperature from average for Tuesday, March 24, 2015, over Antarctica. Temperatures were more than 30°F (17°C) above average. Image credit: University of Maine Climate Reanalyzer.
A strong high pressure ridge and a foehn wind led to the record temperatures, as Jeff Masters explains here:
This week's record temperatures were made possible by an unusually extreme jet stream contortion that brought a strong ridge of high pressure over the Antarctic Peninsula, allowing warm air from South America to push southwards over Antarctica. At the surface, west to east blowing winds over the Antarctic Peninsula rose up over the 1,000-foot high mountains just to the west of Esperanza Base, then descended and warmed via adiabatic compression into a warm foehn wind that reached 44 mph (71 km/hr) at 09 UTC on March 24th, near when the maximum temperature was recorded. A similar event also affected Marambio on the 23rd.
Jet stream image for Tuesday, March 24, 2015, over Antarctica. An unusually extreme contortion of the jet stream allowed a ridge of high pressure to extend far to the south over the Antarctic Peninsula, bringing record-warm air from South America. Image credit: University of Maine Climate Reanalyzer.
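As a rough sanity check on the foehn mechanism in the quoted passage: dry air warms on descent at the dry adiabatic lapse rate, roughly 9.8 °C per kilometre (a standard textbook value, not a figure from the article). The sketch below suggests descent from the roughly 1,000-foot ridge contributes only about 3 °C, so most of the record warmth had to come from the warm air advected in from South America, consistent with Masters' explanation.

```python
# Rough estimate of warming from adiabatic descent alone. The dry adiabatic
# lapse rate (~9.8 °C/km) is a standard textbook value, not from the article.
DRY_ADIABATIC_LAPSE_C_PER_KM = 9.8

ridge_height_ft = 1000                        # ridge height from the quoted passage
ridge_height_km = ridge_height_ft * 0.3048 / 1000

warming_c = DRY_ADIABATIC_LAPSE_C_PER_KM * ridge_height_km
print(f"~{warming_c:.1f} °C of warming from descent alone")  # ~3.0 °C
```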
KUDOS: Thanks to Maximiliano Herrera for bringing this to our attention and researching the temperature records for Antarctica.
Christopher C. Burt
Weather Historian ||||| NASA's DC-8 flies over the Brunt Ice Shelf in Antarctica October 26, 2010 in this handout photo provided by NASA, March 26, 2015.
WASHINGTON - Satellite data from 1994 to 2012 reveals an accelerating decline in Antarctica's massive floating ice shelves, with some shrinking 18 percent, in a development that could hasten the rise in global sea levels, scientists say.
The findings, published on Thursday in the journal Science, come amid concern among many scientists about the effects of global climate change on Earth's vast, remote polar regions.
The study relied on 18 years of continuous observations from three European Space Agency satellite missions and covered more than 415,000 square miles (1,075,000 square km).
During the study period's first half, to about 2003, the overall volume decline around Antarctica was small, with West Antarctica losses almost balanced out by gains in East Antarctica. After that, western losses accelerated and gains in the east ended.
"There has been more and more ice being lost from Antarctica's floating ice shelves," said glaciologist Helen Fricker of the Scripps Institution of Oceanography at the University of California, San Diego.
The Crosson Ice Shelf in the Amundsen Sea and the Venable Ice Shelf in the Bellingshausen Sea, both in West Antarctica, each shrank about 18 percent during the study period.
"If the loss rates that we observed during the past two decades are sustained, some ice shelves in the Amundsen and Bellingshausen seas could disappear within this century," added Scripps geophysics doctoral candidate Fernando Paolo.
The melting of these ice shelves does not directly affect sea levels because they are already floating.
"This is just like your glass of gin and tonic. When the ice cubes melt, the level of liquid in the glass does not rise," Paolo said.
But the floating ice shelves provide a restraining force for land-based ice, and their reduction would increase the flow of the ice from the land into the ocean, which would increase sea levels.
"While it is fair to say that we're seeing the ice shelves responding to climate change, we don't believe there is enough evidence to directly relate recent ice shelf losses specifically to changes in global temperature," Fricker said.
Oceanographer and co-author Laurie Padman of Earth & Space Research in Corvallis, Oregon, said that for a few Antarctic ice shelves, ice loss can be related fairly directly to warming air temperatures. Much of the increased melting elsewhere is probably due to more warm water getting under the ice shelves because of increasing winds near Antarctica, Padman added.
(Reporting by Will Dunham; editing by Andrew Hay) | A recently published study and two weather station readings suggest that Antarctica may be exhibiting the effects of global warming, the Guardian reports. A March 24 reading at the Esperanza Base south of Argentina registered a balmy 63.5 degrees Fahrenheit, which is more than 30 degrees above average for that time of year and may be the highest temp ever recorded on the continent, the Weather Underground blog notes. The nearby Marambio Base had registered a record high of 63.3 degrees the day before, per the Guardian. Also worrisome, though not conclusive: a study in Science that indicates a 70% increase in the loss of western Antarctic ice-shelf volume over a 10-year period, while ice gains on eastern Antarctic shelves ground to a halt, the Guardian notes. However, researchers aren't ready quite yet to definitively point the finger. "While it is fair to say that we're seeing the ice shelves responding to climate change, we don't believe there is enough evidence to directly relate recent ice-shelf losses specifically to changes in global temperature," a University of California-San Diego glaciologist tells Reuters UK. (Scientists recently made a big find regarding what may be melting the Antarctic ice.) |
Despite some positive developments, U.S. rule of law assistance in the new independent states of the former Soviet Union has achieved limited results, and the sustainability of those results is uncertain. Experience has shown that establishing the rule of law in the new independent states is a complex undertaking and is likely to take many years to accomplish. Although the United States has succeeded in exposing these countries to innovative legal concepts and practices that could lead to a stronger rule of law in the future, we could not find evidence that many of these concepts and practices have been widely adopted. At this point, many of the U.S.-assisted reforms in the new independent states are dependent on continued donor funding to be sustained.
Despite nearly a decade of work to reform the systems of justice in the new independent states of the former Soviet Union, progress in establishing the rule of law in the region has been slow overall, and serious obstacles remain. As shown in table 1, according to Freedom House, a U.S. research organization that tracks political developments around the world, the new independent states score poorly in the development of the rule of law, and, as a whole, are growing worse over time. These data, among others, have been used by USAID and the State Department to measure the results of U.S. development assistance in this region. In the two new independent states where the United States has devoted the largest amount of rule of law funding—Russia and Ukraine—the situation appears to have deteriorated in recent years. The scores have improved in only one of the four countries (Georgia) in which USAID has made development of the rule of law one of its strategic objectives and the United States has devoted a large portion of its rule of law assistance funding.
I want to emphasize that we did not use these aggregate measures alone to reach our conclusions about the impact and sustainability of U.S. assistance. Rather, we reviewed many of the projects in each of the key elements of U.S. assistance. We examined the results of these projects, assessing the impact they have had as well as the likelihood that that impact would continue beyond U.S. involvement in the projects.
The U.S. government funds a broad range of activities as part of its rule of law assistance. This includes efforts aimed at helping countries develop five elements of a modern legal system (see Fig. 1):
1. a post-communist foundation for the administration of justice,
2. an efficient, effective, and independent judiciary,
3. practical legal education for legal professionals,
4. effective law enforcement that is respectful of human rights, and
5. broad public access to and participation in the legal system.
In general, USAID implements assistance projects primarily aimed at development of the judiciary, legislative reform, legal education, and civil society. The Departments of State, Justice, and the Treasury provide assistance for criminal law reform and law enforcement projects.
A key focus of the U.S. rule of law assistance program has been the development of a legal foundation for reform of the justice system in the new independent states. U.S. projects in legislative assistance have been fruitful in Russia, Georgia, and Armenia, according to several evaluations of this assistance, which point to progress in passing key new laws.
For example, according to a 1996 independent evaluation of the legal reform assistance program, major advances in Russian legal reform occurred in areas that USAID programs had targeted for support, including a new civil code and a series of commercial laws and laws reforming the judiciary. Despite considerable progress in a few countries, major gaps persist in the legal foundation for reform. In particular, Ukraine, a major beneficiary of U.S. rule of law assistance, has not yet passed a new law on the judiciary or new criminal, civil, administrative, or procedure codes since a new constitution was passed in 1996. Furthermore, a major assistance project aimed at making the Ukrainian parliament more active, informed, and transparent has not been successful, according to U.S. and foreign officials we interviewed. In Russia, the government has still not adopted a revised criminal procedure code, a key component of the overall judicial reform effort, despite assistance from the Department of Justice in developing legislative proposals. According to a senior Justice official, Russia is still using the autocratic 1963 version of the procedure code that violates fundamental human rights. The second element in the U.S. government’s rule of law program has been to foster an independent judiciary with strong judicial institutions and well-trained judges and court officers who administer decisions fairly and efficiently. The United States has contributed to greater independence and integrity of the judiciary by supporting key new judicial institutions and innovations in the administration of justice and by helping to train or retrain many judges and court officials. For example, in Russia, USAID provided training, educational materials, and other technical assistance to strengthen the Judicial Department of the Supreme Court. This new independent institution was created in 1998 to assume the administrative and financial responsibility for court management previously held by the Ministry of Justice. USAID and the Department of Justice have also supported the introduction of jury trials in 9 of Russia’s 89 regions for the first time since 1917. Although the jury trial system has not expanded beyond a pilot phase, administration of criminal justice has been transformed in these regions—acquittals, unheard of during the Soviet era, are increasing under this system (up to 16.5 percent of all jury trials by the most recent count). However, U.S. efforts we reviewed to help retool the judiciary have had limited impact so far. USAID assistance efforts aimed at improving training for judges have had relatively little long-term impact. Governments in Russia and Ukraine, for example, have not yet developed judicial training programs with adequate capacity to reach the huge numbers of judges and court officials who operate the judiciaries in these nations. In Russia, the capacity for training judges remains extremely low. The judiciary can train each of its 15,000 judges only about once every 10 years. In Ukraine, the two judicial training centers we visited that had been established with USAID assistance were functioning at far below capacity; in fact one center had been dismantled entirely. Courts still lack full independence, efficiency, and effectiveness. Throughout the region, much of the former structure that enabled the Soviet government to control judges’ decisions still exists, and citizens remain suspicious of the judiciary. The third element of the U.S. 
assistance program has been to modernize the system of legal education in the new independent states to make it more practical and relevant. The United States has sponsored a variety of special efforts to introduce new legal educational methods and topics for both law students and existing lawyers. Notably, USAID has introduced legal clinics into several law schools throughout Russia and Ukraine. These clinics allow law students to get practical training in helping clients exercise their legal rights. They also provide a service to the community by facilitating access to the legal system by the poor and disadvantaged. With the training, encouragement, and financing provided by USAID, there are about 30 legal clinics in law schools in Russia and about 20 in Ukraine. USAID has also provided a great deal of high-quality continuing education for legal professionals, particularly in the emerging field of commercial law. Traditionally, little training of this type was available to lawyers in the former Soviet Union. However, the impact and sustainability of these initiatives are in doubt, as indigenous institutions have not yet demonstrated the ability or inclination to support the efforts after U.S. and other donor funding ends. For example, in Russia, we could not identify any organizations that were engaged in reprinting legal texts and manuals developed with U.S. assistance. In Ukraine, U.S. assistance has not been successful in stimulating law school reforms, and legal education remains rigidly theoretical and outmoded by western standards. Students are not routinely taught many skills important to the practice of law, such as advocacy, interviewing, case investigation, negotiation techniques and legal writing. The United States has largely been unsuccessful at fostering the development of legal associations, such as bar associations, national judges associations, and law school associations, to carry on this educational work in both Russia and Ukraine. U.S. officials had viewed the development of such associations as key to institutionalizing modern legal principles and practices and professional standards on a national scale as well as serving as conduits for continuing legal education for their members. The fourth component of the U.S. government’s rule of law program involves introducing modern criminal justice techniques to local law enforcement organizations. As part of this effort, the United States has provided many training courses to law enforcement officials throughout the new independent states of the former Soviet Union, shared professional experiences through international exchanges and study tours, implemented several model law enforcement projects, and funded scholarly research into organized crime. These programs have fostered international cooperation among law enforcement officials, according to the Department of Justice. U.S. law enforcement officials we spoke to have reported that, as a result of these training courses, there is a greater appreciation among Russians and Ukrainians of criminal legal issues for international crimes of great concern in the United States, such as organized crime, money laundering, and narcotics and human trafficking. They have also reported a greater willingness of law enforcement officials to work with their U.S. and other foreign counterparts on solving international crimes. 
However, we found little evidence that the new information disseminated through these activities has been routinely applied in law enforcement in the new independent states. In Russia and Ukraine we could not identify any full-scale effort in local law enforcement training institutions to replicate or adapt the training for routine application. Nor could we find clear evidence that the U.S. techniques have been widely embraced by training participants. Furthermore, though the United States has sponsored significant amounts of research on organized crime in Russia and Ukraine, we could not determine whether the results of this research had been applied by law enforcement agencies. The fifth element of the rule of law assistance program is the expansion of access by the general population to the justice system. In both Russia and Ukraine, the United States has fostered the development of a number of nongovernmental organizations that have been active in promoting the interests of groups, increasing citizens’ awareness of their legal rights, and helping poor and traditionally disadvantaged people gain access to the courts to resolve their problems. For example, in Russia, USAID has sponsored a project that has helped trade unions and their members gain greater access to the legal system, leading to court decisions that have bolstered the legal rights of millions of workers. In Ukraine, environmental advocacy organizations sponsored by USAID have actively and successfully sued for citizens’ rights and greater environmental protection. Despite their high level of activity in recent years, these nongovernmental organizations still face questionable long-term viability. Most nongovernmental organizations we visited received very little funding from domestic sources and were largely dependent upon foreign donor contributions to operate. The sustainability of even some of the most accomplished organizations we visited remains to be seen. At least three factors have constrained the impact and sustainability of U.S. rule of law assistance: (1) a limited political consensus on the need to reform laws and institutions, (2) a shortage of domestic resources to finance many of the reforms on a large scale, and (3) a number of shortcomings in U.S. program management. The first two factors, in particular, have created a very challenging climate for U.S. programs to have major, long-term impact in these states, but have also underscored the importance of effective management of U.S. programs. In key areas in need of legal reform, U.S. advocates have met some steep political resistance to change. In Ukraine and Russia, lawmakers have not been able to reach consensus on critical new legal codes upon which reform of the judiciary could be based. In particular, Ukrainian government officials are deadlocked on legislation reforming the judiciary, despite a provision in the country’s constitution to do so by June 2001. Numerous versions of this legislation have been drafted by parties in the parliament, the executive branch, and the judiciary with various political and other agendas. Lack of progress on this legislation has stymied reforms throughout the justice system. In Russia’s Duma (parliament), where the civil and the criminal codes were passed in the mid-1990s, the criminal procedure code remains in draft form. 
According to a senior Department of Justice official, the Russian prosecutor’s office is reluctant to support major reforms, since many would require that institution to relinquish a significant amount of the power it has had in operating the criminal justice system. While U.S. officials help Russian groups to lobby for legislative reforms, adoption of such reforms remains in the sovereign domain of the host country. In the legal education system as well, resistance to institutional reform has thwarted U.S. assistance efforts. USAID officials in Russia told us that Russian law professors and other university officials are often the most conservative in the legal community and the slowest to reform. A USAID- sponsored assessment of legal education in Ukraine found that there was little likelihood for reform in the short term due to entrenched interests among the school administration and faculty who were resisting change. Policymakers have not reached political consensus on how or whether to address the legal impediments to the development of sustainable nongovernmental organizations. Legislation could be adopted that would make it easier for these organizations to raise domestic funds and thus gain independence from foreign donors. Historically slow economic growth in the new independent states has meant limited government budgets and low wages for legal professionals and thus limited resources available to fund new initiatives. While Russia has enjoyed a recent improvement in its public finances stemming largely from increases in the prices of energy exports, public funds in the new independent states have been constrained. Continuation or expansion of legal programs initially financed by the United States and other donors has not been provided for in government budgets. For example, in Russia, the system of jury trials could not be broadened beyond 9 initial regions, according to a senior judiciary official, because it was considered too expensive to administer in the other 89 regions. In Ukraine, according to a senior police official we spoke to, police forces often lack funds for vehicles, computers, and communications equipment needed to implement some of the law enforcement techniques that were presented in the U.S.- sponsored training. U.S. agencies implementing the rule of law assistance program have not always managed their projects with an explicit focus on achieving sustainable results, that is, (1) developing and implementing strategies to achieve sustainable results and (2) monitoring projects results over time to ensure that sustainable impact was being achieved. These are important steps in designing and implementing development assistance projects, according to guidance developed by USAID. We found that, in general, USAID projects were designed with strategies for achieving sustainability, including assistance activities intended to develop indigenous institutions that would adopt the concepts and practices USAID was promoting. However, at the Departments of State, Justice, and the Treasury, rule of law projects we reviewed often did not establish specific strategies for achieving sustainable development results. 
In particular, the law enforcement-related training efforts we reviewed were generally focused on achieving short-term objectives, such as conducting training courses or providing equipment and educational materials; they did not include an explicit approach for longer-term objectives, such as promoting sustainable institutional changes and reform of national law enforcement practices. According to senior U.S. Embassy officials in Russia and Ukraine, these projects rarely included follow-up activities to help ensure that the concepts taught were being institutionalized or having long-term impact after the U.S. trainers left the country. We did not find clear evidence that U.S. agencies systematically monitored and evaluated the impact and sustainability of the projects they implemented under the rule of law assistance program. Developing and monitoring performance indicators is important for making programmatic decisions and learning from past experience, according to USAID. We found that the Departments of State, Justice, and Treasury have not routinely assessed the results of their rule of law projects. In particular, according to U.S. agency and embassy officials we spoke to, there was usually little monitoring or evaluation of the law enforcement training courses after they were conducted to determine their impact. Although USAID has a more extensive process for assessing its programs, we found that the results of its rule of law projects in the new independent states of the former Soviet Union were not always apparent. The results of most USAID projects we reviewed were reported in terms of project outputs, such as the number of USAID-sponsored conferences or training courses held, the number and types of publications produced with project funding, or the amount of computer and other equipment provided to courts. Measures of impact and sustainability were rarely used. State has recently recognized the shortcomings of its training-oriented approach to law enforcement reforms. As a result, it has mandated a new approach for implementing agencies to focus more on sustainable projects. Instead of administering discrete training courses, for example, agencies and embassies will be expected to develop longer-term projects. Justice has also developed new guidelines for the planning and evaluation of some of its projects to better ensure that these projects are aimed at achieving concrete and sustainable results. These reform initiatives are still in very early stages of implementation. It remains to be seen whether future projects will be more explicitly designed and carried out to achieve verifiably sustainable results. One factor that may delay the implementation of these new approaches is a significant backlog in training courses that State has already approved under this program. As of February 2001, about $30 million in funding for fiscal years 1995 through 2000 has been obligated for law enforcement training that has not yet been conducted. U.S. law enforcement agencies, principally the Departments of Justice and the Treasury, plan to continue to use these funds for a number of years to pay for their training activities, even though many of these activities have the same management weaknesses as the earlier ones we reviewed. Unless these funds are reprogrammed for other purposes or the projects are redesigned to reflect the program reforms that State and Justice are putting in place, projects may have limited impact and sustainability. | This testimony discusses the U.S. 
government's rule of law assistance efforts in the new independent states of the former Soviet Union. GAO found that these efforts have had limited impact so far, and results may not be sustainable in many cases. U.S. agencies have had some success in introducing innovative legal concepts and practices in these countries. However, the U.S. assistance has not often had a major, long-term impact on the evolution of the rule of law in these countries. In some cases, countries have not widely adopted the new concepts and practices that the United States has advocated. In other cases, continuation or expansion of the innovations depends on further funding from the U.S. or other donors. In fact, the rule of law appears to have actually deteriorated in recent years in several countries, including Russia and Ukraine, according to the data used to measure the results of U.S. development assistance in the region and a host of U.S. government and foreign officials. This testimony summarizes an April 2001 report (GAO-01-354). |
See all 173 bottled waters surveyed. Each "best on transparency" bottled water brand shows a specific geographic location of its water source and treatment method on the label and posts purity testing online. The "worst on transparency" bottled waters list no information on the water's source location, treatment or purity, online or on the label. These lists are drawn from EWG's survey of labels from 173 bottled waters purchased in 2010.
What’s In Your Bottled Water – Besides Water?
Pure, clean water.
That’s what the ads say. But what does the lab say?
When you shell out for bottled water, which costs up to 1,900 times more than tap water, you have a right to know what exactly is inside that pricey plastic bottle.
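For a sense of scale on that multiple, here is a minimal sketch; both prices are assumptions made purely for the arithmetic, not figures from EWG's survey:

```python
# Illustrative cost comparison of bottled vs. tap water, per liter.
# Both prices below are assumed for this example.
GALLON_LITERS = 3.785

tap_usd_per_1000_gal = 2.00                      # assumed municipal rate
tap_usd_per_liter = tap_usd_per_1000_gal / (1000 * GALLON_LITERS)

bottle_usd = 1.00                                # assumed single-bottle price
bottle_liters = 0.5
bottled_usd_per_liter = bottle_usd / bottle_liters

print(f"{bottled_usd_per_liter / tap_usd_per_liter:,.0f}x")
# -> about 3,800x under these assumptions; cheaper bulk water and pricier
#    tap rates move the multiple up or down, which is why figures like
#    "up to 1,900 times" depend heavily on what you compare.
```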
Most bottled water makers don't agree. They keep secret some or all of the answers to these elementary questions:
Where does the water come from?
Is it purified? How?
Have tests found any contaminants?
Among the ten best-selling brands, nine — Pepsi's Aquafina, Coca-Cola's Dasani, Crystal Geyser and six of seven Nestlé brands — don't answer at least one of those questions.
Only one — Nestlé's Pure Life Purified Water — discloses its specific geographic water source and treatment method on the label and offers an 800-number, website or mailing address where consumers can request a water quality test report.
The industry's refusal to tell consumers everything they deserve to know about their bottled water is surprising.
Since July 2009, when Environmental Working Group released its groundbreaking Bottled Water Scorecard, documenting the industry's failure to disclose contaminants and other crucial facts about their products, bottled water producers have been taking withering fire from consumer and environmental groups.
A new EWG survey of 173 unique bottled water products finds a few improvements – but still too many secrets and too much advertising hype. Overall, 18 percent of bottled waters fail to list the location of their source, and 32 percent disclose nothing about the treatment or purity of the water. Much of the marketing nonsense that drew ridicule last year can still be found on a number of labels.
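Translated back into approximate product counts (the 173-product survey size is from the text; the arithmetic and rounding are ours):

```python
# Convert EWG's reported percentages into approximate counts out of
# the 173 bottled water products surveyed.
surveyed = 173
no_source_location = round(0.18 * surveyed)  # fail to list water source location
no_treatment_info = round(0.32 * surveyed)   # disclose nothing about treatment/purity
print(no_source_location, no_treatment_info)  # -> 31 55
```

So roughly 31 products omit the source location, and about 55 say nothing about treatment or purity.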
EWG recommends that you drink filtered tap water. You'll save money, drink water that’s purer than tap water and help solve the global glut of plastic bottles.
We support stronger federal standards to enforce the consumer's right to know all about bottled water.
Until the federal Food and Drug Administration cracks down on water bottlers, use EWG's Bottled Water Scorecard to find brands that disclose the water's source location, treatment and quality and that use advanced treatment methods to remove a broad range of pollutants.
Update (Jan. 25, 2011)
California’s Public Health Department appears to interpret S.B. 220’s source-listing requirement narrowly in light of federal law. Although ambiguity remains with regard to that requirement – underscoring the need for more clarity in the current statute – EWG has updated its original “out-of-compliance” findings to reflect that interpretation. In light of that update, EWG also changed its letter grade for the following products: Alhambra Jr. Sport Crystal-Fresh Purified Water (F to D); Good Stuff by AMPM Purified Drinking Water (D to C); Ralphs Purified Drinking Water (F to D); Refreshe Purified Drinking Water (D to C); and Sunny Select Drinking Water (D to C). |||||
Bottled Water: Pure Drink or Pure Hype?
Chapter 3
Bottled Water Contamination: An Overview of NRDC's and Others' Surveys
Setting aside the question of whether bottled water is as pure as advertised, is the public’s view that bottled water is safer than tap water correct? Certainly the aggressive marketing by the bottled water industry would lead us to believe so.
NRDC undertook a four-year, detailed investigation to evaluate the quality of bottled water. We reviewed published and unpublished literature and data sources, wrote to and interviewed by phone all 50 states asking for any surveys of bottled water quality they have conducted or were aware of, and interviewed experts from FDA. In addition, through three leading independent laboratories, we conducted "snapshot" testing of more than 1,000 bottles of water sold under 103 brand names.
What NRDC has found is in some cases reassuring and in others genuinely troubling. The results of all testing NRDC conducted are presented in Appendix A; Figure 4 summarizes the results.
The bottled water industry generally has publicly maintained that there are no chemical contaminants in bottled water. For example, as noted in Chapter 2, a widely disseminated fact sheet on bottled water distributed by the International Bottled Water Association (IBWA) -- the industry’s trade association -- states flatly that bottled water contains no chlorine or harmful chemicals. [75]
However, our investigation has found that potentially harmful chemical contaminants are indeed sometimes found in some brands of bottled water. (The box at the end of this chapter highlights a particularly troubling example.) NRDC’s testing of more than 1,000 bottles of water (covering about half of FDA-regulated contaminants; see the Technical Report [print report only]) found that at least one sample of 26 of the 103 bottled water brands tested (25 percent) contained chemical contaminants at levels above the strict, health-protective limits of California, the bottled water industry code, or other states [3a] (23 waters, or 22 percent, had at least one sample that violated enforceable state limits). We found only two waters that violated the weaker federal bottled water standards for chemicals (in two repeat samples), and two waters that violated the federal standards for coliform bacteria in one test (though another batch of both of those waters tested clean for bacteria). The Technical Report (print report only) also discusses evidence from other investigators who in the past found chemical contaminants in bottled water at levels violating the federal bottled water standards. [76]
Thus, in our limited bottled water testing, while strict health-protective state limits for chemicals sometimes were not met by about one fourth of the waters, the weaker federal bottled water standards generally were not violated. As noted in Table 2, among the chemical contaminants of greatest potential concern in bottled water are volatile organic chemicals, arsenic, certain other inorganic chemicals, and plastic or plasticizing compounds. Although most bottled water contained no detectable levels of these contaminants, or contained levels of the contaminants lower than those found in many major cities’ tap water, we determined that one cannot assume on faith, simply because one is buying water in a bottle, that the water is of any higher chemical quality than tap water.
Table 2: Selected Contaminants of Potential Concern for Bottled Water

Coliform bacteria: Broad class of bacteria used as a potential indicator of fecal contamination; may be harmless of themselves. Harmful types of coliform bacteria (such as certain fecal coliform bacteria or E. coli) can cause infections with vomiting, diarrhea, or serious illness in children, the elderly, and immunocompromised or other vulnerable people.

Heterotrophic-plate-count (HPC) bacteria: Potential indicator of overall sanitation in bottling and source water; may be harmless of themselves. In some cases may indicate presence of infectious bacteria; data show they are sometimes linked to illnesses. Can interfere with detection of coliform bacteria or infectious bacteria. Unregulated by FDA.

Pseudomonas aeruginosa bacteria: Possible indicator of fecal contamination or unsanitary source water or bottling. Can cause opportunistic infections. Unregulated by FDA.

Arsenic: Known human carcinogen. Also can cause skin, nervous, and reproductive or developmental problems.

Nitrate: Causes "blue baby" syndrome in infants, due to interference with the blood's ability to take up oxygen. Potential cancer risk.

Trihalomethanes (chloroform, bromodichloromethane, dibromochloromethane, and bromoform): Cancer of the bladder, colorectal cancer, possibly pancreatic cancer. Also concerns about possible birth defects and spontaneous abortions.

Phthalate (DEHP): Cancer; possible endocrine system disrupter. Unregulated by FDA.

Source: NRDC
NRDC Testing Methodology
NRDC began during the summer of 1997 to test bottled water quality and continued testing or retesting some brands through early 1999. Our testing methodology is summarized in Table 3, and described in greater detail in the accompanying Technical Report (print report only). We conducted a four-pronged testing program, using three of the nation's most respected laboratories: two major independent commercial labs and one academic laboratory. In this four-pronged testing program, we tested water sold in the five states with the highest bottled water consumption in 1994 (California, Florida, Illinois, New York, and Texas), plus bottled water sold in the District of Columbia. [77] We tried to test major brands that held a significant percentage of the national or regional market share (for those brands for which market-share information was available), and we strove to purchase a variety of other brands and types of water, including the major bottled water products offered by some of the leading supermarket chains in the areas where the water was purchased.
The first prong of our survey was a preliminary screening of 37 California bottled waters in the summer and fall of 1997. The second involved detailed testing of 73 California waters in late 1997 and early 1998. The third was a survey of five bottled waters from each of five states other than California (a total of 25 waters) in late 1997 and early 1998. The final prong involved retesting more than 20 waters in which contamination had been found in earlier tests; this retesting took place in mid- to late 1998 and early 1999.
We sampled the most waters from California, whose residents are by far the greatest consumers of bottled water in the nation. More bottled water is purchased in California than in the next five largest consuming states combined (see Figure 3). California generally has the most stringent standards and warning levels applicable to bottled water in the nation.
All of the labs we contracted with used standard EPA analytical methods for testing water. We conducted "snapshot" testing -- that is, we purchased several bottles of a single type of water, at a single location, and had those bottles tested. If we found a problem, we generally repurchased and then retested the water to confirm the earlier results. [78]
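The purchase-and-retest logic just described can be sketched in a few lines of Python. This is our paraphrase of the protocol, not NRDC's procedure or code; every name below is invented for illustration:

```python
# Sketch of a "snapshot" survey step: test one purchased lot, then
# confirm any exceedance by repurchasing and retesting the same brand.

def snapshot_test(brand, limits, run_lab_tests):
    """limits: contaminant -> limit or guideline (same units as results).
    run_lab_tests(brand) -> {contaminant: measured level}."""
    first = run_lab_tests(brand)
    flagged = [c for c, level in first.items()
               if c in limits and level > limits[c]]
    if not flagged:
        return {"brand": brand, "status": "no exceedance"}

    # Repurchase a new lot and retest to confirm the earlier result.
    retest = run_lab_tests(brand)
    confirmed = [c for c in flagged if retest.get(c, 0.0) > limits[c]]
    return {
        "brand": brand,
        "status": "confirmed exceedance" if confirmed else "not reproduced",
        "first_round_flags": flagged,
        "confirmed_flags": confirmed,
    }
```

As in the text, a problem seen once (such as coliforms that vanish on retest) is reported differently from one confirmed in a second lot.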
We asked the labs to use their standard contaminant test packages in order to control the total testing costs. In general, this meant that the labs tested for many of the most commonly found regulated contaminants, plus certain other contaminants that they could readily detect and quantify using the standard EPA methods and the analytical equipment they routinely use. Thus, some labs were able to detect more contaminants than others, though all tested for a core set of more than 30 regulated contaminants.
Table 3: Summary of Lab Testing Protocols

Environmental Quality Institute (Univ. N.C.): 37 brands of water tested; 41 regulated and over 40 unregulated contaminants tested; EPA analytical methods, single bottle sampled per contaminant type. Initial screening of California waters to determine whether more in-depth testing was needed.

Sequoia Analytical: 73 brands tested; 32 regulated and over 40 unregulated contaminants tested; EPA analytical methods, FDA protocol for sampling (test one composite sample of 10 bottles for chemical and microbial contaminants; 10 individual bottles tested for microbial follow-up if excess bacteria found in first round). More extensive testing of California waters only.

National testing: 25 brands tested; 57 regulated and over 200 unregulated contaminants tested; EPA analytical methods, FDA protocol for sampling (test one composite sample of 10 bottles; 10 individual bottles of all tested for bacteria). Testing of waters from 5 states outside of California (NY, FL, TX, IL, and DC).
Summary of Results of NRDC Testing
NRDC testing: the good news
First, the good news: Most brands of bottled water we tested were, according to our "snapshot" analyses of a subset of regulated contaminants, of relatively good quality (i.e., they were comparable to good tap water). Most waters contained no detectable bacteria, and the levels of synthetic organic chemicals and inorganic chemicals of concern for which we tested were either below detection limits or well below all applicable standards.
Caveats. This is not to say that all of these brands are without risk. One of the key limitations of the testing is that most tests were done just once or twice, so we could have missed a significant but intermittent problem. Numerous studies of source-water quality -- particularly surface-water sources and shallow groundwater sources -- demonstrate that source-water quality may substantially vary over time. [79] Operation, maintenance, or other mishaps at a bottling plant may cause periodic water-contamination problems that would not be detected by such "snapshot" tests. Thus, depending upon the bottler's source water, treatment technology (if any), and manufacturing, operation, and maintenance practices, some bottled waters' quality may vary substantially with time and with different production runs.
In addition, while we did test for dozens of contaminants at a cost of from about $400 to about $1,000 per type of water per round of testing (depending on the intensity of the testing), we were unable to test for many contaminants that may be of health concern. Thus, as is discussed in the accompanying Technical Report (print report only), we were unable to test for many kinds of bacteria, parasites, radioactivity, and toxic chemicals regulated by EPA and FDA in tap water or bottled water because such testing would have been even more expensive or difficult. Still, with those caveats, many bottled waters do appear to be of good quality, based on our limited testing.
NRDC testing: the bad news
For some other bottled waters, the story is quite different. The independent labs that conducted testing for NRDC found high levels of heterotrophic-plate-count bacteria in some samples, and in a few cases coliform bacteria (no coliforms were found in retests of different lots of the same water). The labs also found that some samples contained arsenic (a carcinogen) and synthetic organic chemicals (SOCs, i.e., man-made chemicals containing hydrogen and carbon), such as those contained in gasoline or used in industry. SOCs found included the probable human carcinogen phthalate (likely from the plastic water bottles), and trihalomethanes (cancer-causing by-products of water chlorination, which have been associated with birth defects and spontaneous abortions when found in tap water at high levels). [3b]
A detailed review of all our testing results and those of other investigators is presented in the accompanying Technical Report (print report only), and the actual results for each brand of bottled water we tested are presented in Appendix A. In summary, our testing of 103 types of water found:
Violations of state standards. At least one sample of about one fourth of the bottled waters bought in California (23 waters, or 22 percent) violated enforceable state limits (either bottled water standards or mandatory warning levels).

Violations of federal bottled water quality standards (coliform bacteria and fluoride). Based on limited testing, four waters violated the weak federal bottled water standards (two for coliform bacteria that on retest contained no coliforms, and two for fluoride that were confirmed on retest to contain excessive fluoride). Coliform bacteria in water may not be dangerous themselves, but they are widely used as an indicator that may signal the presence of other bacteria or pathogens that could cause illness. Fluoride at excessive levels can cause mottling or dental fluorosis (pitting of teeth), skeletal fluorosis (adverse effects on bones), and cardiovascular and certain other health effects. [80]

Arsenic contamination. Arsenic is a "known human carcinogen" when in drinking water; it also can cause many other illnesses, including skin lesions, nervous-system problems, and adverse reproductive and cardiovascular effects (the precise levels in drinking water necessary to cause these effects are the subject of heated debate). [81] Our testing found that one or more samples of eight waters (8 percent) purchased in California exceeded the 5 ppb warning level for arsenic set under California's Proposition 65, a law requiring public warnings if a company exposes people to excessive levels of toxic chemicals. [3c] (See Figure 5.)

Trihalomethane violations. Trihalomethanes (THMs) are a family of chemicals created when chlorine is used to disinfect water (chlorine reacts with organic matter in the water to form THMs and other byproducts). Studies of people and animals exposed to THMs in their tap water have found elevated risks of cancer [82] and potentially a higher risk of spontaneous abortions and birth defects. [83] California has adopted a 10 ppb total THM limit, a standard recommended by the International Bottled Water Association (IBWA), the bottled water industry trade association. Twelve waters (12 percent) purchased in California had at least one sample that violated the state and IBWA bottled water standard for THMs. (See Figure 6.) Two waters sold in Florida exceeded the IBWA standard (Florida repealed its 10 ppb TTHM standard in 1997), and one sold in Texas violated the IBWA standard (Texas has not made the stricter 10 ppb standard enforceable). Chlorinated tap water also typically contains THMs (generally at levels above 10 ppb if the water is chlorinated), though many people who buy bottled water to avoid chlorine and its taste, odor, and by-products may be surprised to learn THMs are sometimes found in bottled water as well.

Excessive chloroform. Chloroform is the most common THM found in tap and bottled water; it is of particular concern because it is listed by EPA as a probable human carcinogen. Twelve waters purchased in California had at least one sample that exceeded the warning level for chloroform (a trihalomethane) set by California under Proposition 65, but they were sold without the required health warning (see Appendix A).

Excessive bromodichloromethane (BDCM). BDCM is another THM that EPA has listed as a probable human carcinogen. Ten waters we bought in California that contained unlawful TTHM levels also had at least one sample that exceeded the Proposition 65 warning level for bromodichloromethane. These waters all were sold with no health warning that they contained BDCM at a level above the Proposition 65 level.

Excessive heterotrophic-plate-count (HPC) bacteria. HPC bacteria are a measure of the level of general bacterial contamination in water. HPC bacteria are not necessarily harmful themselves, but they can indicate the presence of dangerous bacteria or other pathogens and are used as a general indication of whether sanitary practices were used by the bottler. Nearly one in five waters tested (18 waters, or 17 percent) had at least one sample that exceeded the unenforceable microbiological-purity "guidelines" adopted by some states for HPC bacteria (500 colony-forming units, or cfu, per milliliter). (See Figure 7.) These state guidelines actually are weaker than the voluntary HPC guidelines used by the industry trade association to check plant sanitation (200 cfu/ml in 90 percent of samples taken five days after bottling), and weaker than the European Union (EU) standard (100 cfu/ml at bottling, at 22 degrees Celsius). (A comparison sketch follows this list.)
Elevated nitrates, but at levels below standards. Nitrates can be present in water as a result of runoff from fertilized fields or lawns, or from sewage; nitrates also may occur naturally, generally at lower levels. At elevated levels, nitrates can cause blue-baby syndrome -- a condition in infants in which the blood has diminished ability to take up oxygen, potentially causing brain damage or death; according to some, nitrates may be linked to cancer in adults. [84] The EPA and FDA standard for nitrates is 10 parts per million (ppm). There is spirited debate about whether these standards are sufficient to protect all infants in light of some studies suggesting ill effects at lower levels, [85] but both EPA and the National Research Council maintain that the current standard is adequate to protect health. [86] We found six bottled waters that had at least one sample containing more than 2 ppm nitrates; four of these had at least one sample containing more than 3 ppm nitrates (two contained up to 5.6 ppm nitrates in at least one test). (See Table 4.) Four of the six waters containing higher nitrate levels were mineral waters. The U.S. Geological Survey says that nitrate levels in excess of 3 ppm may indicate human-caused nitrate contamination of the water, [87] although it may be that some mineral waters naturally contain higher nitrate levels. To be safe, babies probably should not be fed with mineral water containing elevated nitrate levels.
Table 4: Selected Nitrate Levels Found in Bottled Waters (as nitrogen, in ppm)

Fiuggi Natural Mineral Water: 2.5 (first test)
Hildon Carbonated Mineral Water: 5.6 (first test); 5.4 (subsequent test)
Hildon Still Mineral Water: 5.6 (first test)
Perrier Sparkling Mineral Water: 2.8, 2.6 (first tests); 4.3, 4.1 (subsequent tests)
Sahara Mountain Spring Water: 2.5 (first test)
Sparkling Springs: 3.1 (first test)

Source: NRDC, 1997-1999
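Connecting Table 4 to the benchmarks discussed above, a short sketch (our framing; the peak readings are taken from the table) flags each water against the 10 ppm EPA/FDA nitrate standard and the USGS's 3 ppm indicator of possible human-caused contamination:

```python
# Classify Table 4's peak nitrate readings (as nitrogen, in ppm).
EPA_FDA_STANDARD = 10.0   # enforceable standard
USGS_INDICATOR = 3.0      # may indicate human-caused contamination

peak_nitrate_ppm = {
    "Fiuggi Natural Mineral Water": 2.5,
    "Hildon Carbonated Mineral Water": 5.6,
    "Hildon Still Mineral Water": 5.6,
    "Perrier Sparkling Mineral Water": 4.3,
    "Sahara Mountain Spring Water": 2.5,
    "Sparkling Springs": 3.1,
}

for brand, ppm in peak_nitrate_ppm.items():
    if ppm > EPA_FDA_STANDARD:
        note = "exceeds EPA/FDA standard"
    elif ppm > USGS_INDICATOR:
        note = "above USGS indicator, below standard"
    else:
        note = "below both benchmarks"
    print(f"{brand}: {ppm} ppm ({note})")
```

None of these reach the 10 ppm standard, but four of the six exceed the 3 ppm indicator level, matching the counts given in the text.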
No fecal coliform bacteria or Pseudomonas aeruginosa. Although, as noted previously, we did find total coliform bacteria in a few samples, no fecal coliform bacteria or E. coli bacteria were found. Earlier studies have found multiple species of the bacteria Pseudomonas in bottled water. [88] However, in an effort to control costs, we looked only for the species Pseudomonas aeruginosa and found none.

Synthetic organic chemicals at levels below enforceable standards. About 16 percent of the waters (16 of 103) had at least one sample that contained human-made synthetic organic chemicals (SOCs) at levels below state and federal standards. The most frequently found SOCs were industrial chemicals (e.g., toluene, xylene, and isopropyltoluene) and chemicals used in manufacturing plastic (e.g., phthalate, adipate, and styrene). As discussed in the accompanying Technical Report (print report only), some of the chemicals found (such as phthalate) may pose health risks, such as potential cancer-causing effects, even if present at relatively low levels. Generally, long-term consumption (over many years) is required to pose such chronic risks. The levels of these contaminants found in our testing are indicated in Table 5.

Overall contamination findings. Overall, at least one sample of about one third of the tested waters (34 waters, or 33 percent) contained significant contamination (i.e., contaminants were found at levels in excess of standards or guidelines). This is not simply the sum of the waters that violated enforceable standards plus those that exceeded guidelines, as some waters violated both.

The detailed results of our testing for each type of water are presented in the Technical Report (print report only). As is discussed there, testing by states and by academic researchers has also sometimes found the contaminants we studied, or other potentially toxic and infectious agents, in some brands of bottled water.
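As promised in the HPC finding above, here is a small comparison of the three benchmarks. The plate counts are invented; the thresholds (a 500 cfu/ml state guideline, IBWA's 200 cfu/ml in 90 percent of samples taken five days after bottling, and the EU's 100 cfu/ml at bottling) are from the text:

```python
# One lot's HPC plate counts in cfu/ml (invented example data).
samples = [40, 120, 300, 450, 90, 210, 75, 480, 55, 150]

state_ok = all(s <= 500 for s in samples)   # unenforceable state guideline
ibwa_ok = sum(s <= 200 for s in samples) / len(samples) >= 0.90
eu_ok = all(s <= 100 for s in samples)      # EU standard at bottling, 22 C

print(state_ok, ibwa_ok, eu_ok)  # -> True False False
```

With these made-up counts, the lot passes the weakest state guideline while failing both the industry and EU benchmarks, the same ordering of stringency the text describes.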
Table 5: Selected Synthetic Organic Compounds (Other Than THMs) in Bottled Water (levels in ppb)

Alhambra Crystal Fresh Drinking Water (CA): xylene 2.7 (test 1), 0 (test 2); toluene 12.5 (test 1), not detected (test 2); other VOCs not detected (tests 1 & 2). Xylene and toluene below FDA & CA standards, but presence could indicate treatment standard violation.

Black Mountain Spring Water (CA): xylene not detected (tests 1-3); toluene 8.9 (test 1), not detected (tests 2 & 3); other VOCs not detected (tests 1 & 2). Toluene below FDA and CA standards, but presence could indicate treatment standard violation.

Lady Lee Drinking Water (Lucky, CA): xylene 2.9 (test 1), not detected (test 2); toluene 11.0 (test 1), 0.5 (test 2); other VOCs not detected (tests 1 & 2). Xylene and toluene below FDA & CA standards, but presence could indicate treatment standard violation.

Lady Lee Natural Spring Water (Lucky, CA): xylene 3.0 (test 1), not detected (test 2), 0 (test 3); toluene 13.9 (test 1), not detected (test 2), 0.5 (test 3); other VOCs not detected (tests 1 & 2). Xylene and toluene below FDA & CA standards, but could indicate CA treatment standard violation.

Lady Lee Purified Water (Lucky, CA): xylene 9.4 (test 1), not detected (test 2); toluene 9.5 (test 1), not detected (test 2); ethylbenzene 2.0 (test 1), not detected (tests 2 & 3); methylene chloride 4.1 (test 3). Xylene, toluene, methylene chloride, and ethylbenzene below FDA & CA standards, but could indicate CA treatment standard violation. Methylene chloride standard is 5 ppb.

Lucky Sparkling Water (w/raspberry) (CA): xylene not detected; toluene not detected; p-isopropyltoluene 5.4. Single test; no standard for p-isopropyltoluene.

Lucky Seltzer Water (CA): xylene not detected (tests 1 & 2); toluene not detected (test 1), 1.8 (test 2); n-isopropyltoluene 230 (test 2) and n-butylbenzene 21 (test 2), neither detected in test 1. Source of elevated levels of n-isopropyltoluene and n-butylbenzene contamination unknown; no standards apply.

Dannon Natural Spring Water (NY): xylene not detected (tests 1-3); toluene not detected (tests 1-3); methylene chloride 1.5 (test 3), not detected in tests 1 & 2. FDA's methylene chloride (dichloromethane) standard is 5 ppb.

Nursery Water (CA): xylene 3.2 (test 1), not detected (test 2); toluene 12.4 (test 1), 0.6 (test 2); styrene 3.0 (test 1), not detected (test 2). Xylene, toluene, and styrene below FDA & CA standards, but could indicate CA treatment standard violation.

Perrier Mineral Water (CA): xylene not detected (tests 1-3); toluene not detected (tests 1-3); 2-chlorotoluene 4.6 (test 1), 3.7 (test 2), not detected (test 3). No standard for 2-chlorotoluene; contamination from unknown source.

Polar Spring Water (DC): xylene not detected; toluene 2.5; other VOCs not detected. Toluene detected at level below FDA standard (single test).

Publix Drinking Water (FL): xylene not detected (tests 1-3); toluene not detected (tests 1-3); acetone 11 (test 1), 14 (test 2), 16 (test 3); styrene 0.6 (test 1), not found in tests 2-3. Styrene found at level well below EPA Health Advisory level; no standard or Health Advisory for acetone.

Publix Purified Water (FL): xylene not detected; toluene not detected; styrene 0.2. Styrene found at level well below EPA Health Advisory level (single test).

Safeway Purified Water (CA): xylene not detected (tests 1 & 2); toluene 8.4 (test 1), not detected (test 2). Toluene detected at level below FDA and state standard, but could indicate CA treatment standard violation.

Safeway Spring Water (CA): xylene 3.1 (test 1), not detected (test 2); toluene 14.2 (test 1), not detected (test 2). Xylene and toluene below FDA & CA standards, but could indicate CA treatment standard violation.

Safeway Spring Water (DC): xylene not detected; toluene 4.7. Single test; toluene below FDA standard.

Source: NRDC 1997-1999
Other Surveys of U.S. Bottled Water Quality
Relatively little information about bottled water quality is readily available to consumers. Few surveys of bottled water quality have been conducted in the United States during the past four years, and fewer still are widely available.
A handful of state governments have done surveys in recent years. Kansas has done a small survey of certain waters sold in the state, [89] Massachusetts prepares an annual summary of industry testing of waters sold in that state, [90] and New Jersey issues an annual summary, primarily of industry testing of water sold there. [91] In addition, Pennsylvania periodically issues a small state survey of waters sold locally, [92] and Wisconsin issues a small annual testing of about a dozen state waters. [93] In general, these states have reached conclusions similar to those we have reached: that most bottled water is of good quality but that a minority of the bottled water tested contains contaminants such as nitrates or synthetic organic chemicals, in a few cases at levels of potential health concern. These surveys are summarized in detail in the Technical Report (print report only).
A few academicians have published papers focusing on bottled water contamination from specific types of contaminants. For example, academic studies have focused on Pseudomonas bacteria in various brands of bottled water, [94] the leaching of chemicals used in plastic manufacturing (such as phthalates) from plastic bottles into the water, [95] or contamination of bottled water with certain volatile synthetic organic compounds. [96] The researchers often tested only a relatively small number of brands of water, or failed even to name which bottled water was tested, making the information of limited value to consumers seeking to select a brand of water that is uncontaminated. Comprehensive studies of Canadian bottled waters also have been published -- without naming the brands with problems. The results of many of these studies are in the Technical Report (print report only), which presents in greater detail the evidence of microbiological and chemical contamination of bottled water.
Potential for Disease from Bottled Water
As is discussed in the accompanying Technical Report (print report only), there is no active surveillance for waterborne disease from tap water in the United States, nor is there active surveillance of potential disease from bottled water. Certain "reportable" diseases, such as measles, must be reported to CDC and state health departments, and there is active surveillance for them. Most diseases caused by organisms that have been found in bottled water, however, are not reportable, and in any event may come from a variety of sources, so the amount of disease from microbiologically contaminated bottled water (or tap water) is unknown. Because no one is conducting active surveillance for waterborne illness, even relatively common illness from bottled water would be unlikely to be noticed by health officials unless it reached the point of a major outbreak or epidemic.
There are cases of known and scientifically well-documented waterborne infectious disease from bottled water, but most have occurred outside of the United States (see Technical Report [print report only] and Appendix B). However, there clearly is a widespread potential, according to independent experts, for waterborne disease to be spread via bottled water. [97]
Bottled Water and Vulnerable Populations
Many people who are especially vulnerable to infection (such as the infirm elderly, young infants, people living with HIV/AIDS, people on immunosuppressive chemotherapy, transplant patients, etc.) use bottled water as an alternative to tap water out of concern for their safety. Some leading public-health experts, therefore, argue that bottled water should be of higher microbiological quality than most foods. [98] In fact, health-care providers and other professionals often recommend that people who are immunocompromised or who suffer from chronic health problems drink bottled water. Indeed, FDA's guidance for immunocompromised people (posted on the FDA Web site) recommends that people with lowered immunity should "drink only boiled or bottled water. . . ." [99]
Immunocompromised people often are not aware of the need to ensure that they are drinking microbiologically safe water or are vaguely aware of this issue but simply switch to bottled water on the assumption that it is safer than tap water. As discussed previously and in detail in the accompanying Technical Report (print report only), this may not be a safe assumption.
Bottled Water Storage and Growth of Microorganisms
Bottled water often is stored at relatively warm (room) temperatures for extended periods of time, generally with no residual disinfectant contained in it. As noted in the Technical Report (print report only) and shown in Figure 8, several studies have documented that there can be substantial growth of certain bacteria in bottled mineral water during storage, with substantial increases in some cases in the levels of types such as heterotrophic-plate-count-bacteria and Pseudomonas. [100] Studies also have shown that even when there are relatively low levels of bacteria in water when it is bottled, after one week of storage, total bacteria counts can jump by 1,000-fold or more in mineral water. [101]
[Figure 8: Bacterial Growth in Two Bottled Waters. Source: Adapted from P.V. Morais and M.S. Da Costa, "Alterations in the Major Heterotrophic Bacterial Populations Isolated from a Still Bottled Mineral Water," J. Applied Bacteriol., v. 69, pp. 750-757, Figure 1 (1990).]
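To put the thousand-fold storage increase in perspective, a quick calculation (ours, assuming steady exponential growth, which real bottle populations only approximate) gives the implied average doubling time:

```python
import math

# A 1,000-fold increase over 7 days under steady exponential growth.
doublings = math.log2(1000)                 # about 9.97 doublings
doubling_time_hours = 7 * 24 / doublings
print(f"{doubling_time_hours:.1f} hours")   # -> about 16.9 hours
```

In other words, the reported week-long jump corresponds to the population doubling roughly every 17 hours on average.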
Conclusions Regarding Bottled Water Contaminants
Our limited "snapshot" testing, and that published in a few other recent surveys of bottled water, indicate that most bottled water is of good quality. However, our testing also found that about one fourth of the tested bottled water brands contained microbiological or chemical contaminants in at least some samples at levels sufficiently high to violate enforceable state standards or warning levels. About one fifth of the brands tested exceeded state bottled water microbial guidelines in at least some samples. Overall, while most bottled water appears to be of good quality, it is not necessarily any better than tap water, and vulnerable people or their care providers should not assume that all bottled water is sterile. They must be sure it has been sufficiently protected and treated to ensure safety for those populations.
An Example of Industrial-Solvent Contamination of Bottled Water [102] One particularly troubling case of industrial-chemical contamination of bottled water arose in Massachusetts. Massachusetts Department of Public Health files reveal that the Ann & Hope commercial well in Millis, Massachusetts, for years supplied several bottlers, including Cumberland Farms, West Lynn Creamery, Garelick Farms, and Spring Hill Dairy with "spring water" sold under many brand names. According to state officials and records, this well is located literally in a parking lot at an industrial warehouse facility and is sited near a state-designated hazardous-waste site. Several chemical contaminants were found in the water, including trichloroethylene (an EPA-designated probable human carcinogen). On at least four occasions these chemicals were found at levels above EPA and FDA standards in the well water. Dichloroethane, methylene chloride, and other synthetic organic chemicals (industrial chemicals) were also found, though the source of these contaminants reportedly was not identified. Contamination was found in the water in 1993, 1994, 1995, and 1996, but according to a state memo written in 1996, "at no time did Ann & Hope [the well operating company] do anything to determine the source of the contamination nor treat the source. Rather, they continued to sell water laced with volatile organic compounds, some of which were reported in finished product." The contamination levels depended on pumping rates from the wells. After a state employee blew the whistle on the problem and demanded better protection of bottled water in the state, she was ordered not to speak to the media or bottlers and was reassigned by Massachusetts Department of Public Health supervisors to other duties, in what she alleges was a retaliatory action. State officials deny that her reassignment was due to retaliation. The well reportedly is no longer being used for bottled water after the controversy became public.
Chapter Notes
3a. For cost reasons, we did not test for any radiological contaminants.

3b. Throughout this report and the attached Technical Report (print report only) we refer to two categories of chemicals for which we tested: semivolatile synthetic organic chemicals and volatile organic chemicals (VOCs). Technically, synthetic organic chemicals (SOCs) include any man-made chemicals—including nonvolatile, semivolatile, and volatile—that contain hydrogen and carbon. We, EPA, and FDA refer to VOCs as a shorthand for volatile synthetic organic chemicals, and to semivolatile SOCs as separate types of chemicals, even though many VOCs are also a type of SOC. The reason for differentiating between these two categories of contaminants is that EPA standard methods for testing for them are different, and because both EPA and FDA rules tend to artificially distinguish between VOCs and SOCs—the latter being shorthand for semivolatile SOCs.

3c. None of the waters we tested exceeded the FDA and EPA standard for arsenic in water of 50 ppb. That standard originally was set in 1942 and is 2,000 times higher than the level EPA recommends for ambient surface water for public-health reasons; it also is 5 times higher than the World Health Organization and European Union arsenic-in-drinking-water limit. Congress has required that the EPA standard be updated by the year 2001. For reasons discussed in the accompanying Technical Report (print report only), many public health, medical, and other experts believe that the current EPA/FDA standard is far too high.
Report Notes
75. IBWA, "FAQs [Frequently Asked Questions] About Bottled Water," (1998); available at www.bottledwater.org/faq.html#3. 76. See, e.g., "The Selling of H2O," Consumer Reports, p. 531 (September 1980),.(finding excessive arsenic in several waters); "Water, Water Everywhere," Consumer Reports, pp. 42-48 (January 1987), (also finding excessive arsenic in several waters); see also, "Bottled Water Regulation," Hearing of the Subcommittee on Oversight and Investigations of the House Committee on Energy and Commerce, Serial No. 102-36, 102nd Cong., 1st Sess. 5, (April 10, 1991), (noting excessive benzene and other contaminants in bottled water). 77. According to figures for 1994 collected by the Beverage Marketing Corporation, the leading states were, in order, California (about 30% of the market), Florida (about 6%), New York (about 6%), Texas (about 6%) and Illinois (about 4%). Beverage Marketing Corporation, Bottled Water in the U.S. , 1996 Edition (1996), as cited in New Jersey Department of Health & Senior Services, Report to the New Jersey Legislature, Summarizing Laboratory Test Results on the Quality of Bottled Drinking Water for the Period January 1, 1995 through December 31, 1996, p. 6 (July 1997). A more recent survey found "California remains the top market for bottled water, with four times the number of gallons sold as the second-largest market. In fact, Californians drank 893,700 gallons of bottled water in 1997, more than the next four states combined: Florida (221,700 gallons), Texas (218,700), New York (204,400), and Arizona (124,900)." C. Roush, "Bottled Water Sales Booming," The Daily News of Los Angeles, p. B1 (April 16, 1998). 78. In a handful of cases, water was found in a test to contain contamination at levels of potential concern, but not retested -- generally because the water could not be found for retesting or it was logistically impractical to repurchase and reship the water for retesting. (See Appendix A.) 79. For example, the U.S. Geological Survey's (USGS) National Water Summaries (see, e.g. USGS, National Water Summary, 1988-1996), and National Water Quality Assessment Program (see, e.g., USGS National Water Quality Assessment Program--Pesticides in Ground Water (1996), USGS National Water Quality Assessment Program -- Pesticides in Surface Water (1997); see also www.usgs.gov (amply document that water quality measured using pesticides or other indicator contaminants can vary by orders of magnitude in a stream or shallow groundwater in some areas, depending upon the time of year, chemical use, hydrologic events such as precipitation, etc.) 80. See, U.S. Public Health Service, Department of Health and Human Services, Review of Fluoride: Benefits and Risks (February 1991); B. Hileman, "Fluoridation of Water: Questions About Health Risks and Benefits Remain After More than 40 Years," Chemical & Engineering News, pp. 26-42 (August 1, 1988); Robert J. Carton, Ph.D., and J. William Hirzy, Ph.D., EPA, and National Treasury Employees Union, "Applying the NAEP Code of Ethics to the Environmental Protection Agency and the Fluoride in Drinking Water Standard," Proceedings of the 23rd Annual Conference of the National Association of Environmental Professionals; 24 June 1998, San Diego, California, Sponsored by the California Association of Environmental Professionals, available at http://home.cdsnet.net/~fluoride/naep.htm. 81. Smith et al., "Cancer Risks from Arsenic in Drinking Water," Environmental Health Perspectives, vol. 97, pp. 
259-67 (1992); Agency for Toxic Substances and Disease Registry, Toxicological Profile for Arsenic, (1993); NRDC, USPIRG, and Clean Water Action, Trouble on Tap: Arsenic, Radioactive Radon, and Trihalomethanes in Our Drinking Water (1995); United States Environmental Protection Agency, Health Assessment Document for Inorganic Arsenic - Final Report (March 1984); M. S. Golub, M.S. Macintosh, and N. Baumrind, "Developmental and Reproductive Toxicity of Inorganic Arsenic: Animal Studies and Human Concerns," J. Toxicol. Environ. Health B. Crit. Rev., vol. 1, no. 3, pp. 199-241 (July 1998). 82. R.D. Morris, "Chlorination, Chlorination By-Products, and Cancer: A Meta Analysis," American Journal of Public Health, vol. 82, no. 7, at 955-963 (1992); EPA, "Proposed National Primary Drinking Water Regulations for Disinfectants and Disinfection By-Products," 59 Fed. Reg. 38668 (July 29, 1994); NRDC, U.S. PIRG, and Clean Water Action, Trouble on Tap: Arsenic, Radioactive Radon, and Trihalomethanes in Our Drinking Water (1995). 83. See, S.H. Swan, et al., "A Prospective Study of Spontaneous Abortion: Relation to Amount and Source of Drinking Water Consumed in Early Pregnancy," Epidemiology, vol. 9, no. 2, pp. 126-133 (March 1998); K. Waller, S. H. Swan, et al. (1998). "Trihalomethanes in Drinking Water and Spontaneous Abortion," Epidemiology, vol. 9, no. 2, pp. 134-40 (1998); F. J. Bove, et al. "Public Drinking Water Contamination and Birth Outcomes," Amer. J. Epidemiol. , vol. 141, no. 9, pp. 850-862 (1995); see also, NRDC, U.S. PIRG, and Clean Water Action, Trouble on Tap: Arsenic, Radioactive Radon, and Trihalomethanes in Our Drinking Water (1995). 84. EPA, "National Primary Drinking Water Regulations, Final Rule," 56 Fed. Reg. 3526, at 3537-38 (January 30, 1991); Environmental Working Group, Pouring it On: Nitrate Contamination of Drinking Water (1996); National Research Council, Nitrate and Nitrite in Drinking Water (1995). 85. Environmental Working Group, Pouring it On: Nitrate Contamination of Drinking Water, p. 11 (1996),(citing P.G. Sattelmacher, "Methemoglobinemia from Nitrates in Drinking Water, Schriftenreiche des Verins fur Wasser Boden und Luthygiene, no. 21 (1962), and Simon, et al. , "Uber Vorkommen, Pathogenese, und Mogliichkeiten sur Prophylaxe der Durch Nitrit Verursachten Methamogloniamie," Zeitschrift fur Kinderheilkunde, vol. 91, pp. 124-138 (1964)). 86. Ibid. 87. R. J. Madison and J.O. Brunett, U.S. Geological Survey, "Overview of Nitrate in Ground Water of the United States," National Water Summary, 1984: USGS Water Supply Paper 2275, p. 93 (1985). 88. D.W. Warburton, "A Review of the Microbiological Quality of Bottled Water Sold in Canada, Part 2: The Need for More Stringent Standards and Regulations," Canadian J. of Microbiology, vol. 39, p. 162 (1993); H. Hernandez-Duquino, and F.A. Rosenberg, "Antibiotic-Resistant Pseudomonas in Bottled Drinking Water," Canadian J. of Microbiology, vol. 33, 286-289 (1987); P.R. Hunter, "The Microbiology of Bottled Natural Mineral Waters," J. Applied Bacteriol., vol. 74, pp. 345-352 (1993); see also, F.A. Rosenberg, "The Bacterial Flora of Bottled Waters and Potential Problems Associated With the Presence of Antibiotic-Resistant Species," in Proceedings of the Bottled Water Workshop, September 13 and 14, 1990, A Report Prepared for the Use of the Subcommittee on Oversight and Investigations of the Committee on Energy and Commerce, U.S. House of Representatives, Committee Print 101-X, 101st Cong., 2d Sess. pp. 72-83 (December, 1990). 89. 
Kansas Department of Health and the Environment, A Pilot Study to Determine the Need for Additional Testing of Bottled Water in the State of Kansas (undated, 1994?). 90. Commonwealth of Massachusetts, Executive Office of Health and Human Services, Department of Public Health, Division of Food and Drugs, Survey of Bottled Water Sold in Massachusetts (May 22, 1997). See also, annual Surveys of Bottled Water Sold in Massachusetts for 1996, 1995, and 1994. 91. New Jersey Department of Health and Senior Services, Division of Environmental and Occupational Health Services, Report to the New Jersey legislature, Senate Environment & Assembly Environment, Science, and Technology Committees, Summarizing Laboratory Test Results on the Quality of Bottled Drinking Water for the Period January 1, 1995 through December 31, 1996 (July 1997). 92. Pennsylvania Department of Environmental Protection, Bureau of Water Supply and Community Health, Division of Drinking Water Management, Bottled Water Quality Assurance Survey: Summary Report for 1993 through 1995 (1995). 93. Wisconsin Department of Agriculture, Trade, and Consumer Protection, State of Wisconsin Bottled Drinking Water Report & Analytical Results (Fiscal Year 1997); accord, Wisconsin Department of Agriculture, Trade, and Consumer Protection, State of Wisconsin Bottled Drinking Water Sampling and Analysis Test Results (Fiscal Year 1994). 94. See, e.g., H. Hernandez-Duquino and F.A. Rosenberg, "Antibiotic-Resistant Pseudomonas in Bottled Drinking Water," Can. J. Microbiology, vol. 33, p. 286 (1987). 95. R. Ashby, "Migration from Polyethylene Terepthalate Under All Conditions of Use," Food Add. & Contamin., vol. 5, pp. 485-492 (1988); J. Gilbert, L. Castle, S.M. Jickells, A.J. Mercer, and M. Sharman, "Migration from Plastics Into Foodstuffs Under Realistic Conditions of Use," Food Add. & Contamin., vol. 5, pp. 513-523 (1988); S. Monarca, R. De Fusco, D. Biscardi, V. De Feo, R. Pasquini, C. Fatigoni, M. Moretti, and A. Zanardini, "Studies of Migration of Potentially Genotoxic Compounds Into Water Stored In PET Bottles," Food Chem. Toxic. , vol. 32, no. 9, pp. 783-788 (1994). 96. Page, et al., "Survey of Bottled Drinking Water Sold in Canada, Part 2: Selected Volatile Organic Compounds," J. AOAC International, vol. 76, no. 1, pp. 26-31 (1993). 97. See, e.g., D.W. Warburton, "A Review of the Microbiological Quality of Bottled Water Sold in Canada. Part 2. The Need for More Stringent Standards and Regulations." Canadian J. Microbiology, vol. 39, pp. 158-168 (1993); P.R. Hunter, "The Microbiology of Bottled Natural Mineral Waters," J. Applied Bacteriol. , vol. 74 345-52 (1993); L. Moreira, et al., "Survival of Allochthonous Bacteria in Still Mineral Water Bottled in Polyvinyl Chloride and Glass, J. Applied Bacteriol. , vol. 77, pp. 334-339 (1994). 98. D.W. Warburton, "A Review of the Microbiological Quality of Bottled Water Sold in Canada, Part 2: The Need for More Stringent Standards and Regulations," Canadian J. of Microbiology, vol. 39, p. 162 (1993). 99. D. Farley, "Food Safety Crucial for People With Lowered Immunity," FDA Consumer, available at www.fda.gov (printed 8/19/1997). 100. L. Moreira, P. Agostinho, P.V. Morais, and M.S. da Costa, "Survival of Allochthonous Bacteria in Still Mineral Water Bottled in Polyvinyl Chloride (PVC) and Glass," J. Applied Bacteriology, vol. 77, pp. 334-339 (1994); P.V. Morais, and M.S. Da Costa, "Alterations in the Major Heterotrophic Bacterial Populations Isolated from a Still Bottled Mineral Water," J. 
Applied Bacteriol., vol. 69, pp. 750-757 (1990); P.R. Hunter, "The Microbiology of Bottled Natural Mineral Waters," J. Applied Bacteriol., vol. 74, pp. 345-52 (1993); F.A. Rosenberg, "The Bacterial Flora of Bottled Waters and Potential Problems Associated With the Presence of Antibiotic-Resistant Species," in Proceedings of the Bottled Water Workshop, September 13 and 14, 1990, A Report Prepared for the Use of the Subcommittee on Oversight and Investigations of the Committee on Energy and Commerce, U.S. House of Representatives, Committee Print 101-X, 101st Cong., 2d Sess. pp. 72-81 (December, 1990); D.W. Warburton, B. Bowen, and A. Konkle, "The Survival and Recovery of Pseudomonas aeruginosa and its effect on Salmonellae in Water: Methodology to Test Bottled Water in Canada," Can. J. Microbiol., vol. 40, pp. 987-992 (1994); D.W. Warburton, J.K. McCormick, and B. Bowen, "The Survival and Recovery of Aeromonas hydrophila in Water: Development of a Methodology for Testing Bottled Water in Canada," Can. J. Microbiol., vol. 40, pp. 145-48 (1994); D.W. Warburton, "A Review of the Microbiological Quality of Bottled Water Sold in Canada, Part 2: The Need for More Stringent Standards and Regulations," Canadian J. of Microbiology, vol. 39, p. 162 (1993); A. Ferreira, P.V. Morais, and M.S. Da Costa, "Alterations in Total Bacteria, Iodonitrophenyltetrazolium (INT)-Positive Bacteria, and Heterotrophic Plate Counts of Bottled Mineral Water," Canadian J. of Microbiology, vol. 40, pp. 72-77 (1994). 101. Ibid; see especially A. Ferreira, A., P.V. Morais, and M.S. Da Costa, "Alterations in Total Bacteria, Iodonitrophenyltetrazolium (INT)-Positive Bacteria, and Heterotrophic Plate Counts of Bottled Mineral Water," Canadian J. of Microbiology, vol. 40, pp. 72-77 (1994). 102. The information in this text box is summarized from the Massachusetts Department of Public Health’s (MDPH) Ann & Hope Water Incident Files, 1993-1997, including MDPH, Survey of Massachusetts Bottlers for Source and Finished Product Contamination (1992-1997); Summary of the Amount of Water Withdrawn from the Millis Springs, Inc. Spring #2 (undated); Letter from Dr. Elizabeth Bourque to J. McKinnies, Ann & Hope (August 7, 1996); Memorandum From Dr. Bourke to Paul Tierney, December 13, 1996 (MDPH Memoranda Provided to NRDC Pursuant to Freedom of Information Request); D. Talbot, "Bottled Water Flows from Troubled Well," Boston Herald, p. 1 (December 16, 1996); E. Leuning, "Toxin in Ann & Hope Wells Worries Officials," Middlesex News, p. 1 (September 18, 1996); E. Leuning, and H. Swails, "Water Source has History of Contaminants," Country Gazette (September 18, 1996); Personal Communication with Dr. Bourque, MDPH, August 1997, and January 1999; Personal Communication with Paul Tierney, MDPH, January 1999.
Bottled Water: Pure Drink or Pure Hype? By Erik D. Olson. April 1999.
||||| Just a couple years ago, when Hurricane Irene flooded our nearby water purification plant, our tap water was no longer safe for drinking, cooking — basically anything besides showering. And I had a newborn baby in the house drinking a bottle of formula every three hours. Needless to say, I got acquainted with the water sold in the grocery store real fast. And the choices were downright overwhelming.
Where were the days of simply picking a few gallons of bottled water off the shelf? Why did I now have to choose whether I wanted drinking water or purified water? And what was the difference anyway? Wasn’t all bottled water the same? Turns out, not so much.
I did what any mother would do in my situation: I bought a half dozen gallons of each kind and lugged them all home. Something was bound to be good enough for my baby and the rest would have to be good enough for me.
The EPA’s website finally answered my questions — after a few quick clicks, I was a water connoisseur. Now I pass that wisdom on to you, my dear readers:
Drinking water — Drinking water is just that: water that is intended for drinking. It is safe for human consumption and comes from a municipal source. There are no added ingredients besides what is considered usual and safe for any tap water, such as fluoride.
Distilled water — Distilled water is a type of purified water. It's water that has gone through a rigorous filtration process to strip it not only of contaminants, but of any natural minerals as well. This water is best for use in small appliances, like hot water urns or steam irons, because it won't leave the mineral buildup you often get with tap water. Though it may seem counterintuitive, this water is not necessarily the best for human consumption, since all of the water's natural, and often beneficial, minerals are absent.
Purified water — Purified water is water that comes from any source, but has been purified to remove any chemicals or contaminants. Types of purification include distillation, deionization, reverse osmosis, and carbon filtration. Like distilled water, it has its advantages and disadvantages, the advantage being that potentially harmful chemicals are removed and the disadvantage being that beneficial minerals may be removed as well.
Spring water — This is what you often find in bottled water. It’s from an underground source and may or may not have been treated and purified. Though spring water sounds more appealing (like many others, I imagine my spring water coming from a rushing spring at the base of a tall, snow-capped mountain), it’s not necessarily the best water for drinking if you have other options. Studies done by the NRDC (Natural Resources Defense Council) have found contaminants in bottled water such as coliform, arsenic and phthalates. A lot of bottled water is labeled as spring water, but the source of that water is often a mystery, as this Environmental Working Group report makes clear. This topic has been a popular one in recent years, sparking plenty of controversy.
So what did I choose when faced with the myriad of choices? For my family, I chose drinking water, but depending on where you live, you may make a different choice. To check the quality of your local tap water, check with the EPA. To check the water quality of your favorite bottled water, check out the Environmental Working Group's report on bottled waters. ||||| Each year by July 1 you should receive in the mail a short report (consumer confidence report, or drinking water quality report) from your water supplier that tells where your water comes from and what's in it.
| Back when Hurricane Irene struck the East Coast, one new mom faced a decision: what kind of bottled water to buy after the nearby water purification plant was flooded? Wanting to keep her newborn healthy, Chanie Kirschner reviewed the EPA's website to get the lowdown on water, she writes at Mother Nature Network. What she learned: Drinking water: It's from a municipal source, it's safe, and it has no added ingredients beyond what you need (like fluoride). Distilled water: A rigorous filtration process has removed both contaminants and natural minerals; it's a type of purified water. It's best for small appliances like steam irons, which won't build up minerals. But the lack of healthy minerals is a negative if you want drinking water. Purified water: It's been stripped of chemicals and contaminants, but like distilled water, that comes with pluses and minuses. Spring water: Despite the label, it may be more "glorified tap water" than natural aqua from "a tall snow-capped mountain," writes Kirschner. Some studies found it can even contain contaminants like arsenic and coliform. So Kirschner ultimately picked drinking water, because that's what it's meant for: drinking. But in case that solution makes you nervous, you can confirm the quality of your local tap water at the EPA's website. Or you can look up the quality of your preferred bottled water at the Environmental Working Group's site. |
"Capital in the Twenty-First Century," at first glance, seems an unlikely candidate to become a best-seller in the U.S. After all, it's 700 pages long, translated from French, and analyzes centuries of data on wealth and economic growth.
But the book, from economist Thomas Piketty, is now No. 1 on Amazon.com's best-seller list, thanks to rave reviews and positive word of mouth. Beyond that, however, the book has something else going for it: "Capital" has hit a nerve with Americans with its message about income inequality.
An economics book becoming a best-seller is "unusual, and it speaks to the fact that Piketty is addressing a really fundamental issue," said Lawrence Mishel, president of the Economic Policy Institute. "He has his finger on a great dynamic, and is changing the terms of our discussion. Rather than asking why low-wage workers are not doing well, it focuses on the wealthy and the role of capital."
The main thrust of the book is that, in the jargon of economists, the rate of return on capital has far outstripped the rate of economic growth. The book also portrays the post-World War II period of economic progress across all classes as an anomaly, not the norm.
The result: mounting income inequality, as the wealthiest Americans gain a growing share of the nation's economic spoils -- and political power.
"When the rate of return of return on capital exceeds the rate of growth of output and income, as it did in the nineteenth century and seems quite likely to do again in the twenty-first, capitalism automatically generates arbitrary and unsustainable inequalities that radically undermine the meritocratic values on which which democratic societies are based," Piketty writes.
"Capital" has sold about 48,000 hardcover copies and as many as 9,000 e-book versions, Harvard University Press told The Washington Post. But if you want a hardcover copy, you may be temporarily out of luck, at least if you're shopping at Amazon -- the book was out of stock at the retailer as of Wednesday afternoon.
The book may end up as one of those doorstoppers that people like to tote around as a way to demonstrate their seriousness. After all, despite its huge sales, the book has only 57 reviews on Amazon, which could indicate that people are buying it but either not cracking the cover or not finishing it. By comparison, Michael Lewis' "Flash Boys" -- No. 6 on Amazon's best-seller list -- has already drawn more than 600 reviews on Amazon, and it was released about three weeks later than Piketty's.
Nevertheless, Piketty's book is earning serious kudos in the economics and public-policy worlds, with the Nobel Prize-winning economist Paul Krugman calling it a "magnificent, sweeping meditation on inequality" in The New York Review of Books. |||||
It sounds like a bad joke: America’s liberals have fallen for a Marx-referencing, Balzac-loving French intellectual who has proposed a worldwide tax on wealth. If Thomas Piketty (pronounced “Tome-AH PEEK-et-ee”) were not traveling around the United States on a triumphant book tour, you might think Rush Limbaugh had made the man up in one of his more blustery rants.
Jordan Weissmann is Slate's senior business and economics correspondent.
But no, he is quite real. Capital in the Twenty-First Century, Piketty’s 685-page tome about the history and future of inequality, has improbably climbed to No. 1 on Amazon’s best-seller list. (The book’s title is only its first Marx allusion.) As of this writing, Capital is beating out such fare as the young adult hit The Fault in Our Stars and Michael Lewis’ Flash Boys. “The rock-star economist,” as New York magazine dubbed him, has also grabbed the interest of official Washington. While recently passing through D.C., he took a little time to meet with Treasury Secretary Jack Lew, the Council of Economic Advisers, and the IMF. Even Morning Joe, never exactly on the leading edge of ideas journalism, ran a segment about Capital Tuesday morning. I found out from my mother, who emailed to tell me the book sounded interesting.
That’s a tipping point.
Perhaps this shouldn’t be surprising. Piketty, a professor at the Paris School of Economics, has been perhaps the most important thinker on inequality of the past decade or so. We can thank him and his various collaborators, including Berkeley’s Emmanuel Saez and Oxford’s Anthony Atkinson, for the research that uncovered the rise of the top 1 percent in both the U.S. and Europe. Now, with his book, he’s handed liberals a coherent framework that justifies the discomfort that they probably already felt about the wealth gap.
Plenty of writers have already summarized Capital, but here’s a very quick review. Whereas Piketty’s past work has tended to focus on income—what workers and investors earn—the new book focuses on wealth: what we own. Using data reaching back to the 18th century, in the case of France, he argues that as economic growth slows in a country, the income generated by wealth balloons compared with income generated by work, and inequality skyrockets. This is because the return on wealth, such as a stock portfolio or real estate or even a factory, usually averages about 5 percent. If growth rates fall below that mark, the rich get richer. And over time, those who inherit great fortunes eventually come to dominate the economy. But the rest of us can respond, Piketty argues, by voting for redistributive policies. (That’s where his idea for a global wealth tax comes in. I think most of us Americans would be happy to see a hike on capital gains first.)
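To see the mechanics concretely, here is a minimal sketch of the divergence Piketty describes; the 5 percent return comes from the passage above, while the 1.5 percent growth rate is an assumed figure, and the sketch ignores saving out of labor income and consumption by the wealthy:

# Illustrative only: compound a private fortune at the return on wealth (r)
# and national income at the growth rate (g); when r > g, the fortune's
# size relative to national income rises without bound.
r = 0.05   # return on wealth, per the passage above
g = 0.015  # assumed rate of economic growth (illustrative)

wealth, income = 100.0, 100.0  # index both to 100 at year zero
for year in range(1, 91):
    wealth *= 1 + r
    income *= 1 + g
    if year % 30 == 0:
        print(f"year {year}: wealth/income ratio = {wealth / income:.1f}")

Under these assumptions the gap compounds to roughly 2.8 times after thirty years and about 21 times after ninety, the kind of runaway divergence the passage describes.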
Some argue we shouldn’t fret over inequality, because today’s global elite are the working, meritocratic rich: They earn their outsized pay thanks to their enormous technical and business talent. But Piketty’s research offers a simple retort. Today’s rich may have worked for their success, but tomorrow’s won’t have to. Already, Piketty argues, the very richest earn more income from their wealth than their labor. And just as the ruthless robber barons of the late 19th century gave way to F. Scott Fitzgerald’s boozing heirs and heiresses, today’s CEO’s and hedge fund managers will give way to a generation of children who simply won the birth lottery.
In his must-read review of Capital in the New York Review of Books, Paul Krugman writes that Piketty is offering a “unified field theory of inequality, one that integrates economic growth, the distribution of income between capital and labor, and the distribution of wealth and income among individuals into a single frame.” This is part of the inherent appeal. Conservatives have long had an easy framework for their economic ideas: The free market cures all. Liberals, instead of nebulously arguing that they’re fighting for the middle class, now have a touchstone that clearly argues they’re fighting against the otherwise inevitable rise of the Hiltons.
Capital will change the political conversation in a more subtle way as well, by focusing it on wealth, not income. Discussions about income can become very muddy, in part because Americans don’t like to begrudge a well-earned payday, and in part because it can be tricky to decide what should count as income. If you start adding health insurance and government transfers such as food stamps into the equation, as some do, the top 1 percent don’t dominate quite so severely.
Wealth is a different story. Americans don't like the idea of aristocrats—there's a reason campaigning politicians bring up family farms and steel mills, not Shelter Island vacation homes, when they run for office. Moreover, you can't save food stamps or a health plan, and because wealth only includes what you can save, it's a measure of who wins in the economy over the long term. ||||| Capital in the Twenty-First Century, by Thomas Piketty, translated from the French by Arthur Goldhammer. Belknap Press/Harvard University Press, 685 pp., $39.95
Thomas Piketty in his office at the Paris School of Economics, 2013 (photograph by Emmanuelle Marchadour)
Thomas Piketty, professor at the Paris School of Economics, isn’t a household name, although that may change with the English-language publication of his magnificent, sweeping meditation on inequality, Capital in the Twenty-First Century. Yet his influence runs deep. It has become a commonplace to say that we are living in a second Gilded Age—or, as Piketty likes to put it, a second Belle Époque—defined by the incredible rise of the “one percent.” But it has only become a commonplace thanks to Piketty’s work. In particular, he and a few colleagues (notably Anthony Atkinson at Oxford and Emmanuel Saez at Berkeley) have pioneered statistical techniques that make it possible to track the concentration of income and wealth deep into the past—back to the early twentieth century for America and Britain, and all the way to the late eighteenth century for France.
The result has been a revolution in our understanding of long-term trends in inequality. Before this revolution, most discussions of economic disparity more or less ignored the very rich. Some economists (not to mention politicians) tried to shout down any mention of inequality at all: “Of the tendencies that are harmful to sound economics, the most seductive, and in my opinion the most poisonous, is to focus on questions of distribution,” declared Robert Lucas Jr. of the University of Chicago, the most influential macroeconomist of his generation, in 2004. But even those willing to discuss inequality generally focused on the gap between the poor or the working class and the merely well-off, not the truly rich—on college graduates whose wage gains outpaced those of less-educated workers, or on the comparative good fortune of the top fifth of the population compared with the bottom four fifths, not on the rapidly rising incomes of executives and bankers.
It therefore came as a revelation when Piketty and his colleagues showed that incomes of the now famous “one percent,” and of even narrower groups, are actually the big story in rising inequality. And this discovery came with a second revelation: talk of a second Gilded Age, which might have seemed like hyperbole, was nothing of the kind. In America in particular the share of national income going to the top one percent has followed a great U-shaped arc. Before World War I the one percent received around a fifth of total income in both Britain and the United States. By 1950 that share had been cut by more than half. But since 1980 the one percent has seen its income share surge again—and in the United States it’s back to what it was a century ago.
Still, today’s economic elite is very different from that of the nineteenth century, isn’t it? Back then, great wealth tended to be inherited; aren’t today’s economic elite people who earned their position? Well, Piketty tells us that this isn’t as true as you think, and that in any case this state of affairs may prove no more durable than the middle-class society that flourished for a generation after World War II. The big idea of Capital in the Twenty-First Century is that we haven’t just gone back to nineteenth-century levels of income inequality, we’re also on a path back to “patrimonial capitalism,” in which the commanding heights of the economy are controlled not by talented individuals but by family dynasties.
It’s a remarkable claim—and precisely because it’s so remarkable, it needs to be examined carefully and critically. Before I get into that, however, let me say right away that Piketty has written a truly superb book. It’s a work that melds grand historical sweep—when was the last time you heard an economist invoke Jane Austen and Balzac?—with painstaking data analysis. And even though Piketty mocks the economics profession for its “childish passion for mathematics,” underlying his discussion is a tour de force of economic modeling, an approach that integrates the analysis of economic growth with that of the distribution of income and wealth. This is a book that will change both the way we think about society and the way we do economics.
1.
What do we know about economic inequality, and how do we know it? Until the Piketty revolution swept through the field, most of what we knew about income and wealth inequality came from surveys, in which randomly chosen households are asked to fill in a questionnaire, and their answers are tallied up to produce a statistical portrait of the whole. The international gold standard for such surveys is the annual survey conducted by the Census Bureau. The Federal Reserve also conducts a triennial survey of the distribution of wealth.
These two surveys are an essential guide to the changing shape of American society. Among other things, they have long pointed to a dramatic shift in the process of US economic growth, one that started around 1980. Before then, families at all levels saw their incomes grow more or less in tandem with the growth of the economy as a whole. After 1980, however, the lion’s share of gains went to the top end of the income distribution, with families in the bottom half lagging far behind.
Historically, other countries haven’t been equally good at keeping track of who gets what; but this situation has improved over time, in large part thanks to the efforts of the Luxembourg Income Study (with which I will soon be affiliated). And the growing availability of survey data that can be compared across nations has led to further important insights. In particular, we now know both that the United States has a much more unequal distribution of income than other advanced countries and that much of this difference in outcomes can be attributed directly to government action. European nations in general have highly unequal incomes from market activity, just like the United States, although possibly not to the same extent. But they do far more redistribution through taxes and transfers than America does, leading to much less inequality in disposable incomes.
Yet for all their usefulness, survey data have important limitations. They tend to undercount or miss entirely the income that accrues to the handful of individuals at the very top of the income scale. They also have limited historical depth. Even US survey data only take us back to 1947.
Enter Piketty and his colleagues, who have turned to an entirely different source of information: tax records. This isn’t a new idea. Indeed, early analyses of income distribution relied on tax data because they had little else to go on. Piketty et al. have, however, found ways to merge tax data with other sources to produce information that crucially complements survey evidence. In particular, tax data tell us a great deal about the elite. And tax-based estimates can reach much further into the past: the United States has had an income tax since 1913, Britain since 1909. France, thanks to elaborate estate tax collection and record-keeping, has wealth data reaching back to the late eighteenth century.
Exploiting these data isn’t simple. But by using all the tricks of the trade, plus some educated guesswork, Piketty is able to produce a summary of the fall and rise of extreme inequality over the course of the past century. It looks like Table 1 on this page.
As I said, describing our current era as a new Gilded Age or Belle Époque isn’t hyperbole; it’s the simple truth. But how did this happen?
2.
Piketty throws down the intellectual gauntlet right away, with his book’s very title: Capital in the Twenty-First Century. Are economists still allowed to talk like that?
It’s not just the obvious allusion to Marx that makes this title so startling. By invoking capital right from the beginning, Piketty breaks ranks with most modern discussions of inequality, and hearkens back to an older tradition.
The general presumption of most inequality researchers has been that earned income, usually salaries, is where all the action is, and that income from capital is neither important nor interesting. Piketty shows, however, that even today income from capital, not earnings, predominates at the top of the income distribution. He also shows that in the past—during Europe’s Belle Époque and, to a lesser extent, America’s Gilded Age—unequal ownership of assets, not unequal pay, was the prime driver of income disparities. And he argues that we’re on our way back to that kind of society. Nor is this casual speculation on his part. For all that Capital in the Twenty-First Century is a work of principled empiricism, it is very much driven by a theoretical frame that attempts to unify discussion of economic growth and the distribution of both income and wealth. Basically, Piketty sees economic history as the story of a race between capital accumulation and other factors driving growth, mainly population growth and technological progress.
To be sure, this is a race that can have no permanent victor: over the very long run, the stock of capital and total income must grow at roughly the same rate. But one side or the other can pull ahead for decades at a time. On the eve of World War I, Europe had accumulated capital worth six or seven times national income. Over the next four decades, however, a combination of physical destruction and the diversion of savings into war efforts cut that ratio in half. Capital accumulation resumed after World War II, but this was a period of spectacular economic growth—the Trente Glorieuses, or “Glorious Thirty” years; so the ratio of capital to income remained low. Since the 1970s, however, slowing growth has meant a rising capital ratio, so capital and wealth have been trending steadily back toward Belle Époque levels. And this accumulation of capital, says Piketty, will eventually recreate Belle Époque–style inequality unless opposed by progressive taxation.
Why? It’s all about r versus g—the rate of return on capital versus the rate of economic growth.
Just about all economic models tell us that if g falls—which it has since 1970, a decline that is likely to continue due to slower growth in the working-age population and slower technological progress—r will fall too. But Piketty asserts that r will fall less than g. This doesn’t have to be true. However, if it’s sufficiently easy to replace workers with machines—if, to use the technical jargon, the elasticity of substitution between capital and labor is greater than one—slow growth, and the resulting rise in the ratio of capital to income, will indeed widen the gap between r and g. And Piketty argues that this is what the historical record shows will happen.
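One standard way to formalize that elasticity condition (a sketch consistent with the passage, not a quotation of Piketty's own algebra; the symbols \alpha, K, Y, and \sigma are the usual textbook notation, introduced here for illustration): if capital is paid its marginal product under a CES production technology, then

\alpha = r \cdot \frac{K}{Y}, \qquad r \propto \left(\frac{K}{Y}\right)^{-1/\sigma}, \qquad \text{so} \qquad \alpha \propto \left(\frac{K}{Y}\right)^{(\sigma-1)/\sigma}.

Here \alpha is capital's share of income, K/Y is the capital-to-income ratio, and \sigma is the elasticity of substitution between capital and labor. If \sigma > 1, the exponent is positive: the return r falls less than proportionally as K/Y rises, so the slow-growth accumulation described above delivers a rising capital share. If \sigma < 1, the same accumulation would instead shrink capital's share.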
If he’s right, one immediate consequence will be a redistribution of income away from labor and toward holders of capital. The conventional wisdom has long been that we needn’t worry about that happening, that the shares of capital and labor respectively in total income are highly stable over time. Over the very long run, however, this hasn’t been true. In Britain, for example, capital’s share of income—whether in the form of corporate profits, dividends, rents, or sales of property, for example—fell from around 40 percent before World War I to barely 20 percent circa 1970, and has since bounced roughly halfway back. The historical arc is less clear-cut in the United States, but here, too, there is a redistribution in favor of capital underway. Notably, corporate profits have soared since the financial crisis began, while wages—including the wages of the highly educated—have stagnated.
A rising share of capital, in turn, directly increases inequality, because ownership of capital is always much more unequally distributed than labor income. But the effects don’t stop there, because when the rate of return on capital greatly exceeds the rate of economic growth, “the past tends to devour the future”: society inexorably tends toward dominance by inherited wealth.
Consider how this worked in Belle Époque Europe. At the time, owners of capital could expect to earn 4–5 percent on their investments, with minimal taxation; meanwhile economic growth was only around one percent. So wealthy individuals could easily reinvest enough of their income to ensure that their wealth and hence their incomes were growing faster than the economy, reinforcing their economic dominance, even while skimming enough off to live lives of great luxury.
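A back-of-the-envelope version of that arithmetic, with the 5 percent return and 1 percent growth taken from the passage and the share of capital income spent on luxury an assumed figure:

# Illustrative: a Belle Epoque rentier can consume most of his capital
# income and still see his fortune outgrow the economy, because r >> g.
r, g = 0.05, 0.01   # return on wealth vs. economic growth, from the passage
luxury_share = 0.6  # assumed: 60% of capital income is spent, not reinvested

wealth_growth = r * (1 - luxury_share)  # the fortune still grows 2% a year
print(f"fortune grows {wealth_growth:.0%} a year vs. {g:.0%} for the economy")
print(f"relative gain over 40 years: {((1 + wealth_growth) / (1 + g)) ** 40:.2f}x")

Even after spending three of the five points of return, the fortune gains roughly 50 percent on the economy over an adult lifetime, which is how dominance could be reinforced while still skimming enough off for lives of great luxury.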
And what happened when these wealthy individuals died? They passed their wealth on—again, with minimal taxation—to their heirs. Money passed on to the next generation accounted for 20 to 25 percent of annual income; the great bulk of wealth, around 90 percent, was inherited rather than saved out of earned income. And this inherited wealth was concentrated in the hands of a very small minority: in 1910 the richest one percent controlled 60 percent of the wealth in France; in Britain, 70 percent.
No wonder, then, that nineteenth-century novelists were obsessed with inheritance. Piketty discusses at length the lecture that the scoundrel Vautrin gives to Rastignac in Balzac’s Père Goriot, whose gist is that a most successful career could not possibly deliver more than a fraction of the wealth Rastignac could acquire at a stroke by marrying a rich man’s daughter. And it turns out that Vautrin was right: being in the top one percent of nineteenth-century heirs and simply living off your inherited wealth gave you around two and a half times the standard of living you could achieve by clawing your way into the top one percent of paid workers.
You might be tempted to say that modern society is nothing like that. In fact, however, both capital income and inherited wealth, though less important than they were in the Belle Époque, are still powerful drivers of inequality—and their importance is growing. In France, Piketty shows, the inherited share of total wealth dropped sharply during the era of wars and postwar fast growth; circa 1970 it was less than 50 percent. But it's now back up to 70 percent, and rising. Correspondingly, there has been a fall and then a rise in the importance of inheritance in conferring elite status: the living standard of the top one percent of heirs fell below that of the top one percent of earners between 1910 and 1950, but began rising again after 1970. It's not all the way back to Rastignac levels, but once again it's generally more valuable to have the right parents (or to marry into having the right in-laws) than to have the right job.
And this may only be the beginning. Figure 1 on this page shows Piketty’s estimates of global r and g over the long haul, suggesting that the era of equalization now lies behind us, and that the conditions are now ripe for the reestablishment of patrimonial capitalism.
Given this picture, why does inherited wealth play as small a part in today’s public discourse as it does? Piketty suggests that the very size of inherited fortunes in a way makes them invisible: “Wealth is so concentrated that a large segment of society is virtually unaware of its existence, so that some people imagine that it belongs to surreal or mysterious entities.” This is a very good point. But it’s surely not the whole explanation. For the fact is that the most conspicuous example of soaring inequality in today’s world—the rise of the very rich one percent in the Anglo-Saxon world, especially the United States—doesn’t have all that much to do with capital accumulation, at least so far. It has more to do with remarkably high compensation and incomes.
3.
Capital in the Twenty-First Century is, as I hope I’ve made clear, an awesome work. At a time when the concentration of wealth and income in the hands of a few has resurfaced as a central political issue, Piketty doesn’t just offer invaluable documentation of what is happening, with unmatched historical depth. He also offers what amounts to a unified field theory of inequality, one that integrates economic growth, the distribution of income between capital and labor, and the distribution of wealth and income among individuals into a single frame.
And yet there is one thing that slightly detracts from the achievement—a sort of intellectual sleight of hand, albeit one that doesn’t actually involve any deception or malfeasance on Piketty’s part. Still, here it is: the main reason there has been a hankering for a book like this is the rise, not just of the one percent, but specifically of the American one percent. Yet that rise, it turns out, has happened for reasons that lie beyond the scope of Piketty’s grand thesis.
Piketty is, of course, too good and too honest an economist to try to gloss over inconvenient facts. “US inequality in 2010,” he declares, “is quantitatively as extreme as in old Europe in the first decade of the twentieth century, but the structure of that inequality is rather clearly different.” Indeed, what we have seen in America and are starting to see elsewhere is something “radically new”—the rise of “supersalaries.”
Capital still matters; at the very highest reaches of society, income from capital still exceeds income from wages, salaries, and bonuses. Piketty estimates that the increased inequality of capital income accounts for about a third of the overall rise in US inequality. But wage income at the top has also surged. Real wages for most US workers have increased little if at all since the early 1970s, but wages for the top one percent of earners have risen 165 percent, and wages for the top 0.1 percent have risen 362 percent. If Rastignac were alive today, Vautrin might concede that he could in fact do as well by becoming a hedge fund manager as he could by marrying wealth.
What explains this dramatic rise in earnings inequality, with the lion’s share of the gains going to people at the very top? Some US economists suggest that it’s driven by changes in technology. In a famous 1981 paper titled “The Economics of Superstars,” the Chicago economist Sherwin Rosen argued that modern communications technology, by extending the reach of talented individuals, was creating winner-take-all markets in which a handful of exceptional individuals reap huge rewards, even if they’re only modestly better at what they do than far less well paid rivals.
Piketty is unconvinced. As he notes, conservative economists love to talk about the high pay of performers of one kind or another, such as movie and sports stars, as a way of suggesting that high incomes really are deserved. But such people actually make up only a tiny fraction of the earnings elite. What one finds instead is mainly executives of one sort or another—people whose performance is, in fact, quite hard to assess or give a monetary value to.
Who determines what a corporate CEO is worth? Well, there’s normally a compensation committee, appointed by the CEO himself. In effect, Piketty argues, high-level executives set their own pay, constrained by social norms rather than any sort of market discipline. And he attributes skyrocketing pay at the top to an erosion of these norms. In effect, he attributes soaring wage incomes at the top to social and political rather than strictly economic forces.
Now, to be fair, he then advances a possible economic analysis of changing norms, arguing that falling tax rates for the rich have in effect emboldened the earnings elite. When a top manager could expect to keep only a small fraction of the income he might get by flouting social norms and extracting a very large salary, he might have decided that the opprobrium wasn’t worth it. Cut his marginal tax rate drastically, and he may behave differently. And as more and more of the supersalaried flout the norms, the norms themselves will change.
There’s a lot to be said for this diagnosis, but it clearly lacks the rigor and universality of Piketty’s analysis of the distribution of and returns to wealth. Also, I don’t think Capital in the Twenty-First Century adequately answers the most telling criticism of the executive power hypothesis: the concentration of very high incomes in finance, where performance actually can, after a fashion, be evaluated. I didn’t mention hedge fund managers idly: such people are paid based on their ability to attract clients and achieve investment returns. You can question the social value of modern finance, but the Gordon Gekkos out there are clearly good at something, and their rise can’t be attributed solely to power relations, although I guess you could argue that willingness to engage in morally dubious wheeling and dealing, like willingness to flout pay norms, is encouraged by low marginal tax rates.
Overall, I’m more or less persuaded by Piketty’s explanation of the surge in wage inequality, though his failure to include deregulation is a significant disappointment. But as I said, his analysis here lacks the rigor of his capital analysis, not to mention its sheer, exhilarating intellectual elegance.
Yet we shouldn’t overreact to this. Even if the surge in US inequality to date has been driven mainly by wage income, capital has nonetheless been significant too. And in any case, the story looking forward is likely to be quite different. The current generation of the very rich in America may consist largely of executives rather than rentiers, people who live off accumulated capital, but these executives have heirs. And America two decades from now could be a rentier-dominated society even more unequal than Belle Époque Europe.
But this doesn’t have to happen.
4.
At times, Piketty almost seems to offer a deterministic view of history, in which everything flows from the rates of population growth and technological progress. In reality, however, Capital in the Twenty-First Century makes it clear that public policy can make an enormous difference, that even if the underlying economic conditions point toward extreme inequality, what Piketty calls “a drift toward oligarchy” can be halted and even reversed if the body politic so chooses.
The key point is that when we make the crucial comparison between the rate of return on wealth and the rate of economic growth, what matters is the after-tax return on wealth. So progressive taxation—in particular taxation of wealth and inheritance—can be a powerful force limiting inequality. Indeed, Piketty concludes his masterwork with a plea for just such a form of taxation. Unfortunately, the history covered in his own book does not encourage optimism.
It’s true that during much of the twentieth century strongly progressive taxation did indeed help reduce the concentration of income and wealth, and you might imagine that high taxation at the top is the natural political outcome when democracy confronts high inequality. Piketty, however, rejects this conclusion; the triumph of progressive taxation during the twentieth century, he contends, was “an ephemeral product of chaos.” Absent the wars and upheavals of Europe’s modern Thirty Years’ War, he suggests, nothing of the kind would have happened.
As evidence, he offers the example of France’s Third Republic. The Republic’s official ideology was highly egalitarian. Yet wealth and income were nearly as concentrated, economic privilege almost as dominated by inheritance, as they were in the aristocratic constitutional monarchy across the English Channel. And public policy did almost nothing to oppose the economic domination by rentiers: estate taxes, in particular, were almost laughably low.
Why didn’t the universally enfranchised citizens of France vote in politicians who would take on the rentier class? Well, then as now great wealth purchased great influence—not just over policies, but over public discourse. Upton Sinclair famously declared that “it is difficult to get a man to understand something when his salary depends on his not understanding it.” Piketty, looking at his own nation’s history, arrives at a similar observation: “The experience of France in the Belle Époque proves, if proof were needed, that no hypocrisy is too great when economic and financial elites are obliged to defend their interest.”
The same phenomenon is visible today. In fact, a curious aspect of the American scene is that the politics of inequality seem if anything to be running ahead of the reality. As we’ve seen, at this point the US economic elite owes its status mainly to wages rather than capital income. Nonetheless, conservative economic rhetoric already emphasizes and celebrates capital rather than labor—“job creators,” not workers.
In 2012 Eric Cantor, the House majority leader, chose to mark Labor Day—Labor Day!—with a tweet honoring business owners:
Today, we celebrate those who have taken a risk, worked hard, built a business and earned their own success.
Perhaps chastened by the reaction, he reportedly felt the need to remind his colleagues at a subsequent GOP retreat that most people don’t own their own businesses—but this in itself shows how thoroughly the party identifies itself with capital to the virtual exclusion of labor.
Nor is this orientation toward capital just rhetorical. Tax burdens on high-income Americans have fallen across the board since the 1970s, but the biggest reductions have come on capital income—including a sharp fall in corporate taxes, which indirectly benefits stockholders—and inheritance. Sometimes it seems as if a substantial part of our political class is actively working to restore Piketty’s patrimonial capitalism. And if you look at the sources of political donations, many of which come from wealthy families, this possibility is a lot less outlandish than it might seem.
Piketty ends Capital in the Twenty-First Century with a call to arms—a call, in particular, for wealth taxes, global if possible, to restrain the growing power of inherited wealth. It’s easy to be cynical about the prospects for anything of the kind. But surely Piketty’s masterly diagnosis of where we are and where we’re heading makes such a thing considerably more likely. So Capital in the Twenty-First Century is an extremely important book on all fronts. Piketty has transformed our economic discourse; we’ll never talk about wealth and inequality the same way we used to. | Thomas Piketty's book Capital in the Twenty-First Century may not be a quick read, but it's flying off the (digital) shelves. What makes a 685-page book on economic history, translated from French, Amazon's bestseller? Well, the reviews have been great; Paul Krugman calls it "truly superb" in the New York Review of Books. And it deals with an issue that speaks to Americans, writes Aimee Picchi at CBS News: income inequality. Piketty argues that the problem is tied to return on capital surpassing the economy's growth rate. In other words, as Jordan Weissmann writes at Slate, "as economic growth slows in a country, the income generated by wealth balloons compared with income generated by work, and inequality skyrockets." Piketty, Weissmann notes, "has been perhaps the most important thinker on inequality of the past decade or so." His work has been instrumental in documenting the oft-discussed wealthiest 1%. While the book has been a sales success, however, it hasn't generated many Amazon reviews, Picchi points out; that could mean the readers who are buying it aren't actually getting around to reading the giant thing. (Another weird recent bestseller: Mein Kampf.) |
The rental assistance programs authorized under Section 8 of the United States Housing Act of 1937 (42 U.S.C. §1437f)—Section 8 project-based rental assistance and Section 8 tenant-based vouchers—have become the largest components of the Department of Housing and Urban Development's (HUD) budget, with combined appropriations of $27 billion in FY2013. The rising cost of providing rental assistance is due, in varying degrees, to expansions in the program, the cost of renewing expiring long-term contracts, and rising costs in housing markets across the country. The most rapid cost increases have been seen in the voucher program. Partly out of concern about cost increases, and partly in response to the administrative complexity of the current program, there have been calls for reform of the voucher program and its funding each year since 2002. In response, Congress has enacted changes to the way that it funds the voucher program and the way that PHAs receive their funding. Congress has considered program reforms, but has not enacted them. In order to understand why the program has become so expensive and why reforms are being considered, it is first important to understand the mechanics of the program and its history. This paper will provide an overview of the Section 8 programs and their history. For more information, see CRS Report RL33929, The Section 8 Voucher Renewal Funding Formula: Changes in Appropriations Acts ; CRS Report RL34002, Section 8 Housing Choice Voucher Program: Issues and Reform Proposals ; and CRS Report R41182, Preservation of HUD-Assisted Housing , by [author name scrubbed] and [author name scrubbed]. From 1937 until 1965, public housing and the subsidized mortgage insurance programs of the Federal Housing Administration (FHA) were the country's main forms of federal housing assistance. As problems with the public housing and other bricks and mortar federal housing construction programs (such as Section 235 and Section 236 of the National Housing Act) arose—particularly their high cost—interest grew in alternative forms of housing assistance. In 1965, a new approach was adopted (P.L. 89-117). The Section 23 program assisted low-income families residing in leased housing by permitting a public housing authority (PHA) to lease existing housing units in the private market and sublease them to low-income and very low-income families at below-market rents. However, the Section 23 program did not ameliorate the growing problems with HUD's housing construction programs and interest remained in developing and testing new approaches. The Experimental Housing Allowance Program is one example of such an alternative approach. Due to criticisms about cost, profiteering, and slumlord practices in federal housing programs, President Nixon declared a moratorium on all existing federal housing programs, including Section 23, in 1973. During the moratorium, HUD revised the Section 23 program and sought to make it the main assisted housing program of the federal government. However, at the same time, Congress was considering several options for restructuring subsidized housing programs. After all the debates and discussions that typically precede the passage of authorizing legislation were completed, Congress voted in favor of a new leased housing approach, and the Section 8 program was created. The Section 8 program is named for Section 8 of the United States Housing Act of 1937. The original program, established by the Housing and Community Development Act of 1974 ( P.L. 
93-383 ), consisted of three parts: new construction, substantial rehabilitation, and existing housing certificates. The 1974 Act and the creation of Section 8 effectively ended the Nixon moratorium. In 1978, the moderate rehabilitation component of the program was added, but it has not been funded since 1989. In 1983, the new construction and substantial rehabilitation portions of the program were repealed, and a new component—Section 8 vouchers—was added. In 1998, existing housing certificates were merged with and converted to vouchers. Under the new construction and substantial rehabilitation components of the early Section 8 program, HUD entered into long-term (20- or 40-year) contracts with private for-profit, non-profit, or public organizations that were willing to construct new units or rehabilitate older ones to house low- and very low-income tenants. Under those contracts, HUD agreed to make assistance payments toward each unit for the duration of the contract. Those assistance payments were subsidies that allowed tenants residing in the units to pay 25% (later raised to 30%) of their adjusted income as rent. The program was responsible for the construction and rehabilitation of a large number of units. Over 1.2 million units of housing with Section 8 contracts that originated under the new construction and substantial rehabilitation program still receive payments today. By the early 1980s, because of the rising costs of rent and construction, the amount of budget authority needed for the Section 8 rental assistance program had been steadily increasing while the number of units produced in a year had been decreasing. At the same time, studies emerged showing that providing subsidies for use in newly constructed or substantially rehabilitated housing was more expensive than the cost of providing subsidies in existing units of housing. Also, because contracts were written for such long terms, appropriators had to provide large amounts of budget authority each time they funded a new contract (see below for an illustration of the implication of long-term contracts). As the budget deficit grew, Members of Congress became concerned with the high costs associated with Section 8 new construction and substantial rehabilitation, and these segments of the Section 8 program were repealed in the Housing and Urban-Rural Recovery Act of 1983 ( P.L. 98-181 ). The Housing and Community Development Amendments of 1978 ( P.L. 95-557 ) added the moderate rehabilitation component to the Section 8 program, which expanded Section 8 rental assistance to projects that were in need of repairs costing at least $1,000 per unit to make the housing decent, safe, and sanitary. Over the next 10 years, however, this component of the program was fraught with allegations of abuse; the process of awarding contracts was considered unfair and politicized. Calls for reform of the moderate rehabilitation program led to its suspension. It has not been funded since 1989. The existing housing certificate component of the Section 8 program was created in the beginning of the Section 8 program and continued until 1998. Under the existing housing certificate program, PHAs and HUD would enter into an Annual Contributions Contract (ACC) for the number of units that would be available to receive assistance. Contracts were originally written for five years and were renewable, at HUD's discretion, for up to 15 years. 
In the contract, HUD agreed to pay the difference between the tenant's rental payment and the contract rent of a unit. The contract rent was generally limited to the HUD-set Fair Market Rent (FMR) for the area. After entering into a contract with HUD, PHAs would advertise the availability of certificates for low-income tenants. The existing housing certificate program was primarily tenant-based, meaning that the assistance was attached to the tenant. Families selected to receive assistance were given certificates as proof of eligibility for the program; with their certificates, families could look for suitable housing in the private market. Housing was considered suitable if it rented for the FMR or less and met Housing Quality Standards (HQS). Once the household found a unit, they signed a lease and agreed to pay 30% of their adjusted income for rent. The remainder of the rent was paid by HUD to the landlord on behalf of the tenant. If a family vacated a unit in violation of the lease, HUD had to make rental payments to the landlord for the remainder of the month in which the family vacated, and pay 80% of the contract rent for an additional month. If the family left the unit at the end of their lease, they could take their certificate with them and use it for their next home. HUD also paid the PHA an administrative fee for managing the program. The amount of this administrative fee was set by Congress in appropriations legislation each year. PHAs were permitted to use up to 15% of their Section 8 certificates for project-based housing. In project-based Section 8 existing housing, the subsidy was attached to the unit, which was selected by the PHA, and not to the tenant. This meant that when a tenant vacated a unit, another eligible tenant would be able to occupy it, and HUD would subsidize the rent as long as a contract was in effect between the PHA and the owner. In 1998, the Quality Housing and Work Opportunity Reconciliation Act (QHWRA) ( P.L. 105-276 ) merged the Section 8 existing housing certificate program with the voucher program (see below) and converted all certificates to vouchers, effectively ending the Section 8 existing housing certificate program. The largest component of today's Section 8 program, the voucher program, was first authorized by the Housing and Urban-Rural Recovery Act of 1983 ( P.L. 98-181 ). It was originally a demonstration program, but was made permanent in 1988. Like the Section 8 existing housing certificate program, the voucher program is administered by PHAs and is tenant-based, with a project-based component. However, under the voucher program, families can pay more of their incomes toward rent and lease apartments with rents higher than FMR. Today's Section 8 program is really two programs, which, combined, serve almost 3.5 million households. The first program under Section 8 can be characterized as Section 8 project-based rental assistance. This program includes units created under the new construction, substantial rehabilitation, and moderate rehabilitation components of the earlier Section 8 program that are still under contract with HUD. Although no new construction, substantial rehabilitation, or moderate rehabilitation contracts have been created for a number of years, about 1.3 million of these units are still funded under multiyear contracts that have not yet expired and do not require any new appropriations, or multiyear contracts that had expired and are renewed annually, requiring new appropriations. 
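The budget-authority problem flagged earlier ("see below for an illustration of the implication of long-term contracts") is easiest to see with numbers. The report's original illustration box does not survive in this extract, so the following is a minimal stand-in; the $500 monthly per-unit subsidy is an assumed figure:

# Illustrative: a multiyear contract obligates its full cost as budget
# authority when it is signed, while an annually renewed contract needs
# only one year of funding at a time.
monthly_subsidy = 500               # assumed per-unit subsidy
annual_cost = monthly_subsidy * 12  # $6,000 per unit per year

for term_years in (1, 5, 20, 40):
    print(f"{term_years:>2}-year contract: ${annual_cost * term_years:,} per unit obligated up front")

Under these assumptions, a 40-year new construction contract required $240,000 of budget authority per unit at signing, versus $6,000 for a unit funded one year at a time, which is why long-term contracts loomed so large as the budget deficit grew.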
Families that live in Section 8 project-based units pay 30% of their incomes toward rent. In order to be eligible, families must be low-income; however, at least 40% of all units that become available each year must be rented to extremely low-income families. If a family leaves the unit, the owner will continue to receive payments as long as he or she can move another eligible family into the unit. Owners of properties with project-based Section 8 rental assistance receive a subsidy from HUD, called a Housing Assistance Payment (HAP). HAP payments are equal to the difference between the tenant's payments (30% of income) and a contract rent, which is agreed to between HUD and the landlord. Contract rents are meant to be comparable to rents in the local market, and are typically adjusted annually by an inflation factor established by HUD or on the basis of the project's operating costs. Project-based Section 8 contracts are managed by contract administrators. While some HUD regional offices still serve as contract administrators, the Department's goal is to contract the function out entirely to outside entities, including state housing finance agencies, PHAs, or private entities. When project-based HAP contracts expire, the landlord can choose to either renew the contract with HUD for up to five years at a time (subject to annual appropriations) or convert the units to market rate. In some cases, landlords can choose to "opt out" of Section 8 contracts early. When an owner terminates an HAP contract with HUD, either through opt-out or expiration, the tenants in the building are provided with enhanced vouchers designed to allow them to stay in their units (see the discussion of "Tenant Protection or Enhanced Vouchers" below). In 2010, about 51% of the households that lived in project-based Section 8 units were headed by persons who were elderly, about 17% were headed by persons who were non-elderly disabled, and about 33% were headed by persons who were not elderly and not disabled. The median income of households living in project-based Section 8 units was a little more than $10,000 per year. When QHWRA merged the voucher and certificate programs in 1998, it renamed the voucher component of the Section 8 program the Housing Choice Voucher program. The voucher program is funded in HUD's budget through the tenant-based rental assistance account. The federal government currently funds more than 2 million Section 8 Housing Choice Vouchers. PHAs administer the program and receive an annual budget from HUD. Each has a fixed number of vouchers that it is permitted to administer, and each is paid administrative fees. Vouchers are tenant-based in nature, meaning that the subsidy is tied to the family, rather than to a unit of housing. In order to be eligible, a family must be very low-income (50% or below area median income (AMI)), although 75% of all vouchers must be given to extremely low-income families (30% or below AMI). To illustrate the regional variation in these definitions of low-income and their relationship to federal definitions of poverty, Table 4 compares HUD's income definitions to the Department of Health and Human Services (HHS) poverty guidelines for several geographic areas in 2013. Note that HHS poverty guidelines are uniform in all parts of the country (except for Alaska and Hawaii, not shown in the following table). Families who receive vouchers use them to subsidize their rents in private market apartments.
Once an eligible family receives an available voucher, the family must find an eligible unit. In order to be eligible, a unit must meet minimum housing quality standards (HQS) and cost less than 40% of the family's income plus the HAP paid by the PHA. The HAP paid by the PHA for tenant-based vouchers, like the HAP paid for Section 8 project-based rental assistance, is capped; however, with tenant-based vouchers, PHAs have the flexibility to set their caps anywhere between 90% and 110% of FMR (up to 120% FMR with prior HUD approval). The cap set by the PHA is called the payment standard. Once a family finds an eligible unit, the family signs a lease with the landlord, and the PHA signs a housing assistance payments contract with the landlord. The PHA will pay the HAP (the payment standard minus 30% of the family's income), and the family will pay the difference between the HAP and the rent (which must total between 30% and 40% of the family's income). After the first year, a family can choose to pay more than 40% of their income towards rent. PHAs may also choose to adopt minimum rents, which cannot exceed $50. (See box below for an example.) Once a family is using a voucher, the family can retain the voucher as long as the PHA has adequate funding for it and the family complies with PHA and program requirements. If a family wants to move, the tenant-based voucher can move with the family. Once the family moves to a new area, the two PHAs (the PHA that originally issued the voucher and the PHA that administers vouchers in the new area) negotiate regarding who will continue to administer the voucher. The voucher program does not contain any mandatory time limits. Families exit the program in one of three ways: by their own choice, through non-compliance with program rules (including non-payment of rent), or because they no longer qualify for a subsidy. Families no longer qualify for a subsidy when their incomes, which must be recertified annually, have risen to the point that 30% of that income is equal to rent. At that point the HAP payment will be zero and the family will no longer receive any subsidy. Unlike the project-based Section 8 program, the majority of households receiving vouchers are headed by a person who is not elderly and not disabled. In 2010, about 19% of the households with Section 8 vouchers were elderly households, about 28% were disabled households, and about 53% were non-elderly, non-disabled households with children. The median income of households with vouchers was just over $10,400 per year. Vouchers, like Section 8 existing housing certificates, can be project-based. In order to project-base vouchers, a landlord must sign a contract with a PHA agreeing to set aside up to 25% of the units in a development for low-income families. Each of those set-aside units will receive voucher assistance as long as a family that is eligible for a voucher lives there. Families that live in a project-based voucher unit pay 30% of their adjusted household income toward rent, and HUD pays the difference between 30% of household income and a reasonable rent agreed to by both the landlord and HUD. PHAs can choose to project-base up to 20% of their vouchers. Project-based vouchers are portable; after one year, a family with a project-based voucher can convert to a tenant-based voucher and then move, as long as a tenant-based voucher is available.
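The example box referenced above does not survive in this text, so here is a minimal stand-in that follows the payment rules just described; the income, FMR, payment standard, and rent figures are all assumed (the income is near the program median cited above):

# Illustrative voucher arithmetic under the rules described above.
annual_income = 10_400              # assumed adjusted income, near the program median
monthly_income = annual_income / 12
tenant_contribution = 0.30 * monthly_income  # tenant's baseline share: 30% of income

fmr = 1_000                         # assumed Fair Market Rent for the area
payment_standard = 1.00 * fmr       # the PHA may set this at 90%-110% of FMR

hap = payment_standard - tenant_contribution    # subsidy the PHA pays the landlord
max_initial_rent = hap + 0.40 * monthly_income  # a unit is eligible up to this rent

rent = 1_050                        # assumed gross rent of the chosen unit
family_share = rent - hap
print(f"HAP paid by the PHA: ${hap:,.0f} per month")
print(f"family pays: ${family_share:,.0f} ({family_share / monthly_income:.0%} of income)")
print(f"maximum eligible rent: ${max_initial_rent:,.0f}")

Under these assumptions the PHA pays $740 a month and the family pays $310, about 36 percent of income, inside the 30-to-40-percent band that makes the unit eligible; the $1,050 rent also comes in under the roughly $1,087 eligibility ceiling.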
Another type of voucher, called a tenant protection voucher, is given to families that were already receiving assistance through another HUD housing program before being displaced. Examples of instances when families receive tenant protection vouchers include when public housing is demolished or when a landlord has terminated a Section 8 project-based rental assistance contract. Families that risk being displaced from project-based Section 8 units are eligible to receive a special form of tenant protection voucher, called an enhanced voucher. The "enhanced" feature of the voucher allows the maximum value of the voucher to grow to be equal to the new rent charged in the property, as long as it is reasonable in the market, even if it is higher than the PHA's payment standard. Enhanced vouchers are designed to allow families to stay in their homes. If the family chooses to move, then the enhanced feature is lost and the voucher becomes subject to the PHA's normal payment standard. The voucher program also has several special programs or uses. These include family unification vouchers, vouchers for homeless veterans, and vouchers used for homeownership. Family unification vouchers are given to families for whom the lack of adequate housing is a primary factor in the separation, or threat of imminent separation, of children from their families or in preventing the reunification of the children with their families. HUD has awarded over 38,600 family unification vouchers to PHAs since the inception of the program. Beginning in 1992, through collaboration between HUD and the VA, Section 8 vouchers have been made available for use by homeless veterans with severe psychiatric or substance abuse disorders. Through the program, called HUD-VA Supportive Housing (HUD-VASH), PHAs administer the Section 8 vouchers while local VA medical centers provide case management and clinical services to participating veterans. While there are no specifically authorized "homeownership vouchers," since 2000 certain families have been eligible to use their vouchers to help pay for the monthly costs associated with homeownership. Eligible families must work full-time or be elderly or disabled, be first-time homebuyers, and agree to complete first-time homebuyer counseling. PHAs can decide whether to run a homeownership program, and an increasing number of PHAs are choosing to do so. According to HUD's website, nearly 13,000 families have closed on homes using vouchers. The Family Self-Sufficiency (FSS) program was established by Congress as a part of the National Affordable Housing Act of 1990 ( P.L. 101-625 ). The purpose of the program is to promote coordination between the voucher program and other private and public resources to enable families on public assistance to achieve economic self-sufficiency. Families who participate in the program sign five-year contracts in which they agree to work toward leaving public assistance. While in the program, families can increase their incomes without increasing the amount they contribute toward rent. The difference between what the family paid in rent before joining the program and what they would owe as their income increases is deposited into an escrow account that the family can access upon completion of the contract. For example: if a family with a welfare benefit of $450 per month begins working, earning $800 per month, the family's contribution towards rent increases from $135 per month to $240 per month.
PHAs receive funding for FSS coordinators, who help families with vouchers connect with services, including job training, child care, transportation, and education. In 2012, HUD funded the salaries of over 1,100 FSS coordinators in the voucher program, serving nearly 48,000 enrolled families.

The Moving to Work (MTW) demonstration, authorized in 1996 (P.L. 104-134), was created to give HUD and PHAs the flexibility to design and test various approaches for providing and administering housing assistance. The demonstration directed HUD to select up to 30 PHAs to participate. The goals were to reduce federal costs, provide work incentives to families, and expand housing choice. MTW allows participating PHAs greater flexibility in determining how to use federal Section 8 voucher and public housing funds, permitting them to blend funding sources and experiment with rent rules, with the constraint that they must continue to serve approximately the same number of households. It also permits them to seek exemption from most public housing and Housing Choice Voucher program rules. (For more information, see CRS Report R42562, Moving to Work (MTW): Housing Assistance Demonstration Program, by [author name scrubbed].)

The existing MTW program, while called a demonstration, was not implemented in a way that would allow it to be effectively evaluated. As a result, there is not enough information about the different reforms adopted by MTW agencies to judge their effectiveness. However, there is some information available about how PHAs are using the flexibility provided under MTW, and participating agencies have used it differently. Some have made minor changes to their existing Section 8 voucher and public housing programs, such as limiting reporting requirements; others have implemented full funding fungibility between their public housing and voucher programs and significantly altered their eligibility and rent policies. Some have adopted time limit and work requirement policies similar to those enacted in the 1996 welfare reform law. An evaluation of MTW published in January 2004 reported:

The local flexibility and independence permitted under MTW appears to allow strong, creative [P]HAs to experiment with innovative solutions to local challenges, and to be more responsive to local conditions and priorities than is often possible where federal program requirements limit the opportunity for variation. But allowing local variation poses risks as well as provides potential benefits. Under MTW, some [P]HAs, for instance, made mistakes that reduced the resources available to address low-income housing needs, and some implemented changes that disadvantaged particular groups of needy households currently served under federal program rules. Moreover, some may object to the likelihood that allowing significant variation across [P]HAs inevitably results in some loss of consistency across communities.

The Moving to Opportunity Fair Housing Demonstration (MTO) was authorized in 1992 (P.L. 102-550, P.L. 102-139). MTO combined housing counseling and services with tenant-based vouchers to help very low-income families with children move to areas with low concentrations of poverty.
The experimental demonstration was designed to test the premise that changes in an individual's neighborhood environment can change his or her life chances. Participating families were selected between 1994 and 1998 and followed for at least 10 years. Interim results found that families who moved to lower-poverty areas saw some improvements in housing quality, neighborhood conditions, safety, and adult health. Mixed effects were found on youth health, delinquency, and engagement in risky behavior: girls demonstrated positive effects from the move to a lower-poverty neighborhood, while boys showed negative effects. No impacts were found on child achievement or schooling, or on adult employment, earnings, or receipt of public assistance. (For more information, see CRS Report R42832, Choice and Mobility in the Housing Choice Voucher Program: Review of Research Findings and Considerations for Policymakers.)

The combined Section 8 programs are the largest direct housing assistance program for low-income families. With a combined FY2013 budget of $27 billion, they reflect a major commitment of federal resources. That commitment has led to some successes: more than three million families are able to obtain safe and decent housing through the program, at a cost to the family that is considered affordable. However, these successes come at a high cost to the federal government. Given current budget deficit levels, Congress has begun to reevaluate whether the costs of the Section 8 programs, particularly the voucher program, are worth their benefits. Proposals to reform the program abound, and whether the current Section 8 programs are maintained largely in their current form, changed substantially, or eliminated altogether are questions currently facing Congress. | The Section 8 low-income housing program is really two programs authorized under Section 8 of the U.S. Housing Act of 1937, as amended: the Housing Choice Voucher program and the project-based rental assistance program. Vouchers are portable subsidies that low-income families can use to lower their rents in the private market. Vouchers are administered at the local level by quasi-governmental public housing authorities (PHAs). Project-based rental assistance is a form of rental subsidy that is attached to a unit of privately owned housing. Low-income families who move into the housing pay a reduced rent, on the basis of their incomes. The Section 8 program began in 1974, primarily as a project-based rental assistance program. However, by the mid-1980s, project-based assistance came under criticism for seeming too costly and concentrating poor families in high-poverty areas. Congress stopped funding new project-based Section 8 rental assistance contracts in 1983. In their place, Congress created vouchers as a new form of assistance. Today, vouchers—numbering more than 2 million—are the primary form of assistance provided under Section 8, although over 1 million units still receive project-based assistance under their original contracts or renewals of those contracts. Congressional interest in the Section 8 programs—both the voucher program and the project-based rental assistance program—has increased in recent years, particularly as the program costs have rapidly grown, led by cost increases in the voucher program. In order to understand why costs are rising so quickly, it is important to first understand how the program works and its history. This report presents a brief overview of that history and introduces the reader to the program.
For more information, see CRS Report RL34002, Section 8 Housing Choice Voucher Program: Issues and Reform Proposals; and CRS Report R41182, Preservation of HUD-Assisted Housing, by [author name scrubbed] and [author name scrubbed]. |
China's once-a-decade political transition coming this fall seems devoid of drama on the surface: It's clear who will take over, and the fight for other top spots is invisible to the public. But beneath the veneer of calm, the Communist Party is struggling to contain troubling events and mask divisions.
The world's second-largest economy is experiencing an unexpectedly sharp slowdown. Violent demonstrations percolate as people tire of corruption, land grabs and policies seen as unfair. Tensions simmer with neighboring countries and the U.S. over territorial disputes in the South China Sea. Then there's the unresolved scandal involving Bo Xilai, who was a well-connected contender for high office before he was ousted for still unexplained transgressions.
The Communist Party's grip on power isn't threatened, and the lack of open elections means its leaders require no voter approval. But the party risks eroding its legitimacy and diminishing its ability to impose its will, further alienating younger Chinese and encouraging opposition voices that argue for a democratic alternative.
As the party's unstated contract with the people (acceptance of one-party rule in exchange for economic growth) frays, pressure for reform is likely to intensify.
"The economic downturn, human rights demands and political reform are major issues for the party," said Wu Si, editor-in-chief of the Beijing-based pro-reform journal Yanhuang Chunqiu. "None of the leaders know what to do about it."
Five years after he was picked as successor, Vice President Xi Jinping remains on track to take over from President Hu Jintao in the party's fall congress, where its leading members will install a new generation of leaders.
China is run by a collective leadership, and many of the other seats in the Politburo Standing Committee, decision-making's inner sanctum, are undecided, analysts and party insiders say.
Final decisions on the leadership lineup and key issues to be addressed at the congress should be hammered out in various sessions this summer, including informal meetings east of Beijing at the seaside Beidaihe resort.
None of the leading contenders, mostly proteges of Hu or other rival party elders, is trying to grab headlines.
Open politicking is strongly frowned upon, and while competition for posts and influence is intense, it takes place far from the public as the party seeks to display a united front, said Jiannan Zhu, a political scientist at the University of Nevada, Reno.
"The priority for the Chinese government ahead of the party congress is to make sure no major instability occurs," Zhu said.
The flamboyant Bo is one of the few Chinese politicians who did overtly campaign for a higher-level job, and that is believed to be one of the sins that led to his downfall.
His removal as party boss of the mega-city of Chongqing and suspension from the Politburo exposed divisions within the leadership. It also further convinced an already skeptical public about the greed and bare-knuckled machinations of their secretive leaders.
Bo's ouster this spring was accompanied by the announcement that his wife and a family aide were under investigation for the murder of a British businessman. Reports have said that Bo tried to quash the probe.
The trial of Bo's wife, Gu Kailai, and of his ex-police chief, Wang Lijun, who exposed the murder when he fled briefly to a U.S. consulate, could begin this month, according to diplomats in Beijing who spoke on condition of anonymity.
As for Bo himself, politically connected Chinese have previously said that party leaders would have to address his fate before the congress to heal divisions, but the government and its media have said nothing about the case in recent weeks.
Dealing with Bo is believed to have distracted the leadership, delaying its response to the deepening slowdown that has dragged growth to a three-year low of 7.6 percent in the three months ending in June. Interest rates have been cut twice in the past month and a half in a bid to kick-start growth, but Beijing is powerless to stem the malaise in Europe and the U.S. that has slashed demand for Chinese exports.
The downturn could worsen unrest. China already sees 180,000 strikes, protests and other mass demonstrations each year, according to government data compiled by Tsinghua University sociologist Sun Liping.
In recent protests in the southwestern city of Shifang, high school students joined ordinary Chinese in demonstrating against a copper smelter. Images of police beating protesters bloody and firing tear gas sparked nationwide outrage after they circulated online.
Trying to put a lid on disturbances ahead of the congress, authorities are tightening already stringent controls on political critics and activists of all stripes. Wu Lihong, who once received a government award for his environmental campaigning, said he has been told not to travel or accept speaking engagements in coming months.
"They're using the congress as a pretext for every kind of restriction they can think of," Wu said by phone from his home near heavily polluted Lake Tai.
The need to quash unrest was underscored in a speech this week by the party's law and order chief, Zhou Yongkang, in which he ordered cadres to nip problems in the bud, whatever the cost.
"All levels of the party and government must make maintaining stability their first responsibility," Zhou told security officials at a national teleconference Tuesday.
Authorities also are trying to keep Chinese media in line. They dismissed two editors at Shanghai's Oriental Morning Post after publication of a story calling for private industry to enjoy the same rights as state companies.
Southern Weekly, one of the country's most widely respected newspaper groups, has seen editors replaced by propaganda officials. Editors and reporters at the paper said the new leaders spiked an interview with Shifang's party boss, who was replaced soon after the riots. The party leader defended his actions and complained about the lack of outside support, the editors said.
"Everybody feels like things are getting more oppressive. It's been happening for a long time, but the party congress is making it worse," said one former Southern Weekly editor who now works for online media. He asked not to be named to avoid repercussions against his new employer.
To some Chinese, government repression is raising doubts about whether a new slate of leaders will overcome entrenched interests.
Zhang Jian of Peking University's School of Government said that even if Xi were inclined to make bold social or economic changes, he would need support from other leaders, and the government's recent heavy-handed tactics make clear there is no consensus that reforms are needed.
"I see no hope at all that Xi will be able to put in place any serious, meaningful political reform," Zhang said.
Meanwhile, economic pressure is likely to grow. China's rapidly aging society will severely restrict the labor pool, undermining China's low-cost advantage and increasing the social security burden for workers and the state.
"The world is changing rapidly around the party. Muddling through will not be feasible during Xi's rule," said Harvard China expert Tony Saich. A major crisis in coming years will either demand major reforms or severely challenge the communists' grip on power, Saich said. ||||| Beidaihe is a Chinese combination of the Jersey Shore and Martha’s Vineyard, with a pinch of red fervor: the hilly streets and public beaches are packed with shirtless Russians and Chinese families, while the party elites remain hidden in their villas and on their private patches of sand. A clock tower near Kiessling chimes “The East is Red,” a classic Mao anthem.
The security presence has surged in recent weeks. Police officers in light blue uniforms patrol on Suzuki motorcycles and stand on street corners watching for jaywalkers. They have set up a checkpoint on the main road leading into town.
The informal talks are expected to start late this month and run into August, continuing a tradition that went into partial eclipse after China's top leader, President Hu Jintao, took over from Jiang Zemin in 2002 and ordered party and government offices to stop conducting formal business from the seaside during the summer gatherings. But Mr. Jiang reportedly chafed at the change and continued hobnobbing here with his allies. There was a notable conclave here in 2007, which Mr. Hu attended, to pave the way for the 17th Party Congress, according to scholars and a State Department cable disclosed by WikiLeaks.
In any case, politicking is inevitable when party elders show up to escape the stifling heat and pollution of Beijing.
Westerners began building up Beidaihe as a summer retreat in the late 19th century, as the Qing dynasty waned. When the People’s Liberation Army entered in 1948, the resort had 719 villas, according to China Daily, a state-run English-language newspaper.
Communist leaders began vacationing here. Mao was an avid swimmer and dove eagerly into the waters of the Bohai Sea. He convened formal conclaves here. His successor, Deng Xiaoping, made the meetings into annual events (he also took swims, supposedly to counter rumors of his ailing health).
The most infamous event at Beidaihe involved Lin Biao, a Communist marshal whom Mao accused of plotting a coup. On Sept. 13, 1971, after the coup attempt was supposedly discovered, Mr. Lin fled his villa here with his wife and a son and boarded a plane at the local airport. Their destination was the Soviet Union, but the plane crashed in Mongolia, killing everyone on board.
There are plots and counterplots this year, too. Negotiations here will be complicated by the continuing scandal over Bo Xilai, the deposed Politburo member who was most recently party chief of Chongqing. Some political observers had expected that by now the party would have concluded the investigation into Mr. Bo and his wife, who is suspected of killing a British businessman. Several people with high-level party ties say that Mr. Bo, who is being held in secret and without charges, is fighting back against interrogators, and that party leaders are having a difficult time deciding how to resolve his case.
During the negotiations, each current Standing Committee member should, at least in theory, have considerable say in determining the successor to his particular post. But party elders behind the scenes sometimes wield more authority. Mr. Jiang, though retired and ailing last year, may carry the greatest weight next to that of Mr. Hu. The heir apparent, Vice President Xi Jinping, also plays a role.
“Consensus among these three — the former, current and incoming leaders — is extremely important,” said Zhang Xiaojin, a political scientist at Tsinghua University in Beijing.
A flurry of activity in recent months has laid the groundwork. In May, more than 300 senior cadres were asked at a meeting to list the officials they thought should make the Politburo Standing Committee, where all the seats are in play except for the top two. Those are expected to go to Mr. Xi and Li Keqiang, who is slated to take over as prime minister.
Polling of senior party members was also done before the 2007 congress. Such surveys are intended as reference points only, though they have become increasingly important. Talk is swirling in Beijing over the results of the May polling. One member of the party elite said several people associated with Mr. Hu’s political base did not do well. Two insiders said one person who ranked high was Wang Qishan, a vice prime minister who oversees the financial sector.
Party leaders are considering reducing the number of Standing Committee seats to seven from nine, as was the case as recently as 2002, many insiders say. Mr. Hu is believed to support the change, which is in part aimed at curbing the entrenchment of interest groups at the top. That could mean taking two portfolios — probably propaganda and one dubbed “politics and law” that encompasses domestic security — and either adding them to the duties of other leaders or downgrading them to the Politburo level.
“With fewer people, they can concentrate power and increase their efficiency,” said one official at a state news media organization.
But there are other possible motives. The rapid expansion of security powers under Zhou Yongkang, the current Standing Committee member who heads the politics and law committee and supported Mr. Bo, has alarmed some party leaders, political analysts say. Since assuming the post in 2007, Mr. Zhou has capitalized on Mr. Hu’s focus on stability to build up the security apparatus, whose budget this year is officially $111 billion, $5 billion more than the military budget.
“The politics and law apparatus has grown too powerful,” an intelligence official said. “A lot of us feel this way.”
A contraction of the Standing Committee could also hurt those vying for seats who are not among the very top candidates, most notably Wang Yang, the party chief of Guangdong Province, who cultivates a progressive image.
The size and structure of the leadership have been a matter of continuing discussion. One analyst with ties to officials involved in party planning said that at the May meeting, cadres were also asked to submit their views on changing the composition of the party’s upper echelons, in a glimpse of what may be called intraparty democracy. Though few changes were expected anytime soon, “a lot of people had very different ideas,” he said.
Those debates are remote from the lives of most people in Beidaihe. Yet talk of politics flows loosely here. At a beach reserved for local officials, next to an almost-deserted patch of sand blocked off for party leaders, a retired official in swim trunks pointed to the villas across the road. He said the children of party leaders had made off with too much money through corrupt practices in state industries.
Emblematic of the distance between officials and those they rule, he said, is the fact that the party leaders vacationing here nowadays refuse to go into the sea, which is brown from runoff. Ordinary people swim in those waters, but the leaders take dips in swimming pools, including one built recently that is filled with filtered seawater.
“What are they good for?” the retired official asked. “What did they inherit from their fathers? They should have inherited the solidarity of the revolution.” | With China's once-a-decade leadership transition coming this fall, the country's powerbrokers are now in the thick of furious and extremely hush-hush negotiations over who will guide the world's most populous country for the next decade. And in the brutal heat and pollution of the Beijing summer, China's most important politicians head to the beach—specifically, the resort town of Beidaihe, "a Chinese combination of the Jersey Shore and Martha’s Vineyard" that lies 180 miles east of the capital, reports the New York Times. President Hu Jintao tried cracking down on the Beidaihe gatherings when he took power in 2002, but many in the party bucked hard and today the resort town is as important as ever. And in the face of a worsening economic slowdown and the fallout of the Bo Xilai scandal, the struggle for power is growing fiercer, notes the AP. But with expensive private villas and swimming spots for rich party leaders dominating this beach town, many party elders are unhappy with the rising generation of leaders. "What are they good for?" asked one retired official. "What did they inherit from their fathers? They should have inherited the solidarity of the revolution." |
Antivirus pioneer John McAfee is on the run from murder charges, Belize police say. According to Marco Vidal, head of the national police force's Gang Suppression Unit, McAfee is a prime suspect in the murder of American expatriate Gregory Faull, who was gunned down Saturday night at his home in San Pedro Town on the island of Ambergris Caye.
Details remain sketchy so far, but residents say that Faull was a well-liked builder who hailed originally from Florida. The two men had been at odds for some time. Last Wednesday, Faull filed a formal complaint against McAfee with the mayor's office, asserting that McAfee had fired off guns and exhibited "roguish behavior." Their final disagreement apparently involved dogs.
UPDATE: Here is the official police statement:
MURDER
On Sunday the 11th November, 2012 at 8:00am acting upon information received, San Pedro Police visited 5 ¾ miles North of San Pedro Town where they saw 52 year old U.S National Mr. GREGORY VIANT FAULL, of the said address, lying face up in a pool of blood with an apparent gunshot wound on the upper rear part of his head apparently dead. Initial investigation revealed that on the said date at 7:20am LUARA TUN, 39years, Belizean Housekeeper of Boca Del Rio Area, San Pedro Town went to the house of Mr. Faull to do her daily chores when she saw him laying inside of the hall motionless, Faull was last seen alive around 10:00pm on 10.11.12 and he lived alone. No signs of forced entry was seen, A (1) laptop computer brand and serial number unknown and (1) I-Phone was discovered missing. The body was found in the hall of the upper flat of the house. A single luger brand 9 mm expended shells was found at the first stairs leading up to the upper flat of the building. The body of Faull was taken to KHMH Morgue where it awaits a Post Mortem Examination. Police have not established a motive so far but are following several leads.
As we reported last week, McAfee has become increasingly estranged from his fellow expatriates in recent years. His behavior has become increasingly erratic, and by his own admission he had begun associating with some of the most notorious gangsters in Belize.
Since our piece ran last week, several readers have come forward with additional information that sheds light on the change in McAfee's behavior. In July of 2010, shortly before Allison Adonizio pulled the plug on their quorum-sensing project and fled the country, McAfee began posting on Bluelight, a drug-focused, Russian-hosted message board, about his attempts to purify the psychoactive compounds colloquially known as "bath salts."
Writing under the name "stuffmonger," a handle he has used on other online message boards, McAfee posted more than 200 times over the next nine months about his ongoing quest to purify psychoactive drugs from compounds commercially available over the internet. "I'm a huge fan of MDPV," he wrote. "I think it's the finest drug ever conceived, not just for the indescribable hypersexuality, but also for the smooth euphoria and mild comedown."
Elsewhere, he described his pursuit of "super perv powder" and warned about the dangers of handling the freebase version of the drug: "I had visual and auditory hallucinations and the worst paranoia of my life." He recommended that the most effective way to take a dose is via rectal insertion, a procedure known as "plugging," writing: "Measure your dose, apply a small amount of saliva to just the tip of your middle finger, press it against the dose, insert. Doesn't really hurt as much as it sounds. We're in an arena (drugs/libido) that I navigate as well as anyone on the planet here. If you take my advice about this (may sound gross to some of you perhaps), you will be well rewarded."
Just before posting for the last time, on January 4, 2011, Stuffmonger identified himself as "John," described his work pursuing quorum-sensing compounds, and posted photos of his property in Orange Walk. In signing off, he explained that "the on-line world is more of a distraction than the self induced effects of the many experiments I've done using my own body over the past year or so, and I have work to do."
MDPV, which was recently banned in the US but remains legal in Belize, belongs to a class of drugs called cathinones, a natural source of which is the East African plant khat. Users report that it is a powerfully mind-altering substance. In the comments section to my last Gizmodo piece, reader fiveseven15 writes: "mdpv is serious shit. would explain his paranoia and erraticness. i've been thru that. i played with mdpv for about two weeks, then started seeing shadow people in the corner of my eye, and what amphetamine heads call 'tree-cops'... its essentially really, REALLY f-ed up meth."
On his website, addiction specialist Paul Earley warns about the dangers of MDPV: "Our experience clearly warns of the psychiatric and medical dangers of this drug. We have cared for multiple patients who have abused MDPV; they report intense and unpleasant visual hallucinations after a short binge. The drug feels non-toxic with its first use, but following a moderate binge users suffer mild to moderate paranoia… in about 10% of individuals who use higher doses, we have observed a sustained psychotic state with intense anxiety lasting 3 to 7 days."
McAfee's intensive use of psychosis-inducing hallucinogens would go a long way toward explaining his growing estrangement from his friends and from the community around him. If he was producing large quantities of these chemicals, as implied on Bluelight, that would also shed light on his decision to associate with some of Belize's most hardened drug-gang members.
McAfee's purported interest in extracting medicine from jungle plants provided him a wholesome justification for building a well-equipped chemistry lab in a remote corner of Belize. The specific properties of the drugs he was attempting to isolate also fit in well with what those closest to him have reported: that he is an enthusiastic amateur pharmacologist with a longstanding interest in drugs that induce sexual behavior in women. Indeed, former friends of McAfee have said he could be extremely persistent and devious in trying to coerce women who rebuff his advances to have sex with him.
One other aspect of Stuffmonger's postings gibes with McAfee's general MO: his compulsion for making outrageous or simply erroneous assertions, even about subjects on which he is being generally sincere. Along with photographs of his lab near Orange Walk, for instance, he posted a picture of a decrepit thatched-roof hut and described it as his original home in Belize. He seemed similarly to have embellished the descriptions of his feats of chemical prowess on the Bluelight discussion board, and this ultimately aroused the suspicions of his fellow posters. "Stuffmonger's claims were discredited," a senior moderator later wrote, "and he vanished."
Jeff Wise is a science journalist, writer of the "I'll Try Anything" column for Popular Mechanics, and the author of Extreme Fear: The Science of Your Mind in Danger. For more, visit JeffWise.net. ||||| As dawn broke over the interior of Belize on April 30, an elite team of 42 police and soldiers, including members of the country's SWAT team and Special Forces, converged on a compound on the banks of a jungle river. Within, all was quiet. The police called out through a bullhorn that they were there looking for illegal firearms and narcotics, then stormed in, breaking open doors with sledgehammers, handcuffing four security guards, and shooting a guard dog dead. The compound's owner, a 67-year-old white American man, emerged bleary-eyed from his bedroom with a 17-year-old Belizean girl. The police cuffed him and took him away, along with his guards.
Inside, the cops found $20,000 in cash, a lab stocked with chemistry equipment, and a small armory's worth of firearms: seven pump-action shotguns, one single-action shotgun, two 9-mm. pistols, 270 shotgun cartridges, 30 9-mm. pistol rounds, and twenty .38 rounds. Vexingly for the police, all of this was actually legal. The guns were licensed and the lab appeared not to be manufacturing drugs but an herbal antibacterial compound.
After fourteen hours, the police let the man and his employees go, but remained convinced they had missed something. Why else would a wealthy American playboy hole himself up out here, far from the tourist zone on the coast, by a navigable river that happened to connect, twenty miles downstream, with a remote corner of the Mexican border? Why else would he hire, as head of security, a rogue cop who'd once plotted to steal guns from the police and sell them to drug traffickers?
It's not too unusual for eccentric gringos to wind up in Central America and slowly turn stranger—"Rich white men who come to Belize and act strangely are kind of a type," one local journalist told me. But this one's story is more peculiar than most. John McAfee is a founding father of the anti-virus software industry, an inveterate self-promoter who built an improbable web security empire on the principles of trust and reliability, then poured his start-up fortune into a series of sprawling commune-like retreats, presenting himself in the public eye as a paragon of engaged, passionate living: "Success, for me," he has said, "is being able to wake up in the morning and feel like a 12 year old." But down in Belize, McAfee the enlightened Peter Pan seems to have refashioned himself into a kind of final-reel Scarface.
***
ONE DAY THIS past spring, shortly before the police raid, I paid a visit to McAfee. I'd known John personally for five years, having first met him when I traveled to his ranch in rural New Mexico as an adventure-sports reporter who found him to be a genuinely charismatic entrepreneur and thrill-seeker. By now, though, I'd become convinced he was a compulsive liar if not an outright psychopath, albeit one whose life as a thrill-seeking serial entrepreneur was as entertaining for me to follow as it was amusing for him to perform.
By the time I'd arrived in country, I'd heard that his circumstances had soured since we'd last been in touch—that his business relationships had fallen apart and he'd become estranged even from the other caution-to-the-wind expats in Belize. "He is one strange cookie," a British hostel owner told me.
At the time, he was in residence not at his compound in the interior, near the town of Orange Walk, but at his beachfront property on the tourist-friendly island of Ambergris Caye. I pulled up in a golf cart to the rear entrance to his home and found him sitting by a pool overlooking the ocean—trim, tanned, and relaxed in flip-flops, cargo shorts, and frosted hair. As usual, he wore a goatee and a sleeveless T-shirt that showed off the tattoos that ran up his arms and over his back, with sunglasses on Croakies around his neck. He invited me to sit with him in a screened-in porch. Two young Belizean women lounged in the adjacent living room.
It was a pretty palatial setup, but his only companions these days, he told me, were the locals who work for him. Out on the patio, a dark-skinned man appeared and began cleaning the pool. Another man wearing a crisp uniform positioned himself nearby. He carried a holstered pistol awkwardly in front of him. I intuited that the gun was being brandished for my benefit, and I told McAfee that it made me nervous.
"Well, he's a security guard!" McAfee hopped up and called to the women inside. "Hey girls, you've been by my house in Orange Walk, right? How many security guards do I have there?" Five, the girls said. "Did they carry guns?" Yes, the girls said. "Serious guns?" Yeah!
"When I was here before," I said, "no one was carrying a gun."
"Well, that was a long time ago."
"And do you think things have changed since then?"
"The economy is going south," he said. "As the economy goes south, petty theft begins. And then grand theft. And then muggings. And the next thing you know, you murder someone for twenty dollars."
He explained that the country's crime rate was a result of its terrible economic condition. "People in this country starve! And not just a few. Almost everybody has gone through periods of starvation. You won't find a single person who has not at one point lost their hair. This is a sign of advanced malnutrition." Belize is a relatively prosperous part of Central America, not some civil-war-wracked wasteland in the Horn of Africa, but I kept my peace.
He opened the current issue of the Belize Times, holding the paper down with one arm to keep it from blowing away, and showed me a photo of two men. "I am the only white man in Orange Walk, and I was stupid enough to build right next to the highway, where people could see that I have stuff," he said. "So there have been, in the last year alone, eleven attempts to kidnap or kill me."
Before I could ask how, McAfee had gone on to tell me a story about a Belizean gangster named Eddie "Mac-10" McKoy. According to McAfee, Mac-10 wanted to kill him. "I'm an older dude, and somewhat smarter than him," McAfee said. "I tracked him down and forced him to the bargaining table. And we had this big meeting here in San Pedro, and Eddie and I came to an agreement." I took this to mean that he was paying McKoy protection money, but while I was trying to sort it out one of the Belizean women interrupted us, appearing with a tall glass of orange liquid. He sniffed it suspiciously, like he had no idea what it might be. He offered it to me, then after I declined drank it himself.
Some time later, he continued, he learned of another plot on his life. (Why he thought everyone was so hell-bent on killing him, rather than just taking his money, was unclear.) A group of attackers, including two police officers, was planning to force his car off the road one night, he said, take him back to his compound with a gun to his head, and force the guards to open the gate. They would then kill McAfee and the guards and make off with the $100,000 cash he was rumored to keep at his property. Fortunately, McAfee said, McKoy intervened.
McAfee was proudest of the way he'd responded to his would-be killers: He hired them. "Everyone who has tried to rob me, kill me, works for me now," he declared. This was not just good hacker logic, he explained, but a kind of public service. "None of these people are responsible, because they can't work. At some point, you've got to stop living for yourself. We as Americans have ripped off the world. We get to throw food away. It's insane."
He jumped up and called to the women inside: "Have you ever thrown food away?" Getting no answer, he continued: "The idea is so alien you don't even comprehend it, right?"
He remained standing. We'd been talking for an hour, and I sensed the interview was over. I thanked McAfee for his hospitality, and asked if I could reciprocate by buying him dinner. He looked at me incredulously. "Haven't you been listening to me? I can't leave my home after dark."
***
IN THE LATE EIGHTIES, as computers were starting to become common in American homes, fears began to circulate of malicious rogue programs that could spread from machine to machine. Where many saw an emerging hazard, McAfee recognized opportunity. A software engineer working for Lockheed, he obtained a copy of an early virus, the so-called "Pakistani Brain," and hired coders to write a program that neutralized it. It was a prescient move, but what he did next was truly inspired: He let everyone download the McAfee security software for free. Soon he had millions of users and was charging corporate clients a licensing fee. By his third year, he was pulling in millions in profit.
The anti-virus program wasn't McAfee's first entrepreneurial venture. As a young man, he'd traveled through Mexico, sleeping in a van, buying stones and silver, and making jewelry to sell to tourists. Later, during the AIDS panic in San Francisco, he sold identity cards certifying bearers as HIV-free. His freewheeling approach carried over to his Silicon Valley operation. Employees practiced sword-fighting and conducted Wiccan rituals at lunchtime. One long-running office game awarded employees points for having sex in different spots around the office. McAfee himself was an alcoholic and heavy drug user. (After a 1993 heart attack, at the age of 47, he became an aggressive teetotaler.)
In early 1992, he went on national TV and declared that as many as five million computers could soon be hijacked by a particularly dangerous virus called Michelangelo. McAfee sales skyrocketed, but the date of the supposed onslaught came and went without incident. "It was the biggest nonevent since Geraldo broke into Al Capone's tomb," complained ZDNet. Forced from his management role, McAfee cashed out his stake in the company, earning $100 million.
Cast adrift, McAfee gave himself over to the life of a wealthy adventure seeker. He raced ATVs (crashing a dozen or so) and made open-ocean crossings by Jet Ski (often they sank en route). He poured millions into a 280-acre yoga retreat in the mountains above Woodland, Colorado, where every Sunday morning he would hold complimentary classes. "Everything was free," recalls a former student. "You would think that this guy was amazingly generous and kind, but he was getting something out of it. He was interested in being the center of attention. He was surrounded by people around him who didn't have any money and were depending on him, and he could control them." Among the entourage was a teenage employee named Jennifer Irwin, whom McAfee began dating.
Growing bored with ashram life, McAfee invented a new pastime called "aerotrekking," which involved flying tiny aircraft very low over remote stretches of desert. Experienced pilots called the practice inherently dangerous, but McAfee found it exhilarating. He brought a cadre of followers, including Irwin, with him down to Rodeo, New Mexico, where he bought a ranch with an airstrip and spent millions adding lavish amenities: a cinema, a general store, a fleet of vintage cars. He started calling his entourage the Sky Gypsies. McAfee took pains not to portray himself as their leader, but it was clear that he was the one who paid the bills and called the shots. When he talked, no one interrupted.
This is where I first met McAfee, as a reporter dispatched to write about his ambitions to turn aerotrekking into a new national pastime. He put me up in a bedroom in his ranch house, and we awoke before dawn to walk to an aircraft hangar filled with small planes. "People are afraid of their own lives," he said in a cough-syrup baritone. "Shouldn't your goal be to have a meaningful life? Unknown, mysterious, thrilling?"
Some of his efforts to support his new sport seemed less than kosher. To give aerotrekking an illusion of momentum, he set up a network of fake websites purportedly from aerotrekking clubs scattered around the country. And at the end of my visit, McAfee told me, proudly, of his scheme to distract nearby residents, who had become irritated by the aerotrekking and begun to organize against the company. One of the Sky Gypsies had snuck into the local post office after hours and posted a flyer announcing a national paintball convention coming to town. The flyer promised that hundreds of trigger-happy shooters in camouflage would soon descend en masse and storm through the wilderness. To bolster the hoax, McAfee had set up a fake website promoting the event. The homebrew psy-ops campaign went off without a hitch. By the next day, the town was a beehive of angry protesters, and the aerotrekking issue was forgotten.
In retrospect, it's startling that McAfee was still so committed to aerotrekking. The year before, his own nephew had been killed in a crash, along with the passenger that he had been carrying. The passenger's family hired a lawyer and filed a $5 million lawsuit. McAfee started telling reporters that the financial crisis had all but wiped him out, slashing his net worth to $4 million. (Both the New York Times and CNN reported the claim, which he later characterized to me as "not very accurate at all.") He unloaded all his real estate at fire-sale prices and moved to Belize, having been advised by his lawyers that "a judgment in the States is not valid" there. He obtained residency far more quickly than the one-year minimum waiting time mandated by law. "This is a Third World country," he told me later, "so I had to bribe a whole bunch of folks."
Accompanied by a gaggle of hangers-on (including Irwin, by then 28), McAfee settled into a beachside compound on Ambergris Caye. With characteristic gusto he launched a slew of enterprises, including a coffee shop and a high-speed ferry service. Then he met an attractive 31-year-old named Allison Adonizio, a vacationing Harvard biologist. She told him she was working in a new field of microbiology called "anti-quorum sensing"—instead of killing infectious bacteria, she said, certain chemicals can disrupt and neutralize them. She'd already identified one rain-forest plant that was rich in such compounds and believed there must be many more. They could solve the burgeoning global problem of antibiotic resistance, she said. McAfee offered to build her a lab in Belize where she could work with native plants. She flew home, quit her job, and moved down to the jungle.
McAfee's Next Big Thing was under way. He bought land along the New River, deep in the interior of the country, where he and Adonizio would grow the herbs. He also acquired another parcel a few miles downriver, near the town of Orange Walk, where he started building a processing facility. He announced that Adonizio had identified six promising new herbs and invited me down to take a look. This, he said, was the reason he'd come to Belize in the first place: to rid humanity of disease and at the same time to lift Belizeans up from poverty. "I'm 65 years old," he said. "It's time to think about what kind of legacy I'm going to leave behind."
In early 2010, I took a trip to Belize, and once again McAfee welcomed me warmly into his home and treated me as a friend. Strolling around the weed-choked parcel he was cultivating, though, I began to question his claims. The herb, he said, was too fragile to be planted the conventional way, and had to be allowed to grow naturally. But if the plant was too delicate for agriculture, how could he be so sure it would thrive in sufficient quantity to feed his production facility? When I pressed him about it, he suggested that the far-fetchedness of the plan was itself evidence of its legitimacy: "I must either be a fool," he said, "or I feel extremely secure that I will be shipping goods."
Midway through my visit the story grew odder still. Adonizio and McAfee told me that, for all the world-changing potential they saw in their anti-quorum sensing project, they'd decided to put it on hold. Instead, they were concentrating on developing and marketing another jungle-herb compound Adonizio had discovered, one that they said boosted female libido.
Back home, I wrote a story that questioned McAfee's good works, and raised doubts about his motives for being in Belize. After it was published online, McAfee launched a vigorous defense in the comments section, claiming that he'd never shelved the anti-quorum sensing project but had lied to me during my visit because he'd sensed that I'd intended to write a hostile article all along. "I am a practical joker, and I joke no differently with the press than I do with my next-door neighbor," he wrote. "I'm not saying it's a particularly adult way of behaving, or business like, or not offensive to some. But it's me."
At first Adonizio supported McAfee's claims in the comments section. "I felt a bit uncomfortable (at first) about playing our joke on Jeff," she wrote. "However, after reading the piece, I understand why John had wanted us to keep things under wraps. Jeff was there on day one with the intent to write something sensational. John kept saying: ‘an aggressor with no humor deserves no leniency.'"
Then, four months later, she contacted me by email. "Remember me?" she wrote. "I'll just be blunt. I was naive about who and what Mr. McAfee really is."
She explained that before my arrival she had not, as they'd previously claimed, found any new antibiotic compounds. She had only the one that she'd been working on at Harvard, and it was already under patent, and so could not be developed for sale. "We really didn't have anything when you came down," Adonizio said. McAfee decided the libido drug, which originally had been mooted as a joke, could serve as a plausible alternative in the meantime. She played along with his hoax, she said, only at McAfee's insistence.
Amid the article's fallout, their relationship had become tense. He showed her websites devoted to various kinds of outré kink, and became increasingly open, when his girlfriend Irwin was out of town, about bringing prostitutes off the street and into his bedroom. (One day Adonizio came upon "literally a garbage bag full of Viagra.") After she'd broken up with a boyfriend on the mainland, "he kept trying to set me up with these weird friends that were into polyamory and crazy kinky stuff," she said. "He tried to convince me that love doesn't exist, so I might as well just give in and sleep with all these crazy circus folk." He liked to hint that he had connections to dangerous criminals, implying that he could have her ex-boyfriend killed: "I have someone who can take care of that," he told her.
When at last she decided she'd had enough and asked McAfee to buy out her share of the company, he exploded, she says, screaming and lunging at her. She fled and locked herself in the lab. McAfee pounded on the door and shouted obscenities. Afraid for her safety, Adonizio called a friend to escort her off the property. The next day, she boarded a flight back home to Pennsylvania.
Even at thousands of miles away, she said, she felt frightened that he might do her harm. "As soon as I started questioning his motives, he turned on me and became a horrible, horrible person, controlling, manipulative and dangerous," she told me. "I'm thankful that I got out with my life."
In the wake of Adonizio's departure, McAfee grew more isolated. An investor who'd wanted to back the anti-quorum-sensing venture backed away. A joint-venture agreement with Dr. Louis Zabaneh, one of the country's most powerful men, fell apart. The hangers-on drifted away. After 14 years, Irwin left him. McAfee spent most of his time in Orange Walk, where he'd expanded the rickety herb-processing facility into a small walled fortress. "what i experienced out @ his property made me wanna get the fuck outta dodge," an associate e-mailed Adonizio, "creepy, and a bit scary. and i don't scare easily … i have a feeling he's in some deep shit down there."
***
I DIDN'T MAKE IT BACK to Orange Walk during my visit in April, but I was tense throughout our meeting in Ambergris Caye, even though McAfee insisted he bore me no hard feelings and had in fact liked the last article: "I thought it was well written," he said.
When I asked him why Adonizio was unhappy about her time with him in Belize, he seemed exasperated. "Allison is an unhappy person who is unhappy to the core," he told me. "Whatever's on the table, she will turn it this way, that way, and make something out of it, to be the cause of her unhappiness."
And what about his lack of friends in the expat community? "I don't need friends," he said. "What does friendship actually mean? It's a commitment to an idea that just doesn't interest me."
A moment later he paused and said, "I'm going to tell you the truth, for once." Then he seemed to get distracted, and made a phone call. The next day, he sent an e-mail inviting me to come back for another visit: He'd forgotten, he wrote, that he'd wanted to tell me something very important, which he was only willing to impart in person. I had an eerie, inexplicable feeling that the thing he wanted to tell me was that he'd ordered my murder. I waited to call him until I was back in the States, and when he heard that I was already home, his tone was brusque: "I'm really not interested in chatting over the phone about things that are dear to my heart," he said.
Two weeks later, the police raided his compound. In the process they validated what I had taken to be some of McAfee's most far-fetched assertions. Superintendent Marco Vidal confirmed to me that, indeed, several members of his security force were known criminals, and that McKoy was a gang leader of some note. "McKoy is a member of one of the factions of the Bloods Gang," Vidal wrote me in an e-mail. "We know of a meeting between McKoy and McAfee at his café in San Pedro Town, Ambergris Caye in which McAfee was flanked by the two leaders of the most notorious and violent gang operating in Belize City. At that meeting McAfee also took along a Police Officer. We believe that his intention was to make it categorically clear to McKoy that he controlled both the legitimate and the illegitimate armed forces."
In the wake of his arrest, McAfee was nervous enough about the police investigation that he sent two employees to solicit an officer for inside information. Both were arrested for attempted bribery. McAfee then sent another Belizean on the same mission. He, too, was arrested.
McAfee's world seemed to be imploding. In late May, Gizmodo posted the text of a message that McAfee had put up on a private discussion board. In it, he described being on the lam from the police. "I am in a one room house in an uninteresting location," he wrote. "I have not been outdoors for 5 days." He added that he was posting from an iPad but didn't have a charger, and the battery only had a 21 percent charge remaining. He described his run-in with the police, then signed off with this: "I'm down to 17% charge. I will leave you."
But just a few days later, residents spotted McAfee driving a golf cart around Ambergris Caye with a new 17-year-old girlfriend, apparently in good cheer. I dropped him a line, and his reply was upbeat. "Things are getting back to normal," he wrote. "I'm just waiting for a few properties to sell then I'm off to the South Pacific. No doubt to new adventures…"
In the weeks that followed, he didn't decamp for the South Seas. Instead, he took to walking around San Pedro wearing a pistol in a holster, in violation of Belizean gun laws. Then, in late July, McAfee appeared in an article in Westword, an alternative weekly based in Denver, describing his latest business venture. According to McAfee, it is called "observational yoga," and involves sitting in comfortable chairs and watching other people perform asanas. Thanks to its numerous health benefits, McAfee said, "it's very popular" in Belize, and he planned to franchise the concept around the country.
"It would be very difficult to sell this concept in America," he admitted. "But here I can make any kind of outrageous claim that I choose."
UPDATE: John McAfee is now the primary suspect in a murder investigation involving his neighbor in Belize.
Jeff Wise is a science journalist, writer of the "I'll Try Anything" column for Popular Mechanics, and the author of Extreme Fear: The Science of Your Mind in Danger. For more, visit JeffWise.net. | Police in Belize are on the hunt for John McAfee—the man who lent his name to the famous antivirus company—because they suspect him of murder. According to Gizmodo, which just last week ran a stunning piece about McAfee's weird transformation into a jungle gangster, McAfee is suspected of killing American expatriate Gregory Faull, a longtime rival who was found dead yesterday, apparently of a gunshot wound. Faull had recently complained to the mayor about McAfee's "roguish behavior," including firing off guns around him. McAfee has become estranged from the tech world. He told Gizmodo that he'd gotten mixed up with Belizean gangsters and that there had been "in the last year alone, eleven attempts to kidnap or kill me." One possible explanation for this slide: It appears that since 2010 he has been posting online about his attempts to purify the drug called "bath salts," which he describes as, "the finest drug ever conceived, not just for its indescribable hypersexuality, but also for the smooth euphoria and mild comedown." |
Russia said Saturday it supports a transparent international investigation of the downing of a Malaysian airliner, but U.S. and other Western officials said they saw no evidence Moscow was seeking to impose that message on its eastern Ukrainian allies who still control the site of the crash.
“It’s another case of the Russians saying one thing and doing another,” a senior Obama administration official said. “They say they want to abide by an international investigation, but there’s more that they can do in terms of calling on the separatists to give unfettered access” to investigators still barred from the debris field.
In a telephone conversation with Russian Foreign Minister Sergei Lavrov, Secretary of State John F. Kerry “underscored that the United States remains deeply concerned” that international investigators were denied access, and that victims and debris were reportedly being “tampered with or inappropriately removed from the site,” a State Department statement said.
A Russian statement said Lavrov and Kerry “agreed that all physical evidence, including the black boxes, must be made available for such an international investigation and that, on the ground, all necessary arrangements must be made for access by an international expert team.”
Although the Russians said that the United Nations’ International Civil Aviation Organization should lead the investigation, Lavrov also “stressed the importance” of including the Interstate Aviation Committee, the Moscow-based civil aviation authority established in 1991 with 11 states of the former Soviet Union, including Ukraine.
The committee’s participation would give Russia access to the investigation, although no Russian citizens were among the 298 aboard the flight. A second Obama administration official said that there was no reason to exclude Russia from what is intended to be a completely open inquiry. Officials spoke on the condition of anonymity to expand on publicly released statements.
Dutch Prime Minister Mark Rutte said that he had called on Russian President Vladimir Putin to “take responsibility” for the situation when the two spoke by telephone Saturday. Putin “has to show that he will do what is expected of him and will exert his influence,” Rutte told a news conference.
Putin also spoke Saturday with German Chancellor Angela Merkel, who “once again” asked him “to exercise his influence over the separatists” to reach a cease-fire and begin political negotiations over the wider Ukrainian conflict, a German government statement said.
After calls with Rutte and Australian Prime Minister Tony Abbott, the office of British Prime Minister David Cameron said that “all three leaders are clear that President Putin needs to actively engage with the international community and use his influence on the separatists to ensure they allow access to the crash site.” There were 27 Australian citizens aboard the flight.
There is no precedent for the scale and circumstances of the proposed international investigation, and it remains unclear how the probe will be organized once access is obtained. The Netherlands, Britain and others have already sent large teams to Kiev, where representatives of the U.S. National Transportation Safety Board and the FBI have also arrived. Malaysia, which lost 44 citizens, is also expected to send a team.
While most attention Saturday focused on growing outrage over the delay in the investigation, Cameron’s statement also said he and Rutte agreed that the European Union “will need to reconsider its approach to Russia in light of evidence that pro-Russian separatists brought down the plane.”
Europe, which has far more extensive economic ties with Russia, has been reluctant to go as far as the United States in imposing economic sanctions against the Russians for aiding the separatists. The administration has indicated it expects the Europeans will be more willing to punish Russia if it is proved to be even indirectly responsible for the shooting down of the plane.
Aviation history is littered with civilian planes that were shot from the sky, intentionally or not, by military weapons. Malaysia Flight 17 was cruising at 33,000 feet, more than half a mile higher than Mt. Everest, when a missile hit it July 17. And the missile's range is believed to be more than twice that high.
In public statements by President Obama and other senior officials Friday, the administration did not directly accuse the separatists. Instead, it said that the plane was downed by a surface-to-air missile fired from separatist-held territory and noted that Russia has supplied the separatists with heavy weaponry, including surface-to-air missiles.
Much of the Kerry-Lavrov conversation centered on repeatedly failed efforts to impose a cease-fire in eastern Ukraine. Kerry, the statement said, "urged Russia to take immediate and clear actions to reduce tensions in Ukraine; to call on pro-Russian separatists . . . to lay down arms, release all hostages and engage in a political dialogue . . . to halt the flow of weapons and fighters into eastern Ukraine" and to allow international monitors to help secure the border. |||||
Story highlights:
Rebels have downed Ukrainian planes flying at high altitudes
A pro-Russian rebel commander shows off anti-aircraft missiles
The border region with Russia is very porous, unguarded in many places
Some contend that larger weapons have come into Ukraine from Russia
Under a blazing sun in early June, a group of pro-Russian rebels in eastern Ukraine were digging amid pine woods near the town of Krasny Liman.
Their grizzled commander was a bearded man in his 50s who would not tell us where he was from, but acknowledged that he wasn't local. He was proud to show off his unit's most prized possession -- a truck-mounted anti-aircraft unit that was Russian-made.
He told us the weapon had been seized from a Ukrainian base.
A few miles away, in the town of Kramatorsk, rebel fighters displayed two combat engineering tanks they said they had seized from a local factory. Eastern Ukraine has long been a center of weapons production. They had parked one of the tanks next to the town square.
These were just two instances of how the rebels in eastern Ukraine were steadily adding more sophisticated weapons to their armory, including tanks, multiple rocket launchers -- and anti-aircraft systems.
[Photo gallery: "Malaysia Airlines Flight 17 crashes in Ukraine" - 58 images of the crash site near Hrabove, where the Boeing 777 was shot down July 17, 2014, over territory controlled by pro-Russian separatists, killing all 298 people on board; an October 2015 Dutch Safety Board report found the flight was downed by a warhead that fit a Buk rocket.]
In early June, they began to target Ukrainian planes and helicopters, with some success.
The day after we met the commander in the pine woods, an Antonov AN-26 transport plane was brought down over nearby Slovyansk.
Several Mi-8 and Mi-24 helicopters were also hit in this period, as was an Ilyushin IL-76 cargo plane near Luhansk -- it is about the size of a passenger jet.
Forty-nine military personnel were killed when the IL-76 crashed short of the airport.
For the most part, these aircraft were flying at relatively low altitudes, and were targeted by shoulder-launched SA-7 missiles and anti-aircraft guns. The pro-Russian rebels had taken control of several Ukrainian military depots and bases and stripped them of their weapons.
The SA-7 was standard Soviet issue. Relatively easy to operate, it is effective to altitudes of some 2,500 meters (8,000 feet).
But it and ZU-23-2 anti-aircraft batteries, which rebel units also obtained, are a world away from the SA-11 or "Buk" system that seems increasingly likely to have been used to shoot down Flight MH17 on Thursday.
Stealing a Buk
Could the pro-Russian rebels have acquired a serviceable Buk from a Ukrainian base and operated it? The evidence is circumstantial; a great deal of Ukrainian military hardware is in poor condition or redundant.
But on June 29, rebels raided the Ukrainian army's A-1402 missile facility near Donetsk. Photographs show them examining what they found.
The Russian website Vesti ran an article the same day titled "Skies of Donetsk will be defended by surface-to-air missile system Buk."
The article claimed: "The anti-air defense point is one of the divisions of the missile corps and is equipped with motorized "Buk" anti-aircraft missile systems."
Peter Felstead, an expert on former Soviet military hardware at IHS Jane's, says that "the Buk is in both the Russian and Ukrainian inventories, but it's unclear whether the one suspected in the shoot-down was taken by rebels when they overran a Ukrainian base, or was supplied by Russia."
He told CNN that the Buk "would normally operate with a separate radar that picks up the overall air picture. This was almost certainly not the case with MH17," making it more difficult to identify the target and track its course.
Among the pro-Russian rebels are fighters who served in the Russian army. It is possible that some were familiar with the Buk, but Felstead agrees with the U.S. and Ukrainian assessment that Russian expertise would have been needed to operate it.
[Photo gallery: "MH17: What they left behind" - 16 images of passengers' belongings (passports, toys, books, playing cards, luggage) scattered across the debris field near the crash site in eastern Ukraine, July 2014.]
"The system needs a crew of about four who know what they're doing. To operate the Buk correctly, Russian assistance would have been required unless the rebel operators were defected air defense operators - which is unlikely."
It is now the "working theory" in the U.S. intelligence community that the Russian military supplied a Buk surface-to-air missile system to the rebels, a senior US defense official told CNN Friday.
Russia has denied that any equipment in service with the Russian armed forces has crossed the border into Ukraine. And Aleksander Borodai, the self-described prime minister of the Donetsk People's Republic, said Saturday his forces did not have weapons capable of striking an aircraft at such a high altitude.
But someone in the border region where eastern Ukraine meets Russia has been using an advanced anti-air missile system.
Late Wednesday, the day before MH17 was brought down, a Ukrainian air force Sukhoi Su-25 combat jet was shot down close to the border with Russia.
The Ukrainian Defense Ministry told CNN that the plane was flying at 6,200-6,500 meters (about 21,000 feet) and was hit near a town called Amvrosiivka, which is only some 30 kilometers (20 miles) from where MH17 was hit and 15 kilometers (10 miles) from the border with Russia.
The Ukrainian military alleged the missile had been fired from Russian territory. It was the first time that a combat jet flying at high speed had been hit and came two days after an AN-26 -- flying at a similar altitude in the same area -- was shot down further north, in the Luhansk area.
Smuggling on the black roads
The Russian Defense Ministry said Friday that weapons could not be smuggled across the border "secretly." But they can.
By early June, rebels controlled several crossings along a stretch of border more than 200 kilometers (125 miles) long. The border area is open farmland that was neither patrolled regularly nor even marked in many places.
Dozens of unmonitored tracks known as black roads -- because they have been used for smuggling -- cross the border. Additionally, the Ukrainian border guard service was in disarray after an attack on its command center in Luhansk early in June.
On the road east toward the border through the town of Antratsyt there was no sign of a Ukrainian military or police presence. The pro-Russian rebels had already begun to bring across heavy weapons at that point.
A CNN team visited the border post at Marynivka in June, soon after a five-hour firefight involving border guards and members of the self-declared Vostok battalion of rebels who had been trying to bring over two Russian armored personnel carriers.
They had been abandoned during the battle.
The unknowns are these: Just how much weaponry has been brought in from Russia, how was it obtained, and did it include the SA-11 Buk?
In June, the U.S. State Department claimed that three T-64 tanks, several rocket launchers and other military vehicles had crossed the Russian border. Ukraine made similar accusations, saying the weapons had gone to Snezhnoe, a rebel stronghold close to where MH17 came down.
The State Department said the tanks had been in storage in south-west Russia, suggesting collusion between the Russian authorities -- at some level -- and the rebels. It said at the time that the equipment held at the storage site also included "multiple rocket launchers, artillery, and air defense systems."
It added, notably, that "more advanced air defense systems have also arrived at this site."
Moscow rejected the claims as fake.
NATO has also released satellite images which, it said, showed tanks in the Rostov-on-Don region in Russia early in June, before they were taken to eastern Ukraine. The tanks had no markings.
Even so, some experts, such as Mark Galeotti at New York University's Center for Global Affairs, say the evidence is largely circumstantial. NATO's images did not show the tanks actually crossing into Ukraine.
Wherever they came from, Russian-language websites soon featured calls for people with military skills to call a number associated with the separatist Donetsk People's Republic if they could help operate or maintain the tanks.
One answered, "I served in the military engineering academy...and am a former commander in the intelligence."
But the separatists' greatest vulnerability was always from the air.
The Ukrainians had already shown, in driving them away from the Donetsk airport at the end of May, that they could use airpower to devastating effect. And they had begun to fly at higher altitudes to avoid shoulder-launched missiles.
To hold what remained of their territory, the pro-Russian rebels needed to be able to challenge Ukrainian dominance of the skies.
Whether they received help from across the border to do so, and in what way, is the question that governments around the world want answered. | Leaders around the world are exerting pressure on Vladimir Putin—especially Dutch PM Mark Rutte, who says he gave Russia's leader a mouthful over the handling of remains from Malaysia Airlines MH17, the Guardian reports. In a "very intense" conversation, Rutte said he gave Putin "one last chance to show he means to help" rescuers recover crash victims, including 193 Dutch nationals. "I was shocked at the pictures of utterly disrespectful behavior at this tragic spot. It's revolting." David Cameron urged the EU to consider new relations with Russia unless Putin takes action, and John Kerry demanded that investigators be allowed on the crash site. Russia agreed yesterday to back an open investigation led by the UN (and a Moscow-based aviation group), but Washington says that's all talk so far, the Washington Post reports. "It’s another case of the Russians saying one thing and doing another,” said a top Obama official. Meanwhile, international investigators say unknown "experts" are bagging bodies at the site, and Ukraine is accusing pro-Russia fighters of removing 38 victims and destroying plane evidence; other reports have them looting bodies of valuables. Adding to circumstantial evidence that rebels shot down the plane, CNN quotes a Russian-website report that they raided a missile facility last month and acquired a Buk surface-to-air missile system. |
A wearable camera designed to take a picture every 30 seconds, to allow owners to record their daily lives, has become the latest technological hit on Kickstarter, the "crowd funding" website.
So far, Memoto, billed as "the world's smallest wearable camera", has attracted more than $44,000 of its $50,000 funding target from more than 250 gadget fans keen to capture a digital record of their entire lives.
The tiny device is designed to be clipped to clothes or worn on a necklace. As well as a five megapixel digital camera, it will feature a GPS chip to keep track of owners' locations and automatically log and organise pictures via specially created iPhone and Android apps. Memoto claims the battery will last two days.
"Many fantastic and special moments become blurred together after a while and it feels like life just rushes by, too fast for us to grasp," said the Swedish start-up behind the project.
"We at Memoto wanted to find a way to relive more of our lives in the future - and enjoy the present as it happens."
Memoto describes the project as "lifelogging" technology and plans to ship its first finished cameras in February next year.
It is part of a trend dubbed the "Quantified Self Movement", proponents of which aim to record as much data as possible about their lives. They have adopted other products including the Nike+ FuelBand and Fitbit tracker, both of which keep tabs on wearers' exercise patterns, as well as smartphone apps to track heart rate other health data.
"The camera and the app work together to give you pictures of every single moment of your life, complete with information on when you took it and where you were," said Memoto. "This means that you can revisit any moment of your past."
The apps the firm is developing are designed to help deal with the glut of images the camera will take by helping organise them and pick out interesting moments.
"The way this works is that the photos are organized into groups of "moments" on a timeline," said Memote.
"On the timeline, you're presented with keyframes (about 30 per day) each representing one moment. You can tap a moment to relive it in a stop-motion like video of all the pictures in that moment."
"This enables you to not only browse your life the way you remember it, but to search for specific events of your life: who was it that you met at that party or what did the sunset looked like in Lapland in June?"
Critics of the Quantified Self Movement argue that, as well as being narcissistic, it makes people live for technology rather than use technology to help them live. The German writer Juli Zeh reportedly described it as "self-empowerment by self-enslavement".
Others have also raised concerns about privacy, but Memoto said its product would have strict controls and encouraged owners to exercise restraint.
"If someone asks you not to use your Memoto camera - then please don't," it said.
"If someone doesn't explicitly ask you, but you have reason to believe that the place or the context is inappropriate for photographing - then please don't." ||||| Would you record your life in a series of geotagged photos? Swedish startup Memoto is betting enough people will want digital memory assistance, and their first-day performance on Kickstarter backs them up.
Life-logging is not a new idea. Steve Mann has been working on it since the 1980s, as has Gordon Bell since the 1990s. Just two months ago, a Microsoft patent application showed how Redmond is trying to capitalize on its years of research in the field of computer-assisted life-recording, and Google clearly has similar applications in mind with Project Glass.
One of the big issues with the concept is the size and wearability of the recording device, which is why an upcoming product from a Swedish company called Memoto is so exciting. They launched a Kickstarter project earlier today and achieved their $50k goal within five hours. At the time of writing they had almost doubled that amount, and are now into a $150k stretch goal phase where backers will get to choose a new colour for the postage-stamp-sized device.
Here’s Memoto’s Kickstarter video:
According to Memoto CEO Martin Källström, who was previously head of the blog search engine Twingly, the company has already benefited from a €500k seed round from Passion Capital and angels such as Amen CEO Felix Petersen. That was enough to start paying the Memoto team – now the Kickstarter cash will see the device become reality.
Not only that: Källström also sees the Kickstarter success as validation for the idea ahead of a Series A round this winter.
How does it work?
The Memoto device measures 36x36x9mm and contains a five-megapixel camera, a GPS unit, an accelerometer and 8GB of storage – enough for two days’ worth of photos, seeing as the device takes a photo every 30 seconds. It will cost $279, or nothing for early backers who give $199 or more.
The user will need to hook the device up to their computer every couple of days, both to upload the photos and recharge the battery. The photos will go onto Memoto’s servers and be made accessible for time-lapse-style playback through smartphone apps.
Crucially for security, the photos will be encrypted and inaccessible even to Memoto’s analytics systems while the user is not logged in. During that log-in time, Memoto’s systems will be able to pick out key frames that represent “moments”, for example the four hours where the user is sitting in front of their computer – these are the frames that will be shown when the user is trying to sift through everything they did on a certain day.
The company reckons each device will generate 1.5TB of data a year (although they will of course be able to delete what they don’t want to remember). Users will pay an annual subscription for the photos’ storage, providing Memoto with a second revenue stream on top of device sales.
“If there weren’t a customized storage service available, it would be a big problem for everyday consumers to keep all the data on-hand in a safe and secure way,” Källström told me.
Keeping memories alive
So, what happens if the user falls on hard times and can’t keep up with their subscription fees? Would that mean the digital equivalent of losing your memory?
“We see that the photos we are storing will be very valuable data and we will do everything to make sure that no mistakes or unfortunate circumstances will cause any data loss,” Källström said. “There will be three ways to get the data down from our storage service: you can download individual images through the app interface in full resolution, you can download in bulk, and there will also be an API where third-party developers can build their own lifelogging apps.”
And what about those patents? Källström reckons the idea of a wearable camera is now well-established enough that there are “no patents hindering new applications in that space”.
Hopefully he’s right. What we’re looking at here is a realization of Microsoft’s old SenseCam project, made realistic – if still a bit pricey – for consumers. Even Gordon Bell is quoted in Memoto’s press release as endorsing the thing.
“A small, wearable, geo-aware camera with pictures going to the cloud is just what we need for life-logging of life’s events. I’m anxious to try the Memoto camera,” Bell said.
High praise indeed – now Memoto just needs to live up to those expectations. We’ll see after the commercial launch early next year. ||||| Our Kickstarter campaign is over, but you can still get a Memoto Camera. Head over to Memoto.com for more info and to place an order.
Thank you all backers!
Thanks to you, the Memoto Lifelogging Camera was successfully funded!
This Kickstarter page will be frozen on November 30, 2012. To follow our journey ahead, follow us on Twitter (@memototeam) and on Facebook (fb.com/Memotocompany).
Orders are now available through Memoto.com.
––––––––––––––––––––––––––––––––––––––––––––––––––––––––––
Remember every moment.
Have you thought about how much of life that goes missing from your memories? Many fantastic and special moments become blurred together after a while and it feels like life just rushes by, too fast for us to grasp.
We at Memoto wanted to find a way to relive more of our lives in the future - and enjoy the present as it happens.
The functional prototype
The Memoto camera comes in three different colors: Arctic White, Graphite Grey and Memoto Orange
The Functional Prototype
Hardware devices consist of mechanics, electronics and firmware. The mechanics are often overlooked; the 3D-printed prototypes in our Kickstarter video took two engineers and one industrial designer over four months to finalize. The result is a production-ready construction of a weather-protected, very strong and well-designed camera casing.
The electronics have taken even longer to develop. We have used evaluation kits and break-out boards to put together a complete set of components in order to be able to start development of the firmware. At the same time we have made a circuit board design that fits all the components into a very tiny package.
What does the functional prototype do?
The prototype does everything the final camera will do, taking photos and registering GPS position.
Sample photo and GPS data from the Memoto prototype camera
Miniaturizing the electronics
Not only has the miniaturized circuit board been designed by one of Sweden's foremost digital camera engineers, who works on our team, but it is also undergoing careful review and approval from these outside sources:
The head of engineering at our Taiwanese manufacturer, Yomura, to make sure it fits into the camera casing without causing any problems during assembly or long term use
The engineer that created the Anoto pen, in order to verify our power management and battery life
The engineer behind the electronics in the Mutewatch watch, to verify overall build quality and circuit wiring
The GPS chip manufacturer, to verify that the GPS will function properly and to compare it with their reference design
The GPS antenna manufacturer, to verify our antenna placement in relation to other components and the casing
The PCB, currently awaiting review and quotes from manufacturers.
The 8-layer High-Density-Interconnect printed circuit board with laser-drilled blind and buried vias
Want more details? Here's a word from our engineers:
"The Memoto camera is small and uses fine-pitch BGA components with as small as 0.5 mm (0.02") distance between the pads. This required the use of an 8-layer IPC-2226 Type-II HDI (High Density Interconnect) PCB of 4/4 mil track/space, with filled and capped laser-drilled partial vias of 0.1 mm (0.004") diameter between the outermost layers as well as buried vias, which are mechanically drilled partial vias between layers inside the PCB. This via technology, including placing vias inside the BGA pads, is required to avoid interfering with routing on the top and bottom component layers and make room for fine-pitch BGA escape. Due to the presence of high-speed USB tracks and antenna signals on the PCB, it also has been impedance controlled which involves analysis of how the electric fields behave in the metal layers relative the other dielectric layers in the PCB."
The world's smallest wearable camera
The Memoto camera is a tiny camera and GPS that you clip on and wear. It’s an entirely new kind of digital camera with no controls. Instead, it automatically takes photos as you go. The Memoto app then seamlessly and effortlessly organizes them for you.
Easy and effortless
The camera has no buttons. (That's right, no buttons.) As long as you wear the camera, it is constantly taking pictures. It takes two geotagged photos a minute, with recorded orientation so that the app can show them upright no matter how you are wearing the camera. And it's weather protected, so you don't have to worry about it in inclement weather.
The camera and the app work together to give you pictures of every single moment of your life, complete with information on when you took it and where you were. This means that you can revisit any moment of your past.
The Memoto lifelogging camera - only 36x36x9 mm
Long battery life
The camera's batteries won't need to be recharged until after approximately 2 days of use (using the default photo frequency of 2 shots per minute). To recharge the camera's batteries, you connect the camera to your computer; at the same time the photos are automatically uploaded to Memoto's servers. There are no buttons to press. You just wear the camera, then charge it and wear it again.
Access your life through the Memoto app
With this many pictures captured and stored every day, we think it's crucial that you can easily browse among the best and most meaningful ones. The app we're building for iPhone and Android organizes the photos to work as a photographic memory even after many years.
Concept images of the iPhone app (photos taken with an iPhone 4S). From left: Timeline view with notifications, Map display, Social timeline view
Concept images of the Android app. Left: Login screen. Right: Moment view in private mode.
Relive your life like you remember it
The way this works is that the photos are organized into groups of "moments" on a timeline. On the timeline, you're presented with keyframes (about 30 per day), each representing one moment. You can tap a moment to relive it in a stop-motion-like video of all the pictures in that moment. The image analysis and organization is made out of the images' metadata, such as time, place and light. This enables you to not only browse your life the way you remember it, but to search for specific events of your life: who was it that you met at that party, or what did the sunset look like in Lapland in June? (Fun fact: there is no sunset in Lapland in June.)
The app organizes all your photos on a timeline, making them easily accessible to search and share.
Your photos are yours and only you can share them
The app comes with features for sharing through the biggest social media services. However, we want to stress that your Memoto pictures will always be private by default. That is, you only share pictures when you deliberately want to share them.
Cutting your storage costs at least in half
The Memoto Camera potentially produces a huge amount of bits and bytes. 4 GB of data per day amounts to up to 1.5 terabytes per year. Instead of you storing all this on unreliable and expensive hard drives that can get stolen or lost, Memoto offers safe and secure infinite photo storage at a flat monthly fee, which will always be a lot more affordable than hard drives. For Kickstarter backers, the first year of storage is included in the reward!
Technical specifications
Camera
Automatic photo capture at customizable frequency (default frequency is once every 30 seconds)
Double-tap on the camera case to manually take a photo and bookmark it
5 megapixel resolution images
Log of GPS positions and timestamps
Built-in rechargeable battery which lasts up to two days
LED battery life indicator
Memory space for 2 full days of constant photographing (4,000 pictures)
Built-in accelerometer ensures that pictures are correctly oriented regardless of how the camera is worn
Micro-USB port for charging and connecting to computer
Stainless steel clip to connect the camera to your clothes
36x36x9 millimeters small
Software | If you think the digital age already has too much sharing and too little privacy, you might look away now. If not, embrace Memoto, a small wearable camera designed to snap a photo every 30 seconds, then sync with apps to organize the pics into an easily accessible digital diary. Business Insider thinks it's a little "creepy," while GigaOm calls it "exciting." Either way, Memoto is going to be reality soon, because it exceeded its $50,000 funding goal on Kickstarter in a mere five hours. The maker's premise is that people often don't realize that a moment was special until long after the fact, so wouldn't it be nice to record pretty much everything just in case? The concept is called lifelogging, and while it's not new, Memoto could be the most practical application of it so far. Expect the devices on the market next year for $279.
On any given day in the United States, at least 137,000 people sit behind bars on simple drug-possession charges, according to a report released Wednesday by the American Civil Liberties Union and Human Rights Watch.
Nearly two-thirds of them are in local jails. The report says that most of these jailed inmates have not been convicted of any crime: They're sitting in a cell, awaiting a day in court, an appearance that may be months or even years off, because they can't afford to post bail.
"It's been 45 years since the war on drugs was declared, and it hasn't been a success," lead author Tess Borden of Human Rights Watch said in an interview. "Rates of drug use are not down. Drug dependency has not stopped. Every 25 seconds, we're arresting someone for drug use."
Federal figures on drug arrests and drug use over the past three decades tell the story. Drug-possession arrests skyrocketed, from fewer than 200 arrests for every 100,000 people in 1979 to more than 500 in the mid-2000s. The drug-possession rate has since fallen slightly, according to the FBI, hovering near 400 arrests per 100,000 people.
Defenders of harsh penalties for drug possession say they are necessary to deter people from using drugs and to protect public health. But despite the tough-on-crime push that led to the surge in arrests in recent decades, illicit drug use today is more common among Americans age 12 and older than it was in the early 1980s. Federal figures show no correlation between drug-possession arrests and rates of drug use during that time.
But the ACLU and Human Rights Watch report shows that arrests for drug possession continue to make up a significant chunk of modern-day police work.
"Around the country, police make more arrests for drug possession than for any other crime," the report finds, citing FBI data. "More than one of every nine arrests by state law enforcement is for drug possession, amounting to more than 1.25 million arrests each year."
In fact, police make more arrests for marijuana possession alone than for all violent crimes combined.
The report finds that the laws are enforced unequally, too. Over their lifetimes, black and white Americans use illicit drugs at similar rates, according to federal data. But black adults were more than 2½ times as likely to be arrested for drug possession.
"We can't talk about race and policing in this country without talking about the No. 1 arrest offense," Borden said.
The report calls for decriminalizing the personal use and possession of drugs, treating it as a public-health matter instead of a criminal one.
"Rather than promoting health, criminalization can create new barriers to health for those who use drugs," the report says. "Criminalization drives drug use underground; it discourages access to emergency medicine, overdose prevention services, and risk-reducing practices such as syringe exchanges."
The report reinforces its point by noting the lengthy sentences handed down in some states for possession of small amounts of drugs.
For example, it sketches the history of Corey J. Ladd, who was arrested for possessing half an ounce of marijuana during a 2011 traffic stop in New Orleans. Because he had convictions for two prior offenses involving the possession of small amounts of hydrocodone and LSD, he was sentenced in 2013 to 17 years in prison as a "habitual offender." He is currently appealing the sentence to Louisiana's Supreme Court.
"Corey's story is about the real waste of human lives, let alone taxpayer money, of arrest and incarceration for personal drug use," Borden said. "He could be making money and providing for his family."
But Ladd's treatment is far from the harshest drug-possession sentence uncovered by ACLU and Human Rights Watch researchers, who conducted analyses of arrest and incarceration data from Florida, New York and Texas.
In Texas, for instance, 116 people are currently serving life sentences on charges of simple drug possession. Seven of those people earned their sentences for possessing quantities of drugs weighing between 1 gram and 4 grams, or less than a typical sugar packet. That's because Texas also has a habitual-offender law, allowing prosecutors to seek longer-than-normal sentences for people who have two prior felonies.
"In 2015, more than 78 percent of people sentenced to incarceration for felony drug possession in Texas possessed under a gram," the report found. ||||| He said he had been turned down by a fast-food restaurant because of his marijuana conviction, as well as at the restaurant where he worked before his last arrest as a fry cook and dishwasher. “I’ve kind of stopped trying,” said Cory, who is African-American.
Tess Borden, a fellow at Human Rights Watch and the A.C.L.U., who wrote the report, found that despite the steep decline in crime rates over the last two decades — including a 36 percent drop in violent crime arrests from 1995 to 2015 — the number of arrests for drug possession, including marijuana possession, increased 13 percent.
The emphasis on making marijuana arrests is worrisome, Ms. Borden said.
“Most people don’t think drug possession is the No. 1 public safety concern, but that’s what we’re seeing,” she said.
Criminologists say that African-Americans are arrested more often than whites and others for drug possession in large part because of questionable police practices.
Police departments, for example, typically send large numbers of officers to neighborhoods that have high crime rates. A result is that any offense — including minor ones like loitering, jaywalking or smoking marijuana — can lead to an arrest, which in turn drives up arrest rate statistics, leading to even greater police vigilance.
“It is selective enforcement, and the example I like to use is that you have all sorts of drug use inside elite college dorms, but you don’t see the police busting through doors,” said Inimai M. Chettiar, director of the Justice Program at New York University’s Brennan Center for Justice.
African-Americans may also be more apt to face arrest, according to researchers, because they might be more likely to smoke marijuana outdoors, attracting the attention of the police. |||||
Summary
Neal Scott may die in prison. A 49-year-old Black man from New Orleans, Neal had cycled in and out of prison for drug possession over a number of years. He said he was never offered treatment for his drug dependence; instead, the criminal justice system gave him time behind bars and felony convictions—most recently, five years for possessing a small amount of cocaine and a crack pipe. When Neal was arrested in May 2015, he was homeless and could not walk without pain, struggling with a rare autoimmune disease that required routine hospitalizations. Because he could not afford his $7,500 bond, Neal remained in jail for months, where he did not receive proper medication and his health declined drastically—one day he even passed out in the courtroom. Neal eventually pled guilty because he would face a minimum of 20 years in prison if he took his drug possession case to trial and lost. He told us that he cried the day he pled, because he knew he might not survive his sentence.
***
Just short of her 30th birthday, Nicole Bishop spent three months in jail in Houston for heroin residue in an empty baggie and cocaine residue inside a plastic straw. Although the prosecutor could have charged misdemeanor paraphernalia, he sought felony drug possession charges instead. They would be her first felonies. Nicole was separated from her three young children, including her breastfeeding newborn. When the baby visited Nicole in jail, she could not hear her mother's voice or feel her touch because there was thick glass between them. Nicole finally accepted a deal from the prosecutor: she would do seven months in prison in exchange for a guilty plea for the 0.01 grams of heroin found in the baggie, and he would dismiss the straw charge. She would return to her children later that year, but as a "felon" and "drug offender." As a result, Nicole said she would lose her student financial aid and have to give up pursuit of a degree in business administration. She would have trouble finding a job and would not be able to have her name on the lease for the home she shared with her husband. She would no longer qualify for the food stamps she had relied on to help feed her children. As she told us, she would end up punished for the rest of her life.
***
Every 25 seconds in the United States, someone is arrested for the simple act of possessing drugs for their personal use, just as Neal and Nicole were. Around the country, police make more arrests for drug possession than for any other crime. More than one of every nine arrests by state law enforcement is for drug possession, amounting to more than 1.25 million arrests each year. And despite officials' claims that drug laws are meant to curb drug sales, four times as many people are arrested for possessing drugs as are arrested for selling them. As a result of these arrests, on any given day at least 137,000 men and women are behind bars in the United States for drug possession, some 48,000 of them in state prisons and 89,000 in jails, most of the latter in pretrial detention.
Each day, tens of thousands more are convicted, cycle through jails and prisons, and spend extended periods on probation and parole, often burdened with crippling debt from court-imposed fines and fees. Their criminal records lock them out of jobs, housing, education, welfare assistance, voting, and much more, and subject them to discrimination and stigma. The cost to them and to their families and communities, as well as to the taxpayer, is devastating. Those impacted are disproportionately communities of color and the poor.
This report lays bare the human costs of criminalizing personal drug use and possession in the US, focusing on four states: Texas, Louisiana, Florida, and New York. Drawing from over 365 interviews with people arrested and prosecuted for their drug use, attorneys, officials, activists, and family members, and extensive new analysis of national and state data, the report shows how criminalizing drug possession has caused dramatic and unnecessary harms in these states and around the country, both for individuals and for communities that are subject to discriminatory enforcement.
There are injustices and corresponding harms at every stage of the criminal process, harms that are all the more apparent when, as often happens, police, prosecutors, or judges respond to drug use as aggressively as the law allows. This report covers each stage of that process, beginning with searches, seizures, and the ways that drug possession arrests shape interactions with and perceptions of the police—including for the family members and friends of individuals who are arrested. We examine the aggressive tactics of many prosecutors, including charging people with felonies for tiny, sometimes even "trace" amounts of drugs, and detail how pretrial detention and long sentences combine to coerce the overwhelming majority of drug possession defendants to plead guilty, including, in some cases, individuals who later prove to be innocent. The report also shows how probation and criminal justice debt often hang over people's heads long after their conviction, sometimes making it impossible for them to move on or make ends meet. Finally, through many stories, we recount how harmful the long-term consequences of incarceration and a criminal record that follow a conviction for drug possession can be—separating parents from young children and excluding individuals and sometimes families from welfare assistance, public housing, voting, employment opportunities, and much more.
Families, friends, and neighbors understandably want government to take actions to prevent the potential harms of drug use and drug dependence. Yet the current model of criminalization does little to help people whose drug use has become problematic. Treatment for those who need and want it is often unavailable, and criminalization tends to drive people who use drugs underground, making it less likely that they will access care and more likely that they will engage in unsafe practices that make them vulnerable to disease and overdose. While governments have a legitimate interest in preventing problematic drug use, the criminal law is not the solution. Criminalizing drug use simply has not worked as a matter of practice. Rates of drug use fluctuate, but they have not declined significantly since the "war on drugs" was declared more than four decades ago.
The criminalization of drug use and possession is also inherently problematic because it represents a restriction on individual rights that is neither necessary nor proportionate to the goals it seeks to accomplish. It punishes an activity that does not directly harm others. Instead, governments should expand public education programs that accurately describe the risks and potential harms of drug use, including the potential to cause drug dependence, and should increase access to voluntary, affordable, and evidence-based treatment for drug dependence and other medical and social services outside the court and prison system.

After decades of “tough on crime” policies, there is growing recognition in the US that governments need to undertake meaningful criminal justice reform and that the “war on drugs” has failed. This report shows that although taking on parts of the problem—such as police abuse, long sentences, and marijuana reclassification—is critical, it is not enough: Criminalization is simply the wrong response to drug use and needs to be rethought altogether. Human Rights Watch and the American Civil Liberties Union call on all states and the federal government to decriminalize the use and possession for personal use of all drugs and to focus instead on prevention and harm reduction. Until decriminalization has been achieved, we urge officials to take strong measures to minimize and mitigate the harmful consequences of existing laws and policies. The costs of the status quo, as this report shows, are too great to bear.

A National Problem

All US states and the federal government criminalize possession of illicit drugs for personal use. While some states have decriminalized possession of small amounts of marijuana, other states still make marijuana possession a misdemeanor or even a felony. In 42 states, possession of small amounts of most illicit drugs other than marijuana is either always or sometimes a felony offense. Only eight states and the District of Columbia make possession of small amounts a misdemeanor. Not only do all states criminalize drug possession; they also all enforce those laws with high numbers of arrests and in racially discriminatory ways, as evidenced by new analysis of national and state-level data obtained by Human Rights Watch.

Aggressive Policing

More than one of every nine arrests by state law enforcement is for drug possession, amounting to more than 1.25 million arrests per year. While the bulk of drug possession arrests are in large states such as California, which made close to 200,000 arrests for drug possession in 2014, Maryland, Nebraska, and Mississippi have the highest per capita drug possession arrest rates. Nationwide, rates of arrest for drug possession range from 700 per 100,000 people in Maryland to 77 per 100,000 in Vermont.

Despite shifting public opinion, in 2015, nearly half of all drug possession arrests (over 574,000) were for marijuana possession. By comparison, there were 505,681 arrests for violent crimes (which the FBI defines as murder, non-negligent manslaughter, rape, robbery, and aggravated assault). This means that police made more arrests for simple marijuana possession than for all violent crimes combined. Data presented for the first time in this report shows stark differences in arrest rates for drug possession even within the same state.
For example, data provided to us by Texas shows that 53 percent of drug possession arrests in Harris County (in and around Houston) were for marijuana, compared with 39 percent in nearby Dallas County, despite similar drug use rates in the two counties. In New York State, the counties with the highest drug possession arrest rates by a large margin were all in and around urban areas of New York City and Buffalo. In Florida, the highest rates of arrest were spread around the state in rural Bradford County, urban Miami-Dade County, Monroe County (the Keys), rural Okeechobee County, and urban Pinellas County. In Texas, counties with the highest drug possession arrest rates were all small rural counties. Kenedy County, for example, has an adult population of 407 people, yet police there made 329 arrests for drug possession between 2010 and 2015. In each of these states, there is little regional variation in drug use rates.

The sheer magnitude of drug possession arrests means that they are a defining feature of the way certain communities experience police in the United States. For many people, drug laws shape their interactions with and views of the police and contribute to a breakdown of trust and a lack of security. This was particularly true for Black and Latino people we interviewed.

Racial Discrimination

Over the course of their lives, white people are more likely than Black people to use illicit drugs in general, as well as marijuana, cocaine, heroin, methamphetamines, and prescription drugs (for non-medical purposes) specifically. Data on more recent drug use (for example, in the past year) shows that Black and white adults use illicit drugs other than marijuana at the same rates and that they use marijuana at similar rates. Yet around the country, Black adults are more than two-and-a-half times as likely as white adults to be arrested for drug possession. In 2014, Black adults accounted for just 14 percent of those who used drugs in the previous year but close to a third of those arrested for drug possession. In the 39 states for which we have sufficient police data, Black adults were more than four times as likely as white adults to be arrested for marijuana possession. In every state for which we have sufficient data, Black adults were arrested for drug possession at higher rates than white adults, and in many states the disparities were substantially higher than the national rate—over 6 to 1 in Montana, Iowa, and Vermont. In Manhattan, Black people are nearly 11 times more likely than white people to be arrested for drug possession.

Darius Mitchell, a Black man in his 30s, was among those targeted in Louisiana. He recounted his story to us as follows: Late one night in Jefferson Parish, Darius was driving home from his child’s mother’s house. An officer pulled him over, claiming he was speeding. When Darius said he was sure he was not, the officer said he smelled marijuana. He asked whether he could search, and Darius said no. Another officer and a canine came and searched his car anyway. They yelled, “Where are the pounds?” suggesting he was a marijuana dealer. The police never found marijuana, but they found a pill bottle in Darius’ glove compartment, with his child’s mother’s name on it. Darius said that he had driven her to the emergency room after an accident, and she had been prescribed hydrocodone, which she forgot in the car. Still, the officers arrested him and he was prosecuted for drug possession, his first felony charge. He faced up to five years in prison.
Darius was ultimately acquitted at trial, but months later he remained in financial debt from his legal fees, was behind on rent and utility bills, and had lost his cable service, television, and furniture. He still had an arrest record, and the trauma and anger of being unfairly targeted.

Small-Scale Drug Use: Prosecutions for Tiny Amounts

We interviewed over 100 people in Texas, Louisiana, Florida, and New York who were prosecuted for small quantities of drugs—in some cases, fractions of a gram—that were clearly for personal use. Particularly in Texas and Louisiana, prosecutors did more than simply pursue these cases—they often selected the highest charges available and went after people as hard as they could.

In 2015, according to data we analyzed from Texas courts, nearly 16,000 people were sentenced to incarceration for drug possession at the “state jail felony” level—defined as possession of under one gram of substances containing commonly used drugs, including cocaine, heroin, methamphetamine, PCP, oxycodone, MDMA, mescaline, and mushrooms (or between 4 ounces and 5 pounds of marijuana). One gram, the weight of less than one-fourth of a sugar packet, is enough for only a handful of doses for new users of many drugs. Data presented here for the first time suggests that in 2015, more than 78 percent of people sentenced to incarceration for felony drug possession in Texas possessed under a gram. Possibly thousands more were prosecuted and put on probation, potentially with felony convictions. In Dallas County, the data suggests that nearly 90 percent of possession defendants sentenced to incarceration possessed under a gram.

The majority of the 30 defendants we interviewed in Texas had substantially less than a gram of illicit drugs in their possession when they were arrested: not 0.9 or 0.8 grams, but sometimes 0.2, 0.02, or a result from the lab reading “trace,” meaning that the amount was too small even to be measured. One defense attorney in Dallas told us a client was charged with drug possession in December 2015 for 0.0052 grams of cocaine. The margin of error for the lab that tested it is 0.0038 grams, meaning it could have weighed as little as 0.0014 grams, or 35 hundred-thousandths (0.00035) of a sugar packet. Bill Moore, a 66-year-old man in Dallas, is serving a three-year prison sentence for 0.0202 grams of methamphetamines. In Fort Worth, Hector Ruiz was offered six years in prison for an empty bag that had heroin residue weighing 0.007 grams. In Granbury, Matthew Russell was charged with possession of methamphetamines for an amount so small that the laboratory result read only “trace.” The lab technician did not even assign a fraction of a gram to it.

A System that Coerces Guilty Pleas

In 2009 (the most recent year for which national data is available), more than 99 percent of people convicted of drug possession in the 75 largest US counties pled guilty. Our interviews and data analysis suggest that in many cases, high bail—particularly for low-income defendants—and the threat of long sentences render the right to a jury trial effectively meaningless. Data we obtained from Florida and Alabama reveals that, at least in those two states, the majority of drug possession defendants were poor enough to qualify for court-appointed counsel. Yet in 2009, drug possession defendants in the 75 largest US counties had an average bail of $24,000 (for those detained, average bail was $39,900).
For lower-income defendants, such high bail often means they must remain in jail until their case is over. For defendants with little to no criminal history, or in relatively minor cases, prosecutors often offer probation, relatively short sentences, or “time served.” For those who cannot afford bail, this means a choice between fighting their case from jail or taking a conviction and walking out the door. In Galveston, Texas, Breanna Wheeler, a single mother, pled to probation and her first felony conviction against her attorney’s advice. They both said she had a strong case that could be won in pretrial motions, but her attorney had been waiting months for the police records, and Breanna needed to return home to her 9-year-old daughter. In New York City, Deon Charles told us he pled guilty because his daughter had just been born that day and he needed to see her.

For others, the risk of a substantially longer sentence at trial means they plead to avoid the “trial penalty.” In New Orleans, Jerry Bennett pled guilty to possession of half a gram of marijuana, accepting a two-and-a-half-year prison sentence, because he faced 20 years if he lost at trial: “They spooked me out by saying, ‘You gotta take this or you’ll get that.’ I’m just worried about the time. Imagine me in here for 20 years. They got people that kill people. And they put you up here for half a gram of weed.”

For the minority of people we interviewed who exercised their right to trial, the sentences they received in Louisiana and Texas were shocking. In New Orleans, Corey Ladd was sentenced as a habitual offender to 17 years for possessing half an ounce of marijuana. His prior convictions were for possession of small amounts of LSD and hydrocodone, for which he got probation both times. In Granbury, Texas, after waiting 21 months in jail to take his case to trial, Matthew Russell was sentenced to 15 years for a trace amount of methamphetamines. According to him and his attorney, his priors were mostly out-of-state and related to his drug dependence.

Incarceration for Drug Possession

At year-end 2014, over 25,000 people were serving sentences in local jails and another 48,000 were serving sentences in state prisons for drug possession nationwide. The number admitted to jails and prisons at some point over the course of the year was significantly higher. As with arrests, there were sharp racial disparities. In 2002 (the most recent year for which national jail data is available), Black people were over 10 times more likely than white people to be in jail for drug possession. In 2014, Black people were nearly six times more likely than white people to be in prison for drug possession.

Our analysis of data from Florida, Texas, and New York, presented here for the first time, shows that the majority of people convicted of drug possession in these states are sentenced to some form of incarceration. Because each dataset is different, they show us different things. For example, our data suggests that in Florida, 75 percent of people convicted of felony drug possession between 2010 and 2015 had little to no prior criminal history. Yet 84 percent of people convicted of these charges were sentenced to prison or jail. In New York State, between 2010 and 2015, the majority of people convicted of drug possession were sentenced to some period of incarceration. At year-end 2015, one of sixteen people in custody in New York State was incarcerated for drug possession. Of those, 50 percent were Black and 28 percent Latino.
In Texas, between 2012 and 2016, approximately one of eleven people in prison had drug possession as their most serious offense; two of every three people serving time for drug charges were there for drug possession; and 116 people had received life sentences for drug possession, at least seven of which were for an amount weighing between one and four grams.

For people we spoke to, the prospect of spending months or years in jail or prison was overwhelming. For most, the well-being of family members in their absence was also a source of constant concern, sometimes more vivid for them than the experience of jail or prison itself. Parents told us they worried about children growing up without them. Some described how they missed seeing their children but did not let them visit jail or prison because they were concerned the experience would be traumatizing. Others described the anguish of no-contact jail visits, where they could see and hear but not reach out and touch their young children’s hands. Some worried about partners and spouses, for whom their incarceration meant lost income and lost emotional and physical support.

In Covington, Louisiana, Tyler Marshall’s wife has a disability, and he told us his absence took a heavy toll. “My wife, I cook for her, clean for her, bathe her, clothe her…. Now everything is on her, from the rent to the bills, everything…. She’s behind [on rent] two months right now. She’s disabled and she’s doing it all by herself.”

In New Orleans, Corey Ladd was incarcerated when his girlfriend was eight months pregnant. He saw his infant daughter Charlee for the first time in a courtroom and held her for the first time in the infamous Angola prison. She is four now and thinks she visits her father at work. “She asks when I’m going to get off work and come see her,” Corey told us. He is a skilled artist and draws Charlee pictures. In turn, Charlee brings him photos of her dance recitals and in the prison visitation hall shows him new dance steps she has learned. Corey, who is currently serving 17 years for marijuana possession, may never see her onstage.

Probation, Criminal Justice Debt, and Collateral Consequences

Even for those not sentenced to jail or prison, a conviction for drug possession can be devastating, due to onerous probation conditions, massive criminal justice debt, and a wide range of restrictions flowing from the conviction (known in the literature as “collateral consequences”). Many defendants, particularly those with no prior convictions, are offered probation instead of incarceration. Although probation is a lesser penalty, interviewees in Florida, Louisiana, and Texas told us they felt “set up to fail” on probation, due to the enormous challenges involved in satisfying probation conditions (for example, frequent meetings at distant locations that make it impossible for probationers to hold down a job, but require that they earn money to pay for travel and fees). Some defense attorneys told us that probation conditions were so onerous and unrealistic that they would counsel clients to take a short jail or prison sentence instead. A number of interviewees said if they were offered probation again, they would choose incarceration; others said they knew probation would be too hard and so chose jail time.
At year-end 2014, the US Department of Justice reported that 570,767 people were on probation for drug law violations (the data does not distinguish between possession and sales), accounting for close to 15 percent of the entire state probation population around the country. In some states, drug possession is a major driver of probation. In Missouri, drug possession is by far the single largest category of felony offenses receiving probation, accounting for 9,500 people or roughly 21 percent of the statewide probation total. Simple possession is also the single largest driver in Florida, accounting for nearly 20,000 cases or 14 percent of the statewide probation total. In Georgia, possession offenses accounted for 17 percent of new probation starts in 2015 and roughly 16 percent of the standing probation population statewide at mid-year 2016.

In addition to probation fees (if they are offered probation), people convicted of drug possession are often saddled with crippling court-imposed fines, fees, costs, and assessments that they cannot afford to pay. These can include court costs, public defender application fees, and surcharges on incurred fines, among others. They often come on top of the price of bail (if defendants can afford it), income-earning opportunities lost due to incarceration, and the financial impact of a criminal record. For those who choose to hire an attorney, the costs of defending their case may have already left them in debt or struggling to make ends meet for months or even years to come.

A drug conviction also keeps many people from getting a job, renting a home, and accessing benefits and other programs they may need to support themselves and their families—and to enjoy full civil and social participation. Federal law allows states to lock people out of welfare assistance and public housing for years and sometimes even for life based on a drug conviction. People convicted of drug possession may no longer qualify for educational loans; they may be forced to rely on public transport because their driver’s license is automatically suspended; they may be banned from juries and the voting booth; and they may face deportation if they are not US citizens, no matter how many decades they have lived in the US or how many of their family members live in the country. In addition, they must bear the stigma associated with the labels of “felon” and “drug offender” the state has stamped on them, subjecting them to private discrimination in their daily interactions with landlords, employers, and peers.

A Call for Decriminalization

As we argue in this report, laws criminalizing drug use are inconsistent with respect for human autonomy and the right to privacy and contravene the human rights principle of proportionality in punishment. In practice, criminalizing drug use also violates the right to health of those who use drugs. The harms experienced by people who use drugs, and by their families and broader communities, as a result of the enforcement of these laws may constitute additional, separate human rights violations.

Criminalization has yielded few, if any, benefits. Criminalizing drugs is not an effective public safety policy. We are aware of no empirical evidence that low-level drug possession defendants would otherwise go on to commit violent crimes. And states have other tools at their disposal—for example, existing laws that criminalize driving under the influence or child endangerment—to address any harmful behaviors that may accompany drug use.
Criminalization is also a counterproductive public health strategy. Rates of drug use across drug types in the US have not decreased over the past decades, despite widespread criminalization. For people who struggle with drug dependence, criminalization often means cycling in and out of jail or prison, with little to no access to voluntary treatment. Criminalization undermines the right to health, as fear of law enforcement can drive people who use drugs underground, deterring them from accessing health services and emergency medicine and leading to illness and sometimes fatal overdose.

It is time to rethink the criminalization paradigm. Although the amount cannot be quantified, the enormous resources spent to identify, arrest, prosecute, sentence, incarcerate, and supervise people whose only offense has been possession of drugs are hardly money well spent, and their expenditure has caused far more harm than good. Some state and local officials we interviewed recognized the need to end the criminalization of drug use and to develop a more rights-respecting approach to drugs. Senior US officials have also emphasized the need to move away from approaches that punish people who use drugs. Fortunately, there are alternatives to criminalization. Other countries—and even some US states with respect to marijuana—are experimenting with models of decriminalization that the US can examine to chart a path forward.

Ending criminalization of simple drug possession does not mean turning a blind eye to the misery that drug dependence can cause in the lives of those who use drugs and of their families. On the contrary, it requires a more direct focus on effective measures to prevent problematic drug use, reduce the harms associated with it, and support those who struggle with dependence. Ultimately, the criminal law does not achieve these important ends, and causes additional harm and loss instead. It is time for the US to rethink its approach to drug use.
Key Recommendations

Human Rights Watch and the American Civil Liberties Union call on federal and state legislatures to end the criminalization of the personal use of drugs and the possession of drugs for personal use. In the interim, we urge government officials at the local, state, and federal levels to adopt the recommendations listed below. These are all measures that can be taken within the existing legal framework to minimize the imposition of criminal punishment on people who use drugs, and to mitigate the harmful collateral consequences and social and economic discrimination experienced by those convicted of drug possession and by their families and communities. At the same time, officials should ensure that education on the risks and potential harms of drug use and affordable, evidence-based treatment for drug dependence are available outside of the criminal justice system.

Until full decriminalization is achieved, public officials should pursue the following:

State legislatures should amend relevant laws so that a drug possession conviction is never a felony and cannot be used as a sentencing enhancement or be enhanced itself by prior convictions, and so that no adverse collateral consequences attach by law for convictions for drug possession.
Legislatures should allocate funds to improve and expand harm reduction services and prohibit public and private discrimination in housing or employment on the basis of prior drug possession arrests or convictions.
To the extent permitted by law and by limits on the appropriate exercise of discretion, police should decline to make arrests for drug possession and should not stop, frisk, or search a person simply to find drugs for personal use. Police departments should not measure officer or department performance based on stop or arrest numbers or quotas and should incentivize and reward officer actions that prioritize the health and safety of people who use drugs.
To the extent permitted by law and by limits on the appropriate exercise of discretion, prosecutors should decline to prosecute drug possession cases, or at a minimum should seek the least serious charge supported by the facts or by law. Prosecutors should refrain from prosecuting trace or residue cases and should never threaten enhancements or higher charges to pressure drug possession defendants to plead guilty. They should not seek bail in amounts they suspect defendants will be unable to pay.
To the extent permitted by law and by limits on the appropriate exercise of discretion, judges should sentence drug possession defendants to non-incarceration sentences. Judges should release drug possession defendants on their own recognizance whenever appropriate; if bail is required, it should be set at a level carefully tailored to the economic circumstances of individual defendants.
To the extent permitted by law and by limits on the appropriate exercise of discretion, probation officers should not charge people on probation for drug offenses with technical violations for behavior that is a result of drug dependence. Where a legal reform has decreased the sentences for certain offenses but has not made the decreases retroactive, parole boards should consider the reform when determining parole eligibility.
The US Congress should amend federal statutes so that no adverse collateral consequences attach by law to convictions for drug possession, including barriers to welfare assistance and subsidized housing. It should appropriate sufficient funds to support evidence-based, voluntary treatment options and harm reduction services in the community.
The US Department of Justice should provide training to state law enforcement agencies clarifying that federal funding programs are not intended to and should not be used to encourage or incentivize high numbers of arrests for drug possession, and emphasizing that arrest numbers are not a valid measure of law enforcement performance.
Methodology

This report is the product of a joint initiative—the Aryeh Neier fellowship—between Human Rights Watch and the American Civil Liberties Union to strengthen respect for human rights in the United States. The report is based on more than 365 in-person and telephone interviews, as well as data provided to Human Rights Watch in response to public information requests.
Between October 2015 and March 2016, we conducted interviews in Louisiana, Texas, Florida, and New York City with 149 people prosecuted for their drug use. Human Rights Watch and the American Civil Liberties Union identified individuals who had been subjected to prosecution in those jurisdictions through outreach to service providers, defense attorneys, and advocacy networks as well as through observation of courtroom proceedings. In New York City, the majority of our interviews were conducted at courthouses or at the site of harm reduction and reentry programs. In Florida, Louisiana, and Texas, we met interviewees at detention facilities, drug courts, harm reduction and reentry programs, law offices, and restaurants in multiple counties.

In the three southern states, we conducted 64 interviews with people who were in custody—in local jails, state prisons, department of corrections work release or trustee facilities, and courthouse lock-ups across 13 jurisdictions. Within the jails and prisons, interviews took place in an attorney visit room to ensure confidentiality. Most interviews were conducted individually and in private. Group interviews were conducted with three families and with participants in drug court programs in New Orleans (in drug court classrooms) and St. Tammany Parish (in a private room in the courthouse). All individuals interviewed about their experience provided informed consent, and no incentive or remuneration was offered to interviewees.

We interviewed people with pending charges only with approval from their attorney, and we did not ask any questions about disputed facts or issues. In those cases, we explained to interviewees that we did not want them to tell us anything that could be used against them in their case. To protect the privacy and security of these interviewees, a substantial number of whom remain in custody, we decided to use pseudonyms in all but two cases. In many cases, we also withheld certain other identifying information. At their urging and because of unique factors in their cases, we have not used pseudonyms for Corey Ladd and Byron Augustine in New Orleans and St. Tammany Parish, Louisiana, respectively.

In addition, we conducted 23 in-person interviews with current or former state government officials, including judges, prosecutors, law enforcement, and corrections officers. We also had phone interviews and/or correspondence with US Department of Justice officials and additional state prosecutors. We interviewed nine family members of people currently in custody, as well as 180 defense attorneys, service providers (including those working for harm reduction programs such as syringe exchanges, voluntary treatment programs, and court-mandated treatment programs), and local and national advocates.

Where attorneys introduced us to clients with open cases, we reviewed court documents wherever possible and corroborated information with the attorney. In other cases, we also reviewed case files provided by defendants or available to the public online. However, because of the sheer number of interviews we conducted, limited public access to case information in some jurisdictions, and respect for individuals’ privacy, we did not review case information or contact attorneys for all interviewees. We also could not seek the prosecutor’s perspective on specific cases because of confidentiality. We therefore present people’s stories mostly as they and their attorneys told them to us.
Human Rights Watch submitted a series of data requests regarding arrests, prosecutions, case outcomes, and correctional population for drug offenses to a number of government bodies, including the Federal Bureau of Investigation (FBI) and various state court administrations, statistical analysis centers, sentencing commissions, departments of correction, clerks of court, and other relevant entities. We chose to make data requests based on which states had centralized systems and/or had statutes or criminal justice trends that were particularly concerning. Attempts to request data were made through email, facsimile, and/or phone to one or more entities in the following states: Alabama, California, Florida, Illinois, Indiana, Kentucky, Louisiana, Maryland, Mississippi, New York, Oklahoma, Texas, and Wisconsin. Among those, the FBI, Alabama Sentencing Commission, Florida Office of State Court Administrator, New York Division of Criminal Justice Services, New York Department of Corrections and Community Supervision, Texas Department of Criminal Justice, and Texas Office of Court Administration provided data to us.

Because of complications with the data file the FBI provided to us, we used the same set of data provided to and organized by the Inter-University Consortium for Political and Social Research. We also analyzed data available online from the US Department of Justice’s Bureau of Justice Statistics and some state agencies. As this report was going to press, the FBI released aggregated 2015 arrest data. We have used this 2015 data to update all nationwide arrest estimates for drug possession and other offenses in this report. However, for all state-by-state arrest and racial disparities analyses, we relied on 2014 data, as these analyses required disaggregated data as well as data from non-FBI sources, and 2014 remained the most recent year for which such data was available.

Although the federal government continues to criminalize possession of drugs for personal use, in practice comparatively few federal prosecutions are for possession. This report therefore focuses on state criminalization, although we call for decriminalization at all levels of government.

A note on state selection: We spent a month at the start of this project defining its scope and selecting states on which we would focus, informed by phone interviews with legal practitioners and state and national advocates, as well as extensive desk research. We chose to highlight Louisiana, Texas, Florida, and New York because of a combination of problematic laws and enforcement policies, availability of data and resources, and positive advocacy opportunities. This report focuses on Louisiana because it has the highest per capita imprisonment rate in the country and because of its problematic application of the state habitual offender law to drug possession, resulting in extreme sentences for personal drug use. The report focuses on Texas because of extensive concerns around its pretrial detention and jail system, its statutory classification of felony possession by weight, and its relatively softer treatment of marijuana possession as compared to other drugs. We also emphasize the potential for substantial criminal justice reform in both states—which stakeholders and policymakers are already considering—and the opportunity for state officials at all levels to set an example for others around the country.
While this report focuses more heavily on Louisiana and Texas, we draw extensively from data and interviews in Florida and New York. We selected Florida because of its experience with prescription painkiller laws and the codification of drug possession over a certain weight as “trafficking.” We selected New York as an example of a state in which low-level (non-marijuana) drug possession is a misdemeanor and does not result in lengthy incarceration, and yet criminalization continues to be extremely disruptive and harmful to those who use drugs and to their broader communities. New York shows us that reclassification of drug possession from a felony to a misdemeanor, while a positive step, is insufficient to end the harms of criminalization, especially related to policing and arrests.

As described in the Background section, all states criminalize drug possession, and the majority make it a felony offense. As our data shows, most if not all states also arrest in high numbers for drug possession and do so with racial disparities. Thus, although we did not examine the various stages of the criminal process in more than these four states, we do know that the front end (the initial arrest) looks similar in many states. It is likely, as people move through the criminal justice system, that many of the problems that we documented in New York, Florida, Texas, and Louisiana are also experienced to varying degrees in other states. At the same time, there may be additional problems in other states that we have not documented. Wherever possible throughout the report, we draw on data and examples from other states and at the national level.

We are grateful to officials in Texas, Florida, and New York for their transparency in providing us remarkable amounts of data at no cost. We regret our inability to obtain data from Louisiana. For 15 of Louisiana’s 41 judicial districts, plus the Orleans Criminal District Court, we made data requests to the clerk of court by phone, email, and/or facsimile. None was able to provide the requested information, and those who responded said they did not retain such data.

A note on terminology: Although most states have a range of offenses that criminalize drug use, this report focuses on criminal drug possession and drug paraphernalia as the most common offenses employed to prosecute drug use (other offenses in some states include, for example, ingestion or purchase of a drug). Our position on decriminalization—and the harm wrought by enforcement of drug laws—extends more broadly to all offenses criminalizing drug use. When we refer to “drug possession” in this report, we mean possession of drugs for personal use, as all state statutes we are aware of do. Like legal practitioners and others, we sometimes refer to it synonymously as “simple possession.” Possession of drugs for purposes other than personal use, such as for distribution, is typically noted as such in laws and conversation (for example, “possession with intent to distribute,” which we discuss in this report as well).

Not all drugs are criminalized: many substances are regulated by the US Food and Drug Administration (FDA), but are not considered “controlled substances” subject to criminalization. This report is about “illicit drugs” as they are understood in public discourse, the so-called “war on drugs,” and state and federal laws such as the Controlled Substances Act. For simplicity, however, when we refer to “drugs” in this report, we mean illicit drugs.
Many people who use drugs told us the language of addiction was stigmatizing to them, whether or not they were dependent on drugs. Because drug dependence is a less stigmatizing term, we used it where appropriate in our interviews and in this report to discuss the right to health implications of governments’ response to drug use and interviewees’ self-identified conditions. In so doing, we relied upon the definition of substance dependence as laid out in the American Psychiatric Association’s Diagnostic and Statistical Manual of Mental Disorders (Fourth Edition) (also known as DSM-IV). Factors for a diagnosis of dependence under DSM-IV focus on individuals’ loss of ability to control their drug use.
I. The Human Rights Case for Decriminalization

Human Rights Watch and the American Civil Liberties Union oppose the criminalization of personal use of drugs and possession of drugs for personal use. We recognize that governments have a legitimate interest in preventing societal harms caused by drugs and in criminalizing harmful or dangerous behavior, including where that behavior is linked to drug use. However, governments have other means beyond the criminal law to achieve those ends and need not pursue a criminalization approach, which violates basic human rights and, as this report documents, causes enormous harm to individuals, families, and communities.

On their face, laws criminalizing the simple possession or use of drugs constitute an unjustifiable infringement of individuals’ autonomy and right to privacy. The right to privacy is broadly recognized under international law, including in the International Covenant on Civil and Political Rights and the American Convention on Human Rights. Limitations on the right to privacy, and more broadly on an individual’s autonomy, are only justifiable if they serve to advance a legitimate purpose; if they are both proportional to and necessary to achieve that purpose; and if they are non-discriminatory. Criminalizing drug use fails this test.

Governments and policymakers have long argued that laws criminalizing drug use are necessary to protect public morals; to deter problematic drug use and its sometimes corrosive effects on families, friends, and communities; to reduce criminal behavior associated with drugs; and to protect drug users from harmful health consequences. While these are legitimate government concerns, criminalization of drug possession does not meet the other criteria. It is not proportional or necessary to achieve those government goals and is often implemented in discriminatory ways. Indeed, it has not even proven effective: more than four decades of criminalization have apparently had little impact on the demand for drugs or on rates of use in the United States. Criminalization can also undermine the right to health of those who use drugs. Instead, governments have many non-penal options to reduce harm to people who use drugs, including voluntary drug treatment, social support, and other harm reduction measures.

Criminalization of drug use is also not necessary to protect third parties from harmful actions performed under the influence of drugs, and the notion that harmful or criminal conduct is an inevitable result of drug use is a fallacy. Governments can and do criminalize negligent or dangerous behavior (such as driving under the influence or endangering a child through neglect) linked to drug use, without criminalizing drug use itself. This is precisely the approach US laws take with regard to alcohol consumption.

Those who favor criminalization of drug use often emphasize harms to children. While governments have important obligations to take appropriate measures—legislative, administrative, social, and educational—to protect children from the harmful effects of drug use, imposing criminal penalties on children for using or possessing drugs is not the answer. States should not criminalize adult drug use on the grounds that it protects children from drugs.

Worldwide, the practical realities of governments’ efforts to enforce criminal prohibitions on drug use have greatly compounded the urgent need to end those prohibitions.
Criminalization has often gone hand-in-hand with widespread human rights violations and adverse human rights impacts—while largely failing to prevent the possession or use of drugs. And rather than protecting health, criminalization of drug use has in fact undermined it. These grim realities are on stark display in the United States.

This report describes the staggering human rights toll of drug criminalization and enforcement in the US. Not only has the government’s “war on drugs” failed on its own terms, but it has needlessly ruined countless lives through the crushing direct and collateral impacts of criminal convictions, while also erecting barriers that stand between people struggling with drug dependence and the treatment they may want and need.

In the United States the inherent disproportionality of criminalizing drug use has been greatly amplified by abusive laws. Sentences imposed across the US for drug possession are often so excessive that they would amount to disproportionate punishment in violation of human rights law even if criminalization were not per se a human rights problem. In many US states these excessive sentences take the form of lengthy periods of incarceration (especially when someone is sentenced as a “habitual offender” for habitual drug use), onerous probation conditions that many interviewees called a set-up for failure, and sometimes crippling fines and fees.

This report also describes a range of other human rights violations and harms experienced by people who use drugs and by entire families and communities as a result of criminalization, in addition to the punishments imposed by law. For instance, enforcement of drug possession laws has a discriminatory racial impact at multiple stages of the criminal justice process, beginning with selective policing and arrests. In addition, enforcement of drug possession laws unfairly burdens the poor at almost every step of the process, from police encounters, to pretrial detention, to criminal justice debt and collateral consequences including exclusion from public benefits, again raising questions about equal protection rights.

Many of the problems described in this report are not unique to drug cases; rather, they reflect the broader human rights failings of the US criminal justice system. That fact serves only to underscore the practical impossibility of addressing these problems through incremental changes to the current criminalization paradigm. It also speaks to the urgency of removing drug users—people who have engaged in no behavior worthy of criminalization—from a system that is plagued with broader and deeply entrenched patterns of human rights abuse and discrimination.

Rather than criminalizing drug use, governments should invest in harm reduction services and public education programs that accurately convey the risks and potential harms of drug use, including the potential to cause drug dependence. Harm reduction is a way of preventing disease and promoting health that “meets people where they are” rather than making judgments about where they should be in terms of their personal health and lifestyle. Harm reduction programs focus on limiting the risks and potential harms associated with drug use and on providing a gateway to drug treatment for those who seek it. Implementing harm reduction practices widely is not just sound public health policy; it is a human rights imperative that requires strong federal and state leadership.
The federal government has taken some important steps to promote harm reduction, as have some state and local entities. However, the continued focus on criminalization of drug use—and the aggressiveness with which that is pursued by many public officials—runs counter to harm reduction. This report calls for a radical shift away from criminalization, towards health and social support services. Human rights principles require it.
II. Background

The “War on Drugs”

For four decades, federal and state measures to battle the use and sale of drugs in the US have emphasized arrest and incarceration rather than prevention and treatment. Between 1980 and 2015, arrests for drug offenses rose more than two-and-a-half-fold, from 580,900 arrests in 1980 to 1,488,707 in 2015. Of those total arrests, the vast majority (78 percent in 1980 and 84 percent in 2015) have been for possession.

Yet drug possession was not always criminalized. For much of the 19th century, opiates and cocaine were largely unregulated in the US. Regulations began to be passed towards the end of the 19th and at the start of the 20th century—a time when the US also banned alcohol. Early advocates for prohibitionist regimes relied on moralistic arguments against drug and alcohol use, along with concerns over health and crime. But many experts also point to the racist roots of early prohibitionist efforts, as certain drugs were associated in public discourse with particular marginalized races (for example, opium with Chinese immigrants). The US has also been a major proponent of international prohibition, and helped to push for the passage of the three major international drug control conventions beginning in the 1960s. The purpose of the conventions was to combat drug abuse by limiting possession, use, and distribution of drugs exclusively to medical and scientific purposes and by implementing measures against drug trafficking through international cooperation.

In 1971, President Richard Nixon announced that he was launching a “war on drugs” in the US, dramatically increasing resources devoted to enforcing drug prohibitions through the criminal law. He proclaimed, “America’s public enemy number one in the United States is drug abuse. In order to fight and defeat this enemy, it is necessary to wage a new, all-out offensive.” There are reasons to believe that the declaration of the “war on drugs” was more political in nature than a genuine response to a public health problem.

Over the next decade, the “war on drugs” combined with a larger “tough on crime” policy approach, whose advocates believed harsh mandatory punishments were needed to restore law and order to the US. New laws increased the likelihood of a prison sentence even for low-level offenses, increased the length of prison sentences, and required prisoners to serve a greater proportion of their sentences before any possibility of review. These trends affected drug offenses as well as other crimes.

The new drug laws contributed to a dramatic rise in the prison population. Between 1980 and 2003 the number of drug offenders in state prisons grew twelvefold. By 2014, an estimated 208,000 men and women were serving time in state prisons for drug offenses, constituting almost 16 percent of all state prisoners. Few of those entering prison because of drug offenses were kingpins or major traffickers. A substantial number were convicted of no greater offense than personal drug use or possession. In 2014, nearly 23 percent of those in state prisons for drug offenses were incarcerated simply for drug possession. Because prison sentences for drug possession are shorter than for sales, rates of admission are even more telling: in 2009 (the most recent year for which such data is available), about one third of those entering state prisons for drug offenses (for whom the offense was known) were convicted of simple drug possession.
Drug Use in the United States

More than half of the US adult population report having used illicit drugs at some point in their lifetime, and one in three adults reports having used a drug other than marijuana.[44] The US Department of Health and Human Services’ Substance Abuse and Mental Health Services Administration (SAMHSA) conducts an annual survey of nearly 70,000 Americans over the age of 12 to produce the standard data used to research drug use. According to SAMHSA, 51.8 percent of adults reported lifetime use in 2014.[45] Moreover, 16.6 percent of the adult population had used illicit drugs in the past year, while one in ten had used such drugs in the past month.[46]

Lifetime rates of drug use are highest among white adults for all drugs in total, and for specific drugs such as marijuana, cocaine (including crack), methamphetamine, and non-medical use of prescription drugs. Latino and Asian adults use most drugs at substantially lower rates.[47] For more recent drug use, for example use in the past year, Black, white, and Latino adults use drugs other than marijuana at very similar rates. For marijuana, 16 percent of Black adults reported using in the past year, compared to 14 percent of white adults and about 11 percent of Latino adults.

State Drug Possession Laws

All US states currently criminalize the possession of illicit drugs. Different states have different statutory schemes, choosing between misdemeanors and felonies, making distinctions based on type and/or quantities of drugs, and sometimes treating second or subsequent offenses more harshly. State sentences range from a fine, probation, or under one year in jail for misdemeanors, up to a lengthy term in prison—for example, 10 or 20 years—for some felony possession offenses or, when someone is sentenced under some states’ habitual offender laws, potentially up to life in prison.

While some states have decriminalized possession of small amounts of marijuana, other states still make marijuana possession a misdemeanor or even a felony. No state has decriminalized possession of drugs other than marijuana. As to “schedule I and II” drugs (which include heroin, cocaine, methamphetamines, and most commonly known illicit drugs), eight states and the District of Columbia treat possession of small amounts as a misdemeanor, including New York. In the remaining 42 states, possession of small amounts of most illicit drugs other than marijuana is either always or sometimes a felony offense. In addition to the “convicted felon” label, many felony possession laws provide for lengthy sentences.

Of the states we visited, Florida, Louisiana, and Texas classify possession of most drugs other than marijuana as a felony, no matter the quantity, and provide for the following sentencing ranges. In Florida, simple possession of most drugs carries up to five years in prison. Florida drug trafficking offenses are based simply on quantity triggers: simple possession can be enough, without any evidence of trafficking other than the quantity. In Louisiana, possession of most drugs other than heroin carries up to five years in prison, and heroin carries a statutory minimum of four years in prison, up to a possible ten years. In Texas, possession of under a gram of substances containing commonly known drugs including cocaine, heroin, methamphetamine, PCP, oxycodone, MDMA, mescaline, and mushrooms (or between four ounces and five pounds of marijuana) carries six months to two years. One to four grams carries two to ten years.
Texas judges may order the sentence to be served in prison or may suspend the sentence and require a term of probation instead. On top of these baseline ranges, some states allow prosecutors to enhance the sentence range for drug possession by applying habitual offender laws that treat defendants as more culpable—and therefore deserving of greater punishment—because they have prior convictions. For example, in Louisiana a person charged with drug possession who has one or two prior felony convictions faces up to 10 years in prison. With three prior felony convictions, a person charged with drug possession faces a mandatory minimum of 20 years to life in prison. In Texas, a person with two prior felony convictions who is charged with possession of one to four grams of drugs faces 25 years to life. In 2009 (the most recent year for which such data is available), 50 percent of people arrested for felony possession offenses in the 75 largest US counties had at least one prior felony conviction, mostly for non-violent offenses. Thus the scope of potential application of the habitual offender laws to drug possession cases is extensive for states that employ them.
III. The Size of the Problem: Arrests for Drug Use Nationwide

Possession Arrests by the Numbers

Across the United States, police make more arrests for drug possession than for any other crime. Drug possession accounts for more than one of every nine arrests by state law enforcement agencies around the country. Although all states arrest a significant number of people for drug possession each year, police focus on it more or less heavily in different states. For example, in California, one of every six arrests in 2014 was for drug possession, while in Alaska the rate was one of every 27.

In 2015, state law enforcement agencies made more than 1.25 million arrests for drug possession—and because not all agencies report data, the true number of arrests is higher. Even this estimate reveals a massive problem: 1.25 million arrests translates into an arrest for drug possession every 25 seconds of each day.

Some states arrest significantly more people for drug possession than others. While the bulk of drug possession arrests are in large states such as California, Texas, and New York, the list of hardest-hitting states looks different when mapped onto population size. Maryland, Nebraska, and Mississippi have the highest per capita drug possession arrest rates. For comparison, the rate of arrest for drug possession ranged from 700 per 100,000 people in Maryland to 77 per 100,000 in Vermont.

The differences in drug arrest rates at the state level are all the more striking because drug use rates are fairly consistent across the country. SAMHSA data shows that about 3 percent of US adults used an illicit drug other than marijuana in the past month.[64] There is little variation at the state level, where past-month use ranges from about 2 percent in Wyoming to a little over 4 percent in Colorado. For marijuana, there is slightly greater variation. About 8 percent of US adults used marijuana in the past month, but this ranged from about 5 percent in South Dakota to 15 percent in Colorado.[65]

While many public officials told us drug law enforcement is meant to get dealers off the streets, the vast majority of people arrested for drug offenses are charged with nothing more than possessing a drug for their personal use. For every person arrested for selling drugs in 2015, four were arrested for possessing or using drugs—and two of those four were for marijuana possession. Despite shifting public opinion on marijuana, about half of all drug possession arrests are for marijuana. In 2015, there were over 574,640 arrests just for marijuana possession. By comparison, there were 505,681 arrests for violent crimes (which the FBI defines as murder, non-negligent manslaughter, rape, robbery, and aggravated assault). This means that police made almost 14 percent more arrests for simple marijuana possession than for all violent crimes combined.

Some police told us that they have to make an arrest if they see unlawful conduct, but this glosses over the key question of where and upon whom the police are focusing their attention to begin with. Differences in arrest rates for drug possession within a state reveal that individual police departments have substantial discretion in how they enforce the law, resulting in stark contrasts. For example, data provided to us by Texas shows that 53 percent of drug possession arrests in Harris County (in and around Houston) were for marijuana, compared with 39 percent in nearby Dallas County.
Some police told us that they have to make an arrest if they see unlawful conduct, but this glosses over the key question of where and upon whom the police are focusing their attention to begin with. Differences in arrest rates for drug possession within a state reveal that individual police departments have substantial discretion in how they enforce the law, resulting in stark contrasts. For example, data provided to us by Texas shows that 53 percent of drug possession arrests in Harris County (in and around Houston) were for marijuana, compared with 39 percent in nearby Dallas County. Yet a nearly identical proportion of both counties’ populations used drugs in the past year.[72]

Additionally, certain jurisdictions within a state place a stronger focus on policing drug possession. In New York State, the counties with the highest drug possession arrest rates by a large margin were all in and around the urban areas of New York City and Buffalo. In Florida, the highest rates of arrest were spread around the state: rural Bradford County, urban Miami-Dade County, Monroe County (the Keys), rural Okeechobee County, and urban Pinellas County. Within both states, drug use rates vary little between regions.[75] In Texas, the counties with the highest drug possession arrest rates are all small rural counties. Kenedy County, for example, has an adult population of 407 people, yet police there made 329 arrests for drug possession between 2010 and 2015.

Racial Disparities

Rather than stumbling upon unlawful conduct, when it comes to drug use and possession, police often aggressively search it out—and they do so selectively, targeting low-income neighborhoods and communities of color. As criminal justice practitioners, social science experts, and the US public now recognize all too well, racially disparate policing has had devastating consequences.

Research has consistently shown that police target certain neighborhoods for drug law enforcement because drug use and drug sales there occur on streets and in public view. Making arrests in these neighborhoods is therefore easier and less resource-intensive. Comparatively few of the people arrested in these areas are white. Harrison Davis, a young Black man who was charged with possession of cocaine in Shreveport, Louisiana, recalled how, at a preliminary examination, the arresting officer defended what Harrison considered racial profiling: “‘I pulled him over because he was in a well-known drug area,’ the police officer says to the judge. But I’ve been living there for 27 years. It’s nothing but my family.”

Black adults are more than two-and-a-half times as likely as white adults to be arrested for drug possession in the US. In 2014, Black people accounted for just 14 percent of those who used drugs in the previous year, but close to a third of those arrested for drug possession. In the 39 states for which we have sufficient police data, Black adults were more than four times as likely to be arrested for marijuana possession as white adults.[83]

The disparities in absolute numbers or rates of arrests cannot be blamed on a few states or jurisdictions. While numerous studies have found racial disparities in marijuana arrests, analyses of state- and local-level data provided to Human Rights Watch show consistent disparities across the country for all drugs, not just marijuana. In every state for which we have sufficient police data, Black adults were arrested for drug possession at higher rates than white adults, and in many states the disparities were substantially higher than the national rate—over 6 to 1 in Montana, Iowa, and Vermont.[85] These figures likely underestimate the racial disparity nationally, because in three states with large Black populations—Mississippi, Louisiana, and Alabama—an insufficient proportion of law enforcement agencies reported data, so we could not include them in our analysis.

Our in-depth analysis of Florida and New York data shows that disparities are not isolated to a few municipalities or urban centers, though they are considerably starker in some localities than in others.
In Florida, 60 of 67 counties arrested Black people for drug possession at higher rates than white people.[86] In Sarasota County, the ratio of Black to white defendants facing drug possession charges was nearly 8 to 1 when controlling for population size. Down the coast in comparably sized Collier County, the ratio, while still showing a disparity, was less than 3 to 1. In New York, 60 of 62 counties arrested Black people for drug possession at higher rates than white people. In Manhattan (New York County), there were 3,309 arrests per 100,000 Black people, compared to 306 per 100,000 white people, between 2010 and 2015. In other words, Black people in Manhattan were nearly 11 times more likely than white people to be arrested for drug possession.

Under international human rights law, prohibited racial discrimination occurs where there is an unjustifiable disparate impact on a racial or ethnic group, regardless of whether there is any intent to discriminate against that group. Enforcement of drug possession laws in the US reveals stark racial disparities that cannot be justified by disparities in rates of use.

Incentives for Drug Arrests

Department cultures and performance metrics that incentivize high numbers of arrests may drive up the numbers of unnecessary drug arrests and unjustifiable searches in some jurisdictions. In some cases, department culture may suggest to individual officers that the way to be successful and productive, and to earn promotions, is to post high arrest numbers. In turn, a focus on arrest numbers may translate into an emphasis on drug arrests, because drug arrests are often easier to obtain than arrests for any other type of offense, especially if certain neighborhoods are targeted. As Randy Smith, former Slidell Chief of Police and current Sheriff for St. Tammany Parish, Louisiana, told us:

[Suppose I say,] “I want you to go out there and bring me in [more arrests]. Your numbers are down, last month you only had 10 arrests, you better pick that up or else I’m going put you in another unit.” You’re going to go out there and do what? You’re going [to go] out there to make drug arrests.

Although the practice is outlawed in several states, some police departments operate a system of explicit or implicit arrest quotas. Whether arrest numbers are formalized into quotas or understood as cultural expectations of a department, they may put some officers under immense pressure not only to make regular stops and arrests but to match or increase their previous numbers, in order to be seen as adequately “productive.” In August 2015, twelve New York Police Department officers filed a class-action lawsuit against the department for requiring officers to meet monthly arrest and summons quotas; one plaintiff said that after being told he was “dragging down the district’s overall arrest rate,” he was given undesirable job assignments. In an Alabama town, an officer said in 2013 that he had been fired after publicly criticizing the police department’s new ticket quota directives, which included making roughly 72,000 contacts (including arrests, tickets, warnings, and field interviews) per year in a town of 50,000 people.

Such departmental pressure to meet arrest quotas can easily lead to more arbitrary stops and searches.
In the aftermath of Michael Brown’s death in Ferguson, Missouri, the US Department of Justice’s Civil Rights Division recommended that the Ferguson Police Department change its stop and search policies, in part by prohibiting “the use of ticketing and arrest quotas, whether formal or informal,” and focus instead on community protection.

Randy Smith told us he opposes putting “expectations” or quotas on officers. “It kills you,” he explained:

I think sometimes you start building numbers and stats, and you kind of lose the ability to make better decisions on getting someone help, [when] getting a stat is putting them in jail. I’ve been there, and I’ve seen. You start having some serious problems.

He said when officers understand that they are expected to produce high arrest numbers, they often focus on drug possession:

So you’re going to stop 10 cars in maybe a not so good neighborhood. Out of 10 cars, you might get one out of those 10 that you get some dope or marijuana or a joint in the ash tray, or a Xanax in your purse…. If you dump [any] purse out, there’s probably some kind of anti-depression medicine in there, which is a felony [potentially]. And we know it, we’ve seen it, where those street crime guys will get out there and bring you to jail on a felony for a schedule four without a prescription, just because they got a stat. They’re tying up the jail. It’s ridiculous. That shit has got to stop…. You’ve got to look at the big picture. If you put quotas—which is a bad word—[officers] are going to start bum rapping people. The guy we got with the one pill, the one stop out of 10, what did I do with those other nine people that weren’t doing nothing? I stopped them. I harassed them. I asked them if they had guns in their car. I asked them if they had any illegal contraband. I’m asking them to search their car. What am I doing to the general citizen?

Federal Funding, an Opportunity for Leadership

In recent years, many advocates have expressed concern that high arrest numbers were incentivized by federal grant monies to state and local law enforcement through the Edward Byrne Memorial Justice Assistance Grant (JAG) program, administered by the Department of Justice’s Bureau of Justice Assistance (BJA). Although funding is allocated based on a non-discretionary formula, grant recipients must report back to BJA on how they use the funds, including—historically—reporting the number of individuals arrested as a “performance measure.” Many groups were concerned that this sent a message to state and local law enforcement agencies that high arrest numbers meant more federal funds, and that it in turn incentivized drug arrests.

Recognizing that arrest numbers are not meaningful measures of law enforcement performance, BJA undertook a thorough revision of JAG performance measures, now called “accountability measures.” As of fiscal year 2015, law enforcement agencies receiving JAG funds no longer must report arrest numbers as a measure of performance or as accountability for funds received. BJA Director Denise O’Donnell told us, “Arrests can easily misrepresent what is really going on in criminal justice practice, and be misleading as to what we are really interested in seeing supported with JAG funds, namely evidence-based practices. So BJA has moved away from arrests as a metric, instead focusing on evidence-based practices, such as community collaboration, prevention, and problem-solving activities.” This move is commendable.
In the extensive training and technical assistance it provides to state law enforcement agencies through JAG and other funding streams, BJA should reiterate that arrest numbers are not a sound measure of police performance. BJA should also encourage state agencies to pass the message along to local law enforcement agencies, which must still apply to the state agency for their share of the federal fund allocations. In many cases, that process remains discretionary and application-based and, in at least one recent call for applications, may still improperly emphasize drug arrests.
IV. The Experience of Being Policed

They disrupt, disrupt, disrupt our lives…. From the time the cuffs are put on you, from the time you’re confronted, you feel subhuman. You’re treated like garbage, talked to unprofessionally. Just the arrest is aggressive to subdue you as a person, to break you as a man.

—Cameron Barnes, arrested repeatedly for drug possession by New York City police from the 1980s until 2012

The sheer magnitude of drug possession arrests means that they are a defining feature of the way people experience police in the United States. For people we interviewed, drug laws shaped their interactions with and views of the police and contributed to a breakdown of trust. Instead of experiencing police as protectors, arrestees in all four states we visited described encounters in which police officers intimidated and humiliated them. They described having their pockets searched, their cars ransacked, being subjected to drug-sniffing dogs, and being overwhelmed by several officers at once. This left some people feeling under attack, in scenes that seemed “out of a movie.” Prosecutor Melba Pearson in the Miami-Dade State Attorney’s office said, “The way we treat citizens when we encounter them is wrong. If they expect to have their rights violated, of course there’s going to be hatred of the police…. You can’t take an invading-a-foreign-country mentality into the neighborhood.”

Pretextual Stops and Searches without Consent

Many people we interviewed said police used pretextual reasons to stop and search them, told them to take things out of their pockets, otherwise threatened or intimidated them to obtain “consent” to search, and sometimes physically manhandled them. These accounts are consistent with analyses by the American Civil Liberties Union and other groups that have extensively documented the failures of police in many jurisdictions to follow legal requirements for stops and searches. US Supreme Court Justice Sonia Sotomayor has argued in dissent that the Court’s interpretation of US law “allow[s] an officer to stop you for whatever reason he wants—so long as he can point to a pretextual justification after the fact…. When we condone officers’ use of these devices without adequate cause, we give them reason to target pedestrians in an arbitrary manner.” These fears certainly accord with the realities facing many heavily policed communities.

Defendants and attorneys we interviewed described a litany of explanations offered by the police to justify stopping a person on the street or in a car, many of which appeared to them to be pretextual: failure to signal, driving in the left lane on an interstate, driving with a license plate improperly illuminated or with a window tint that is too dark, walking against the direction of traffic, failing to cross the street at a crosswalk or at a right angle, or walking in the street when a sidewalk is provided. In many jurisdictions, these reasons are not sufficient in themselves to allow the officer to search the person or vehicle. Yet in police reports we reviewed for several cases in Texas and Florida, officers stopped the defendant for a traffic violation, did not arrest for that violation, but conducted a search anyway. They cited as justification that they smelled marijuana, that consent to search was provided, or that the person voluntarily produced drugs from their pockets for the officer to seize. These justifications often stand in stark contradiction to the accounts of the people who were searched.
The Smell of Marijuana

The criminalization of marijuana in many states has given officers a powerful and widely used pretext for searching people’s cars. Will Pryor, the prosecutor responsible for screening cases in Caddo Parish, Louisiana, told us that most drug possession cases he sees result from traffic stops in which the officer allegedly smells marijuana. Where the possession of marijuana is criminal—as it remains in most states—the odor of marijuana often gives law enforcement probable cause to search a car, typically anywhere that marijuana could be found (including car doors, consoles, glove compartments, trunks, and containers and bags inside the car). People in the car can then be charged with possessing anything illegal found as a result of the search, even when no marijuana is discovered. A number of interviewees in Florida, Louisiana, and Texas described arrests that followed this pattern, and we reviewed other police reports that cited the odor of marijuana. Miami prosecutor Melba Pearson told us:

If I hear one more time, “I smelled marijuana,” and the subsequent search revealed no marijuana! … I work with police officers every day. A large majority are wonderful, fair people. However, there is a mentality in certain departments that tends to draw individuals who are action junkies, the “jump out boys.”… Some officers believe the ends justify the means [and don’t] consider it a problem because their job is to get drugs off the street without worrying about whether or not the case is prosecutable, or if there is a long term positive effect on the community.

Miami Judge Dennis Murphy told us, “Easily one out of four [police] stops, [I see] ‘defendant ran a stop sign, [officer] approached, there was a distinct odor of marijuana, so I searched and arrested for [other] drugs.’”

Once stopped, some of the people we interviewed did not realize they had a right not to acquiesce to police searches, or felt they could not exercise it in the face of the officer’s authority. A few allowed officers to search a vehicle because they did not think there were any drugs inside. In many other cases, interviewees told us, they never consented at all, and police simply did what they pleased. We reviewed arrest reports in Texas and Florida where police accounts of how they obtained consent for a search were highly implausible. Police described defendants voluntarily emptying their pockets and revealing drugs, sometimes without being asked to do so; freely consenting to a search of their person when they had drugs on them; and admitting that they were about to use drugs before the officer found drugs on them. Prosecutor Melba Pearson told us, “I have had [defendants] who sometimes do give up the drugs…. However, many times where we get a story [from police] about how consent was obtained or drugs were located pursuant to a search [it is problematic].”

Other interviewees described police tactics that they said allowed officers to manipulate their way around the requirements of the law. In Brevard County, Florida, Isabel Evans told us that she was arrested in 2015, for the first time, for hydromorphone possession, and that she felt unable to disobey the officer:

He said my pocket was bulged. He said, “Reach in there and take it out.” I pulled it out, and he handcuffed me. The cop knew what he was doing. He couldn’t pull it out himself, so he took advantage of my ignorance of the law, of a first-timer like me. I’m not going to say you can’t do that. I’m scared.
In Shreveport, Louisiana, Glenda Hughes was charged with felony possession of Klonopin in 2015. She told us, “If you say no to the search, that gives them suspicion…. If I had known more, maybe it would have come out differently. It’s not my fault though that I don’t know the legal system and the laws.”

Feeling Targeted

All of the people arrested for drug possession we interviewed said they experienced fear, anger, or deep feelings of being unfairly targeted when police confronted, searched, and arrested them. Many interviewees said that because they had been targeted or profiled in the past, they experienced a heightened sense of vulnerability to police intervention and insecurity in their person whenever they were in public. They described constantly feeling the need to look over their shoulder and to exercise hyper-vigilance in all their actions, regardless of whether they had drugs on them.

Leonard Lewis, a 28-year-old Black man in Houston, had been arrested for and convicted of drug possession in the past. He said he felt that made him more likely to be stopped again, and more likely to be arrested once police ran his name. He said the fact that he is big and Black makes him more vulnerable. His mother told us, “[It] mess[es] with his mind. [Leonard] drives like a grandpa, like how an old man drives. He turns on his signals, he stops [before stop signs]. Even when he is pulling into the house, the boy turns on his signal. He says, ‘Mama, the police are never gonna have a reason to stop me.’”

In the Bronx, Angel Suarez explained to us:

I consider myself an addict and sometimes I worry when I’m using, because they search you for no reason. The cops know me; most of the time they see me they stop and search me. It makes it harder to live life when you’re walking down the street watching your back, but at the same time when you don’t have your drug it makes you sick.

Drug enforcement practices do not only affect people who use or have used drugs. They broadly impact people who live in heavily policed neighborhoods, people who are homeless, and people police claim to regard as “suspicious” for whatever reason—sometimes solely because of their race.

Damian’s Story

Damian Williams related his story to us as follows:

In 2016, Damian and his girlfriend were living out of his car in Houston and trying to make ends meet. He said, “We were just working [all the time]. I was going to work during the day; she goes to work at nighttime. It was hectic, it was hard, but it was life.” They had just been approved to rent an apartment when Damian was pulled over for failure to signal. The officer said he smelled marijuana, and while Damian waited in handcuffs, the officer ransacked the car for 45 minutes, tearing through their bags and throwing their belongings on the ground. The officer finally emerged with half of a pill, and no marijuana. Damian said he did not know where the half-pill came from and thought it was a joke at first, but the officer told him it was a felony. Damian was taken to booking and charged with felony possession of Ecstasy.

Damian appeared before a judge at 3 or 4 a.m., and his girlfriend bonded him out the next morning. She had rented a hotel room, because the car they used to sleep in had been impounded. He got out of jail before the buses ran, so he took a taxi straight to Walmart to buy clothes and soap, because everything he owned was in his impounded car. Then he went to the hotel room to lie down for half an hour, before he had to catch the bus back to court.
He said that all the money they lost on the impoundment, bond, and hotel meant they were no longer able to rent the apartment for which they had been approved. “It’s making me feel a little paranoid every time I see a police officer…. I didn’t think I was doing nothing then, and then I was put in jail and am paying all this money,” he said. On his girlfriend’s urging, Damian cut off his dreadlocks while out on bond and started dressing differently. He said appearance matters to the police.

***

Darius’ Story

Darius Mitchell, a Black man in his 30s, said he does not use drugs. From Darius and many other interviewees, we heard a similar story: Police stop a Black man walking or driving in a “bad” neighborhood, citing a minor and sometimes pretextual reason; they treat him as if he is a suspected drug dealer and insist on searching his person or his car without first obtaining consent; they find a small amount of drugs and arrest him for possession; and his life is put into upheaval by a prosecution. Darius recounted his arrest to us as follows:

Late one night in Jefferson Parish, Louisiana, an officer pulled Darius over as he was leaving his child’s mother’s house. The officer said he had been speeding. When Darius replied that he certainly had not, the officer said he smelled marijuana. He asked whether he could search, and Darius said no. Another officer and a canine came and searched his car anyway. They yelled, “Where are the pounds?” suggesting he was a marijuana dealer. The officers eventually found a pill bottle in the glove compartment of Darius’ car, with his child’s mother’s name on it. Darius said that he had driven her to the emergency room after an accident, and she had been prescribed hydrocodone, which she forgot in the car. The police kept him in their vehicle for an hour as they discussed what to do. When they eventually took him in, he was prosecuted for possession of hydrocodone, his first felony charge.

The prosecutor filed charges and took the case all the way to a verdict, despite Darius’ explanation of why he had the pill bottle. Bail was set at $1,000, and Darius was able to bond out. He paid another $2,000 to hire a lawyer. Darius was ultimately acquitted at trial, but even months later he remained in debt from his legal fees, was behind on rent and utility bills, and had lost his cable service, television, furniture, and other comforts. He told us:

I was pulling money [from wherever I could]. I had three jobs at the time because I had to pay all these fees, because I still had my own apartment [to pay for and] had to take care of my kids. I was already living paycheck to paycheck. I was making it, but with fines and fees I was really pinching then. I was not paying this to pay that…. It was embarrassing for myself like that. [I had] court fees, lawyer fees, the light bill, rent.… They took the TV, the sofa set. I couldn’t pay it. [I was acquitted] but I still lost a lot. I still had to go through a lot of misery.

Darius added, “On my record, they show that I didn’t get convicted, but it still shows that I got arrested.” Although Darius “walked free,” he still feels bound by his criminal justice debt and his arrest record.
V. Aggressive Prosecutions

I loved to prosecute. I was the avenging angel. I was doing God’s work. I was getting the riff-raff off the street. Time proved me to be wrong. So I don’t consider those years to be a badge of honor. Guilt. A feeling that I did some things I shouldn’t have done…. I have been on both sides of the fence: the War on Drugs is lost. I’m really disgusted with the continuation of the prosecutions. I’m really disappointed.

—Marty Stroud, former assistant district attorney and current defense attorney, Shreveport, Louisiana, February 2016

After police arrest a person, prosecutors have enormous discretion in deciding whether to prosecute, what charges to bring, and how the person will experience the criminal justice system. Because any given set of facts can often support different kinds of charges, prosecutors who decide to pursue a drug use case typically have a range of charges to choose from, running from misdemeanor drug paraphernalia to (in most states) felony possession to possession with intent to distribute. The National District Attorneys Association advises, “In making a charging decision, the prosecutor should keep in mind the power he or she is exercising at that point in time. The prosecutor is making a decision that will have a profound effect on the lives of the person being charged, the person’s family … and the community as a whole.”

Despite these opportunities for discretion, many prosecutors are far too willing to throw the book at people who use drugs, charging them high and seeking the harshest available sentences. While each prosecutor exercises discretion in his or her own cases, office culture often encourages prosecutors to adopt a default position of charging unreasonably high, instead of applying charges that speak appropriately to the facts of the case or declining to charge at all. As discussed later in this report, in many cases this appears to be a deliberate tactic aimed at coercing defendants into pleading guilty to a lesser offense—an inherently abusive application of prosecutorial discretion.

Prosecutor Melba Pearson said she believed prosecutors have an obligation to use their discretion to address racial disparities in the cases police bring them:

It’s incumbent upon the state to report to the police that we’re having this disparity. To say, what can we do about this? ... It is a policing issue if you’re stopping a kid 15 times a month for a [car window] tint. [I can say,] “Don’t bring me that case. You’re clearly racially profiling.” Where the circumstances of a stop are such that there are issues of constitutionality, when you don’t prosecute, police will notice. When you tacitly approve it, police will continue.

In some cases, prosecutors not only fail to confront this problem but compound it by exercising their own discretion in racially biased or at least racially disparate ways, for instance by charging Black defendants with more serious crimes or by seeking sentencing enhancements more often when the defendant is Black.

Different prosecutors’ offices applying the same state laws may prosecute drug possession differently, revealing another layer of potential arbitrariness in who is prosecuted for drug use. In Florida, among the counties with at least 5,000 possession cases, there were striking disparities in the rate at which prosecutors declined to prosecute drug cases.
For example, Polk County prosecutors declined to prosecute 57 percent of the drug possession cases brought to them, while Broward County prosecutors declined only 13 percent.

Going after the Small Stuff

We interviewed over 100 people in Texas, Louisiana, Florida, and New York who were prosecuted for small quantities of drugs—in some cases, fractions of a gram—that were clearly for personal use. Particularly in Texas and Louisiana, prosecutors did more than simply pursue these cases—our interviewees reported that prosecutors often selected the highest charges available and went after people as hard as they could.

Possession Charges in Texas for Fractions of a Gram

Perhaps nothing better illustrates the harmful realities of aggressive prosecution and a charge-them-high philosophy than state jail felony cases in Texas. State jail felony drug possession is possession of less than one gram of substances containing common drugs such as cocaine, heroin, methamphetamine, PCP, oxycodone, MDMA, mescaline, and mushrooms. Our data analysis suggests that in 2015, nearly 16,000 people were convicted of and sentenced to incarceration for state jail drug possession offenses. This means they received a felony conviction and time behind bars for possessing less than a gram of drugs—the weight of less than one-fourth of a sugar packet. Depending on the type of drug, its strength and purity, and the tolerance of the user, one gram may be a handful of doses—or, for many drugs, a single dose or less.

Data provided to Human Rights Watch by the Texas Office of Court Administration, and presented here for the first time, shows case outcomes for all felony drug possession cases in Texas courts. Although the data does not differentiate between felony degrees, we can extrapolate based on state law and sentencing options. Based on these extrapolations, the data suggests that in 2015, over 78 percent of people sentenced to incarceration for felony drug possession in Texas were convicted of a state jail felony—some 16,000 people sentenced to time behind bars for possessing less than one gram of commonly used drugs. Because this figure represents only those sentenced to incarceration, the number of people prosecuted and potentially convicted of state jail felony drug possession is likely thousands more: Texas law requires that all persons convicted of first-time state jail felony drug possession receive probation, and judges may impose probation in other cases as well.

The majority of the 30 defendants we interviewed in Texas had substantially less than a gram in their possession when they were arrested: not 0.9 or 0.8 grams, but sometimes 0.2 or 0.02 grams, or even a result from the lab reading “trace,” meaning that the amount was too small even to be measured. One defense attorney in Dallas told us a client was charged with drug possession in December 2015 for 0.0052 grams of cocaine. To put that into perspective, it is equivalent to the weight of 13 ten-thousandths (0.0013) of a sugar packet. The margin of error for the lab that tested it is 0.0038 grams, meaning the sample could have weighed as little as 0.0014 grams, or 35 hundred-thousandths (0.00035) of a sugar packet. These numbers are almost incomprehensibly small.
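To make the sugar-packet comparisons concrete (assuming, as these comparisons do, a standard packet of about 4 grams of sugar):

0.0052 g ÷ 4 g ≈ 0.0013, or 13 ten-thousandths of a packet
0.0052 g − 0.0038 g (the lab’s margin of error) = 0.0014 g; 0.0014 g ÷ 4 g = 0.00035, or 35 hundred-thousandths of a packet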
In Dallas County, the data suggests that nearly 90 percent of people sentenced to jail or prison for possession in 2015 were convicted of possessing less than a gram. In fact, throughout the state, the overwhelming proportion of drug possession defendants sentenced to incarceration were convicted for fractions of a gram.

Bill Moore, a 66-year-old man in Dallas, was prosecuted for third degree felony possession (normally one to four grams) for what the laboratory tested as 0.0202 grams of methamphetamines. The charge was enhanced to a third degree offense under the habitual offender law because of his prior possession charges, which he said were all for under a gram as well. He spoke to us after he had pled to three years in prison for that 0.0202 grams: “It was really small; you wouldn’t even believe what I’m talking about. It’s unbelievable that they would even charge me with it.” He added, “It’s about five dollars’ worth of drugs…. Now think about how many thousands of dollars are wasted over five dollars of that stuff.”

In Fort Worth, Hector Ruiz was prosecuted for an empty bag that had heroin residue weighing 0.007 grams. Apparently believing that he deserved aggressive charges, the prosecutor sought enhancements based on Hector’s prior state jail convictions, raising the top of his sentencing range from two years to ten. The prosecutor offered him six years in prison in exchange for a guilty plea.

Leonard Lewis was charged with third degree felony possession (one to four grams) in Houston for two tobacco cigarettes dipped in PCP. Because he had two prior felonies, he faced 25 years to life in prison. He told us the actual weight of the liquid PCP on the cigarettes was microscopic. Although his attorney convinced the prosecutors to discount the filter, she said they still counted the weight of the rest of both cigarettes (tobacco and paper), resulting in a final weight of 1.4 grams combined. The tobacco in an average cigarette weighs around 0.65 to 1 gram on its own, meaning the trace amount of PCP on Leonard’s two cigarettes must have been nearly weightless. Nevertheless, Leonard ended up receiving four years in prison for it.

In Dallas, Gary Baker was charged with possessing 0.1076 grams of cocaine. Although he was arrested for outstanding traffic tickets, he and his attorney said the police searched his car for 45 minutes without finding anything. At his arraignment, the judge informed him he was also charged with possession of a controlled substance. Apparently, after Gary had been taken to booking, an officer reported finding what Gary remembered being described as “crumbs of crack cocaine” on the car console. Gary told us he did not know where the crumbs came from: “For the little amount of cocaine they found in my car, if I put it in your car, you wouldn’t even notice it. Some ‘crumbs’?” The 0.1 grams allegedly discovered by the police is equivalent in weight to 28 thousandths of a sugar packet.

In Granbury, Texas, Matthew Russell was charged with possession of methamphetamines for an amount so small that the laboratory result read only “trace.” The lab technician did not even assign a fraction of a gram to it. Matthew said the trace amount was recovered from inside his girlfriend’s house, while he was outside. Under the circumstances, he speculated—quite reasonably—that he was charged because of his history of drug use: “I’m not guilty of what they charged me with. I didn’t have any drugs in my possession. Am I guilty of being a drug user? Yes, I am. Did I use drugs the day before? Yes, I did. I admitted that. But I didn’t have any drugs on me. I shouldn’t be here.”
The prosecutor sought enhancements because Matthew had prior felony convictions, mostly from out of state and related to his drug dependence, Matthew told us. Because of his priors, Matthew faced 2 to 20 years for this trace amount. The prosecutor did not have to seek these enhancements. He also could have offered Matthew a gentler plea deal. Instead, he offered a three-year discount off the statutory maximum in exchange for a guilty plea: 17 years for a trace case. Matthew refused and insisted on his right to trial. After 21 months of pretrial detention, Matthew finally went to trial in August 2016. A jury convicted him of possessing a trace amount of methamphetamines and sentenced him to 15 years in prison.

Explaining why prosecutors pursue so many state jail possession cases, Galveston prosecutor Chris Henderson told us, “The idea behind it is that we want to prevent the bigger cases that may come down the line…. So we want to try to get to those people early. We want to prevent the murder in a drug deal gone wrong, theft, child endangerment, the larger cases…. If we decided not to prosecute small drug cases, we’d see situations like that more often.” However, we are aware of no empirical evidence that low-level drug possession defendants would otherwise go on to commit violent crimes such as murder, and theft and child endangerment can be addressed through the laws that criminalize them. When we asked him whether he thought state jail prosecutions were working to stop crime, he added, “No, I don’t think so.”

Paraphernalia Charged as Possession

In a handful of cases we investigated in Texas and Louisiana, defendants had drug paraphernalia, such as pipes, straws, syringes, or even empty baggies, in their possession when they were confronted by the police. But instead of simply charging them with misdemeanor drug paraphernalia—or letting them go—the police arrested them for drug possession because of the residue or trace amount of drugs left in or on the paraphernalia. And rather than questioning the utility of those arrests, prosecutors formally charged and prosecuted the defendants for drug possession.

Former District Attorney Paul Carmouche explained that the police typically make the initial decision to charge paraphernalia as possession, but that they have discretion not to arrest in those cases at all: “If it were good cops … they would say, ‘This is BS, we’re not going to do that.’ So, residue … I don’t think it ought to be charged. The problem for the DA’s office is, it’s going to come in as a possession of cocaine [because] the police are always going to charge the most serious under the facts of the case.” In such scenarios, the prosecutor still has the authority to reduce the charges, or to dismiss the case altogether. Yet in practice prosecutors often do not deviate in their charges from what is listed on the police report. For example, our data shows that in 93 percent of all drug use/possession cases filed in Florida, prosecutors did not deviate from the police arrest charge.

In Miami, where drug possession carries up to five years in prison, Melba Pearson told us:

[As to] residue prosecutions, it’s ridiculous to potentially incarcerate for five years when you don’t even have the substance on you. The theory is you just smoked it, but we don’t know that’s necessarily true. When you don’t even have it in your possession, to charge it as a felony, the punishment doesn’t fit the crime. It’s a bad use of resources.
Prosecutors are overburdened and resources are better directed to more serious crime. Enforcing residue cases is a philosophy reflective of “lock everyone up.” The consequences of that philosophy play out in terms of human lives.

In St. Tammany Parish, Louisiana, District Defender John Lindner told us he was still seeing residue cases in which needles were charged as heroin possession, which in the state carries a minimum of four years and up to ten years in prison. For example, Amanda Price and her friend were arrested over a needle in St. Tammany Parish. After she had spent two months in pretrial detention, Amanda’s charge was reduced to a misdemeanor, but only after her friend (and co-defendant) said the needle was his and took the heroin possession conviction himself.

Prosecutors have even pursued felony indictments and accepted guilty pleas for drug possession in the absence of any evidence. Jason Gaines said he was arrested in Granbury, Texas, for having one syringe cap in his pocket and three unused needles near him, one of which was missing a cap. He said that after he had been handcuffed, the police asked if he used meth, and he said yes. According to his attorney, the lab report showed that the syringes were never actually tested, and no meth was found on Jason’s person. On these facts, the prosecutor should not have charged Jason at all; if the needles were unused, there was no real evidence he had committed any crime, only that he might eventually inject drugs sometime in the future—and that he was preparing to do so safely, with clean needles. But even if the prosecutor insisted on charging Jason, he could have charged misdemeanor drug paraphernalia. Instead, the prosecutor pursued a felony charge, and 89 days after Jason was arrested—one day short of the maximum 90 days Texas prosecutors have to obtain a felony indictment—he was indicted for his first felony: possession of methamphetamines. He said:

I was thinking, I told them I was a meth user, which explains why on the indictment it came back methamphetamine. If I were to have told them heroin, it makes me think my indictment would have said heroin, because the needles were brand new; there is no way they could have tested for methamphetamine.

Jason pled to four years’ probation on the same day the indictment was read to him in court, before he knew that the lab had never performed a drug test. Jason ultimately had his probation revoked for failure to report to his probation officer, and he pled to 20 months in a Texas state jail facility.

Alyssa Burns was arrested in Houston for a meth pipe and charged with drug possession, her first felony. She said police performed a field test on the pipe, pouring in a liquid that turned blue to indicate residue. She told us:

Trace cases need to be reevaluated. If you’re being charged with a .01 for a controlled substance, the fact that it turned blue, even if there’s nothing in it, that’s an empty baggie, that’s an empty pipe. There used to be something in it. They are ruining people’s lives over it.

In Galveston, Breanna Wheeler’s lawyer said she waited 79 minutes while officers called a canine unit and searched the car in which she was a passenger. Breanna told us they found an empty plastic bag under her seat, which they alleged belonged to her and had methamphetamine residue on it. After a period of pretrial detention because she could not afford bail, Breanna, a single mother, pled to her first felony conviction and time served so she could return home to her young daughter.
In Houston, Nicole Bishop was charged with two counts of felony possession for heroin residue in an empty baggie and cocaine residue in a plastic straw. The charges meant she was separated from her three children, including her breastfeeding baby. She had already been in pretrial detention for two months when we interviewed her in March 2016.

Miami Judge Dennis Murphy told us judges can take an active role in ensuring defendants are not charged with possession for mere drug paraphernalia:

There’s so much space for judicial discretion. The police and SA’s [State Attorney’s] office will typically arrest for residue and charge for paraphernalia and possession. When a defendant is arraigned in my division and the lab report says merely residue, the defendant is invited to [move to] dismiss the possession [charge]…. So I dismiss the possession and let them plead to paraphernalia. Despite case law from the Third District [Court of Appeal] saying [residue] is still possession, I disagree.

Medications Made into Felonies

A number of interviewees were charged with felony drug possession for medications for which they could not produce the prescription. Some interviewees said they had been prescribed the medication in question but had allowed the prescription to lapse. Others had a partner’s or friend’s medication in their possession when they were arrested. None of them were formally accused of dealing or of committing fraud in obtaining the medications. And none of them felt they should be considered criminals simply for possessing pills many other people in the United States keep in their medicine cabinets.

Possession of certain prescription medications without evidence of the prescription is criminalized, sometimes at the felony level, in most states. Although this may derive from a legislative intent to curb misuse and unlawful sale of prescription medicines, and is particularly relevant today with respect to prescription painkillers, its enforcement can be overbroad. Some of the cases we learned about suggest a lack of reasonableness, and a lack of the prosecutorial investigation that might have revealed mitigating facts: prosecutors failed to exercise their discretion to decline cases or to seek lesser charges, and instead pursued cases aggressively. Defendants we met faced felony charges for possession of commonly prescribed medications including Adderall, Vyvanse, Xanax, and Klonopin.

Anita Robinson, 25, was charged with felony drug possession in Houston for seven Adderall pills. She said that, from the prosecutor’s perspective, “it doesn’t matter that it’s Adderall. [They treat it] like it could be meth or cocaine or whatever; it’s just classified with those same drugs.” Furthermore, in some Texas cases we examined in March 2016, prosecutors sought sentencing enhancements for these offenses or chose to charge according to the total weight of the pills, rather than the strength of the medication within them. For example, Adderall pills come in strengths of 5 to 30 mg, but because the prosecutor considered the entire weight of the pills, George Morris’ possession of seven 20 mg Adderall pills translated into a third degree felony under Texas law.

George Morris’ Story

George Morris told us his story as follows:

When he was 17 years old, George was convicted of burglary. George said he entered the open window of a friend who owed him money and took a PlayStation 2, and that he was prosecuted for burglary even though his friend’s mother tried to get the charges dropped.
He served three years in prison. Ten years later, George was arrested in The Colony, Texas, when police found seven 20 mg Adderall pills in his car. George told us the pills were prescribed to his girlfriend. Prosecutors chose to enhance George’s charge with the PlayStation 2 conviction, so that he faced up to 20 years in prison for possession of the seven pills, despite the fact that their combined strength was a mere 0.14 grams. They ultimately offered him six years in prison in exchange for a guilty plea. Although six years is significantly less than a possible 20, it is a very long time from any other perspective, and it is a grossly disproportionate punishment for George’s “crime.”

When he spoke to us, George was out on bond and had not decided whether to take the offer, but he said the case had already destroyed his life. He said it caused him to go into a depression for which he was hospitalized. His relationship with his girlfriend of 12 years was strained and eventually ended. His depression was so severe that he left his job and lost his house. He told us, “When I caught that charge, it took so much out of me because I was not doing anything to break the law, not doing anything to affect or hurt anyone around me…. Six years of your life … for seven Adderall pills.”

Before being prosecuted, George said, he had a small grass-cutting and construction business; he woke up every day at 8 a.m. and worked all day. He told us everybody knew they could make a little money on the side if they sold drugs, but that he refused to do so:

I made a vow to God … I am not going to have these drugs; I am not going to sell no drugs; I am not going to do any drugs. I am going to focus on what I need to focus on, and that is cutting green grass and building fences. So that’s what I did…. But I gave up on [that] when I caught that case. I was just like, there’s no point in living.... I stay in my room and I sleep.

We met many others like George. One of them was Amit Goel, a 19-year-old college sophomore in Dallas who had been prescribed Adderall since high school but said that he had let his prescription run out the previous month. He was arrested with eight pills of Adderall and Vyvanse, another ADHD medication, and was facing a third degree felony for drug possession, which carries two to ten years in prison. He told us he got his prescription renewed the month after his arrest, but the prosecutor continued to pursue felony charges, on what would be Amit’s first felony conviction.

Months after our visit to Texas, practitioners told us it had been discovered that possession of Adderall and Vyvanse was “mistakenly” no longer a felony offense, due to the “unintended consequences” of a Texas bill passed in 2015. According to the Texas District and County Attorneys Association, “The upshot of all this is that after September 1, 2015, most (all?) Adderall and Vyvanse crimes became misdemeanors, not felonies.” The three Texas cases above were nonetheless all being prosecuted as felonies in March 2016.

In Jefferson Parish, Louisiana, Darius Mitchell, profiled in section IV, was charged with his first felony for hydrocodone pills he said his son’s mother had left in his car after their visit to the emergency room. In Shreveport, Glenda Hughes pled guilty to her first felony for possession of pills that she said were her husband’s. She told us she was arrested in her nightgown, without shoes, having run out the door with her purse after her husband beat her.
She said that her husband was prescribed Klonopin and that, because he would misuse the pills, she carried them for him to help him comply with the prescribed dosage. Glenda told us her husband said the pills were his and tried to explain things to the prosecutor.

Charging Distribution in Possession Cases

In all four states we visited, some defendants were arrested in possession of drugs that they said were for their own use, but prosecutors chose to charge distribution or possession with intent to distribute (PWID)—without making any effort, as far as defendants or their lawyers could tell, to investigate whether the drugs were in fact for personal use. Pursuing distribution charges on facts supporting simple possession is yet another example of prosecutors charging as aggressively as possible. A Caddo Parish defense attorney summed up what many had told us in all four states we visited: “They overbill the PWID charges. Anything approaching the weight [of distribution], anything with baggies. [Because] if the charge is PWID, it’s a higher bond.” Because a higher bond means defendants are more likely to have to wait in jail until their case is disposed of, and because PWID carries longer sentences, many interviewees felt the charge was meant to force their hand into accepting a plea offer on simple possession, a topic explored in more depth in the next section.

In most states, PWID is usually proved through circumstantial evidence such as the presence of individually packaged bags; scales, ledgers, or records of sales; and, more problematically, the presence of cash. In some states, drug quantity alone is presumptive evidence of possession with intent to distribute or of distribution. In Florida, possession over certain thresholds is considered drug trafficking. Although individually packaged bags, scales, ledgers, and sales records may be sound evidence of distribution in some cases, cash or quantity alone is problematic. As Judge Murphy told us in Miami, “More than half the time, those PWIDs [should] become possession charges.... You get people on payday [so they have cash]. There goes your rent check, your food check.”

Using the presence of cash as evidence of distribution is flawed, both as a matter of evidence and as a matter of fairness. It is clearly not illegal to carry cash; without more, a person’s possession of significant sums in cash is at best extremely dubious evidence of criminal activity of any kind. At worst, it is a flimsy pretext to bolster charges that lack real evidence to support them. In fact, poor people may be more likely to carry cash on them, not because they are drug dealers but because they are less likely to maintain a bank account. A large percentage of poor people are unbanked (having no bank account) or underbanked (relying more heavily on alternative financial providers than on their bank), and Black and Latino households are significantly more likely to be unbanked or underbanked than white households. In a case in Shreveport, a defendant and his attorney reported that prosecutors used the fact that the defendant had $800 cash on him to increase the charges against him to include distribution of drugs, and that they did so even after he showed them he had just cashed a check from an insurance claim after a car accident.

David Ross said he was arrested in 2013 with a couple of grams of methamphetamines and eight to ten Percocet pills.
Although there was no evidence of actual dealing, he was charged with two separate counts of distribution because he had drugs and money on him. In the courtroom, David told us, the prosecutor offered to lower the charges to possession if he took 10 years in prison—5 on each charge, run consecutively. David accepted on the spot, and the police kept his $800 through civil forfeiture. He said:

[My cases were] always possession, because I’ve had a drug problem since I was 16 or 17 years old…. They’re going to say you’re distributing when they know you’re not, so that when it comes to make a deal with you they will drop it down to simple possession and max you out. And you’re happy to take it, as you’d rather do 5 than 30.

In addition to the problems of relying solely on cash as evidence, a number of interviewees argued it is a mistake to assume a larger quantity of drugs means the person is necessarily distributing. They said they buy a larger amount because it is cheaper and so that they do not need to return so frequently to their dealer, which can be dangerous and intimidating. Carla James was arrested in Dallas in 2010 for possession of seven grams of methamphetamines. Although she said the police wrote it up as drug possession, she was indicted on distribution charges because of the quantity. But she explained the meth was for personal use:

I bought a large quantity because I didn’t like going to the dope house…. You get more for your money when you get a higher amount…. It’s just like going to the grocery store…. You know you need a gallon of milk to make it to Friday. A gallon costs $2.50, and a half gallon costs $1.75. Why would you buy the half gallon, knowing it’s only going to last half of the week, when the full gallon is only [75 cents] more? Why buy a gram for $100 when you could buy 7 for $300? [189]

Where judges call foul, some prosecutors amend the charge down to possession. In Caddo Parish, Louisiana, Judge Craig Marcotte said he had intervened in this way:

Now, have I seen cases charged with possession with intent when they should have been possession? Sure. You can say this looks like possession to me, not possession with intent, which I have done before. A lot of the times, they say, “Okay, judge” [and they downgrade the charge]. You can just tell … you know, having done this for so long, having seen thousands and thousands of these cases.
VI. Pretrial Detention and the False Choice of a Plea Deal

Bail is very wrong here, very wrong. It’s always too high. That causes at least two problems that I see. Number one, it causes more people to have to stay in jail. [Number two,] when people are sitting in jail they’re much more prone to say, “Well, I’ll plead because I’ll get out.”… [But] they shouldn’t have been there in the first place. They should have had an unsecured promise to come to court. Because [pleading] is going to come to haunt you down the line.

—Paul Carmouche, former district attorney for Caddo Parish, Louisiana, February 2016

Pretrial detention in drug cases contributes significantly to soaring jail and prison admissions and to the standing incarcerated population in the United States. In 2014, approximately 64,000 people per day were detained pretrial for drug possession, many of them in jail solely because they could not afford to post bail. As detailed in this section, this fact gives prosecutors significant leverage to coerce plea deals. Pretrial detention, an inherently negative experience, also separates many defendants from their families and jobs and threatens lasting harm or disruption to their lives. To avoid all of this—or because long sentences otherwise hang over their heads if they lose at trial—many defendants plead guilty simply to secure their release, in cases where they might otherwise want to go to trial.

Pretrial Detention

During the pretrial stages of a criminal case, judges can either release defendants on their own recognizance or set a money bond (also known as bail). Release on own recognizance (ROR), also known as a personal recognizance (PR) bond, permits someone to be released until the next court date simply on a promise to appear; they must pay the specified bond amount only if they fail to do so. For people we interviewed in Texas and Louisiana, a PR bond was not offered, even though it was statutorily available to the judge. Instead, bail was set at thousands of dollars.

Defendants who cannot afford to pay the full bail amount often use a bondsman instead. Under this scheme, defendants pay a fee to a private bondsman company (sometimes 10 to 13 percent of the total bail amount), and the bondsman then takes on the obligation to ensure their reappearance. Defendants never get the bondsman’s fee back, so the system imposes financial costs on low-income defendants that people with the independent means to post bail do not incur. If defendants lack the financial resources to post bail, either through a bondsman or on their own, they remain incarcerated until they come up with the money or until their case is disposed of.

That effect is wide-reaching. In the two states for which we received court data containing attorney information, the majority of drug possession defendants were indigent—in other words, poor enough that they qualified for court-appointed counsel. In Florida, 64 percent of felony drug possession defendants relied on court-appointed rather than retained counsel. In Alabama, the rate was 70 percent, counting marijuana as well as felony drug possession. And these numbers are conservative, because indigent defendants who qualify for court-appointed counsel may still choose to sacrifice other resources and needs to pay for an attorney.

High rates of pretrial detention reflect the reality that judges set bail so high that many defendants cannot afford it.
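The arithmetic of the bondsman system is simple but punishing. As a hypothetical illustration (the bail figure is ours; the 10 to 13 percent fee range is as described above): on a $10,000 bail, a bondsman’s fee would run $1,000 to $1,300, money the defendant never gets back whatever the outcome of the case, while a defendant able to post the full $10,000 directly would recover it by appearing in court.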
In 2009, the most recent year for which the US Department of Justice has published data, 34 percent of possession defendants were detained pretrial in the 75 largest counties. Nearly all of those detained pretrial (91.4 percent) were held on bail, meaning that if they had had the means to pay, they would have been released.[197] That same year, possession defendants in the 75 largest counties had an average bail of $24,000. Because the higher the bail, the more likely someone will not be able to afford it, the average bail for those detained was even higher. For those defendants, the average was $39,900.

The money bail system is premised on the idea that defendants will pay to get out of jail and that, if the amount is high enough, they will return to court to get their money back. In theory, the principal goal is to ensure that defendants return: in other words, to prevent flight. Yet data shows that drug possession defendants released pretrial do come back to court. Human Rights Watch has previously examined the myth that released defendants evade justice in New York. Failure to appear rates are similarly low in other jurisdictions and for drug possession specifically. In the 75 largest US counties in 2009, 78 percent of people charged with possession and released pretrial made all their appearances in court; another 18 percent returned to court after their missed appearance(s). This means that in total 96 percent of all possession defendants ultimately came back to court. Although the data does not indicate whether these defendants posted bail or were released on their own recognizance, it certainly counsels in favor of affordable bond that enables release.

When judges set bond, the amount should reflect an individualized determination not only of the flight risk posed by the particular defendant but also of that person's ability to pay. But in many jurisdictions we visited, interviewees said judges did not take their individual circumstances into account. In St. Tammany Parish, interviewees said their bonds were set even before they had met their appointed counsel, without a formal hearing. In a number of jurisdictions in Louisiana, bond is routinely set high, and it is up to defense counsel to file a motion to reduce bond, which is then scheduled for a hearing sometime later. For low-income defendants unable to pay a high bond, this means they remain detained at least until the bond is reduced some weeks later.

In Texas and Louisiana, we interviewed approximately 30 defendants who could not afford the bondsman's fee, let alone their full bail, and as a result were forced to remain in pretrial detention until their case was resolved. For some people, taking a case to trial may mean languishing in detention for over a year. Even for those ready to enter a plea deal, many had to spend months in detention before the prosecutor made an offer. In 2009, the median time between arrest and adjudication for possession defendants in the 75 largest counties was 65 days, which would be spent in jail if a person could not afford bond. For people we interviewed, the wait was often much longer. Jason Gaines was charged with drug possession in Granbury, Texas, and said his bond was set at $7,500. He told us, "It was important to bond out because I didn't want to be stuck in here forever.
It takes at least three months to go to court for your first offer."

In our jail interviews in Texas and Louisiana, some pretrial detainees were waiting in jail while their attorneys investigated the case and filed pretrial motions, so that if they were going to consider pleading guilty, they could do so with a better sense of the strengths and weaknesses of their case. Other interviewees remained in pretrial detention because they wanted to go to trial or because they were hoping to get a better offer from the prosecutor. Some said they ultimately gave up, because fighting a case—either at trial or through pretrial motions such as for suppression of evidence—meant waiting too many months. Delays can be caused by overburdened courts and public defender systems, laboratory testing, and lack of communication between offices.

When we met him, Matthew Russell had been waiting in pretrial detention for 16 months to take his "trace" possession case to trial. He said, "[If I didn't have priors,] I'd be looking at 24 months. I've done 16 [pretrial]…. I spent my 39th birthday here, my 40th birthday here in this jail … waiting to go to trial."

Bond Schedules

In Texas jurisdictions we visited, bail was set according to bond schedules that provided presumptive amounts of bail according to the charge, sometimes with enhancements for criminal history, but regardless of ability to pay. As a one-size-fits-all model, bond schedules deprive defendants of individualized determinations. In litigation, the US has emphasized that it would be unconstitutional for detention to depend solely on a person's ability to pay the schedule amount. Yet the use of bond schedules is prevalent nationwide. A 2009 study of the 112 most populous counties found that 64 percent of those jurisdictions relied on them.

Presumptive bail amounts may also vary greatly between jurisdictions within a state, increasing the arbitrariness and inequality of the practice. For example, the ACLU of California reported in 2012 that there were 58 different bond schedules in use across the state. For simple drug possession, presumptive bail amounts were $5,000 in Fresno and Sacramento, $10,000 in Alameda and Los Angeles, and $25,000 in San Bernardino and Tulare. Although, in theory, judges can depart from the schedule in individual cases, defense attorneys told us that as a matter of practice they rarely do.

High bonds also mean that some people we spoke with were detained pretrial even though they were only facing probation post-conviction. In Texas, a first-offense state jail felony requires mandatory probation if the person is convicted. Yet many people are detained pretrial, sometimes even for months, before they are convicted and sentenced to probation. This means that someone ends up doing jail time in a case for which the legislature, judge, prosecutor, and defense attorney all agree any period of incarceration as a form of punishment is unwarranted.

Waiting on Charges in Louisiana

Defense attorneys in Louisiana told us defendants experienced long waits in detention before the prosecutor charged them through a formal bill of information or indictment. Under international human rights law, authorities cannot hold individuals for extended periods without charge; to do so amounts to arbitrary detention. The US Supreme Court has held that within 48 hours of arrest, a judge or magistrate must make a probable cause determination that the detainee has committed some crime.
But beyond the 48-hour rule, the prosecutor has still more time to decide which charges to bring—the Supreme Court has not yet ruled on how long this period may be, and jurisdictions vary widely in how they regulate it. Under Louisiana Code of Criminal Procedure article 701, the district attorney's (DA) office has 60 days to "accept" the charges in the police report for a felony defendant detained pretrial—far in excess of the period many other US states allow. That means that for two months, a defendant who cannot afford bond, and who has not been formally charged with—let alone convicted of—any crime, is forced to wait in jail without even knowing the charges against him or her. Judge Calvin Johnson told us, "You shouldn't be arresting a person on January 1 and charging him in March. I mean that just shouldn't be."[213]

In St. Tammany and Calcasieu Parishes, public defenders told us that prosecutors regularly would not file charges within the mandatory 60 days, and were routinely granted extensions of time by the court—typically another 30 days—to make their charging decision. Defendants and practitioners call this period of pretrial detention "doing DA time."

Studies show that case outcomes for those fighting their charges from outside of jail are across the board more favorable than for those who are detained pretrial. According to the Bureau of Justice Statistics, in the 75 largest counties in 2009, fewer than 60 percent of defendants charged with drug offenses were convicted if they were released pretrial; however, close to 80 percent of those detained were convicted. Analyzing 60,000 cases in Kentucky from 2009 and 2010, the Arnold Foundation found that defendants detained for the entire pretrial period were over four times more likely to receive a jail sentence and over three times more likely to receive a prison sentence than those released at some point pretrial. Sentences were nearly three times as long for defendants sentenced to jail, and more than twice as long for those sentenced to prison, as for defendants released pretrial.

One of the main reasons pretrial detention correlates with worse case outcomes is that detainees may be more likely to plead guilty when they are already in jail. In fact, our research suggests that prosecutors in some jurisdictions seek, and judges set, bail at an amount they expect defendants will not be able to pay, in order to ensure they end up in pretrial detention—which makes them likely to accept a plea deal faster. Joyce Briggs told us, "They hold you until you plead. That impacts people's decision to plead. It impacts mine. I know I'm not going to get more than three years and I've already done a half a year, so—that's how our minds work."

Conditioning loss of liberty on ability to pay infringes on the right to equality under the law and amounts to wealth discrimination. Human rights law requires that pretrial restrictions be consistent with the right to liberty, the presumption of innocence, and the right to equality under the law. Pretrial detention imposed on criminal defendants accused of drug possession solely because they cannot afford bail is inconsistent with those rights. The stress and suffering interviewees charged with drug possession endured in detention simply because of their low-income status is unfair, unnecessary, and inconsistent with human rights.

Coerced Guilty Pleas

They forced me. I mean there's no doubt in my mind they forced me.
―David Ross, on pleading guilty to drug possession in Caddo Parish, Louisiana

Like all criminal defendants in the United States, people charged with drug possession have a right to trial by jury. In practice, however, jury trials are exceedingly rare, with the majority of defendants at the state and federal levels—across all categories of crime—resolving their cases through guilty pleas. In 2009, between 99 and 100 percent of individuals convicted of drug possession in the 75 largest counties nationwide pled guilty. In Texas, approximately 97 percent of all felony possession convictions between September 2010 and January 2016 were obtained by a guilty plea. In Florida, more than nine out of every ten people facing drug possession charges in court (both misdemeanor and felony) between 2010 and 2015 pled guilty.[223] Only 1 percent of all drug possession defendants in the state went to trial.[224] In New York, such trials were almost nonexistent: 99.8 percent of the 143,986 adults convicted of drug possession between 2010 and 2015 accepted plea deals.[225]

For scores of individuals interviewed for this report, the right to a jury trial was effectively meaningless. For them, the idea of a trial was more of a threat than a right, often because it meant further pretrial incarceration until trial and/or a "trial penalty" in the form of a substantially longer sentence if they exercised that right and lost.

Part of the problem is that the criminal justice system is overburdened, which means not only that prosecutors and judges are busy, but also that public defenders—who are often substantially underfunded—do not have sufficient time and resources to devote to each case, disparately impacting poor defendants, who make up the majority of those charged with drug possession. So long as dockets remain as crowded as they are today, there will be a powerful incentive for prosecutors to secure pleas in as many cases as possible—including by strong-arm means. As explained by former chief prosecutor Paul Carmouche, "If every defendant said, 'Hey, we're going to trial,' then the system stops. It would be jammed up. You got to plead."

According to one Texas prosecutor, prosecutors feel pressure to move cases quickly, and the pressure sometimes comes from judges:

It's so unfair: Everybody in the criminal justice system knows that if a person can't bond out he's more likely to plead and you'll have your case moved…. Judges will campaign on efficiency [and] in order to do that, to say "I have the smallest docket of all judges," they force the prosecutors to plead more cases and force the defendants to plead to them, by issuing high bonds and refusing to lower them.… That external pressure feeds the lock-them-up system.

A Crowded Court Docket—Full of Drug Possession Cases

Some of the system "jam" is attributable to the large volume of drug possession cases prosecuted and disposed of by state courts. For example, from September 2010 through January 2016, Texas courts disposed of 893,439 drug cases (misdemeanors and felonies). Of all these drug cases, 78 percent (almost 700,000 cases) were for simple possession. Among felony drug cases, 81 percent were for possession. More than half of Texas' drug cases during this period were misdemeanor cases (such as possession of marijuana or drug paraphernalia). Three quarters of all misdemeanor drug cases in the state were for marijuana possession only.
In other words, there were approximately 371,000 marijuana possession cases prosecuted and disposed of in a little over five years. In total, drug possession cases accounted for over 15 percent of all county and district court criminal dockets in Texas. In Florida, drug possession was the most serious charge in about 14 percent of all cases filed by prosecutors in county or district court.

There is nothing inherently wrong with plea deals as long as the plea process is not coercive. Coercion arises when prosecutors leverage the threat of an egregiously long sentence to induce defendants to plead guilty to a lesser one, or when unreasonably high bail means that the only way to escape lengthy pretrial detention is to plead to probation, time served, or relatively short incarceration.

Before a judge accepts the defendant's guilty plea, the judge must perform a "plea colloquy" with the defendant—a series of questions to ensure the defendant is knowingly and voluntarily waiving the right to a jury trial. Among those questions is some version of the following, which is constitutionally required in every state and federal system: "Has anyone forced or threatened you to plead guilty, or offered you any promises other than what's contained in your plea agreement?" To a number of interviewees, this felt disingenuous. They knew they had to answer "no" to have their plea accepted, but said it was precisely a combination of coercion, threats, and promises that led them to plead. Oscar Washington told us, "I remember everything of what the judge said [in] the plea [colloquy]. I felt like my back was against the wall, like the judge had me by the neck when he said, 'Did anyone force you to take this plea?' I couldn't say yes."

Interviewees in every jurisdiction we visited said they pled because the cost to their lives of waiting for trial in jail, or of risking the unreasonably steep penalties prosecutors threatened them with should they go to trial and lose, was too high.

Pressuring Defendants with "Exploding Offers"

In many cases we examined in Louisiana and Texas, defendants were pressured into pleading guilty before they had seen the evidence against them or knew anything about the strength of the prosecutor's case. In several jurisdictions, defense attorneys told us that prosecutors would sometimes make an "exploding offer"—a plea deal that was available only if the defendant took it immediately, sometimes the first time the person appeared in court. In other cases, the offer would be off the table if the defendant filed any pretrial motions, for example a motion to suppress. In Dallas, defense attorneys said plea offers were good only until grand jury indictment, the formal charging document.

In Slidell, Louisiana, Joel Cunningham, a Navy veteran, said he pled to 15 years in prison at his 2012 arraignment for possession of marijuana and possession of one gram of cocaine with intent to distribute. At the time he pled, he said he had not seen the evidence against him; it was his first day in court, when the charges are read against a defendant. "The bill of information was filed. Eight days later I was arraigned. Two hours later I pled. The 15-year deal would come off the table if I didn't plead immediately." Having since seen the evidence, Joel told us he would have challenged it with pretrial motions.

In Caddo Parish, Louisiana, David Ross pled to 10 years for two possession charges.
He said he had less than 10 minutes to accept the prosecutor's offer: "They made me take 10 years that day, or they would have taken me to trial [on distribution] and I would have got a life sentence … because if you lose in Caddo Parish at trial, you're getting a life sentence." These practices add to the pressures, threats, and promises that lead defendants to plead guilty when they might otherwise exercise their right to require the government to prove its case.

Pleading to Get Out of Jail

For drug possession defendants with little to no criminal history, or in relatively minor cases, prosecutors in each state we visited often made offers of probation or relatively short incarceration terms. A short sentence may effectively mean "time served," since defendants usually get credit against their sentence for time spent in pretrial detention. Numerous defendants recounted being faced with a choice: fight the case and stay in jail, or take a conviction and walk out the door with their family. A Texas prosecutor told us:

Dangling probation out there when a defendant can't afford to bond out is something prosecutors do to plead cases out. Especially in weaker cases, for example when there are multiple people in the car, or identity is an issue; you might dangle probation out there just to get a conviction and if the person screws up on probation you can go back and get the punishment you wanted. That's the reality, because the vast majority of people are not going to be successful on probation.

Numerous defense attorneys told us that they had counseled their clients on the risks of taking a conviction, the onerous conditions of probation, and/or the strength of their case should they choose to fight it and not take a plea. Moreover, a felony plea could serve as a predicate for enhancement of a subsequent charge down the road, exposing the person to even harsher plea coercion later. But taking a case to trial may take months, all of which defendants must spend waiting in jail if they cannot afford bond. Their choice is ultimately between the right to a trial and the promise of freedom. John Lindner, District Defender in St. Tammany Parish, summarized the problem:

Innocent people plead all the time. Not only here, but nationwide. It's a matter of, if I stick you in jail, and you've been in jail for four or five months, and I come to you, "Hey, you can go home today, all you have to say is 'yeah, I'm guilty,' and you get to go home on probation." You might jump on that.

Interviewees explained why it was an obvious choice to plead guilty when they were in detention, although they would have fought their case if they had been on pretrial release:

In New York City, Deon Charles told us he pled guilty to possession with intent to distribute cocaine because his daughter had just been born that day: "I never sold drugs … it was bogus. [But] I didn't have the funds to afford to fight [and] my daughter was born [that] day…. I pled because I wanted to see my daughter. And when I pled I got to go home. But I lost my job [as an EMT] because of it."
Alyssa Burns, charged in Houston with residue in a meth pipe, said if she could bond out she would take the case to trial. “I would probably win at trial, but I talked to a girl yesterday and she had been sitting here for 11 months waiting for labs…. I can’t do it. This place is awful. So now I’m just gonna sign for a felony, flush my degree down the toilet and just see what happens.”
Breanna Wheeler, a single mother in Galveston, never showed up to chaperone her 9-year-old daughter’s school trip. She had been arrested the night before with residue on a plastic bag. Against her attorney’s advice, she pled to probation and her first felony conviction. They both said she had a strong case that could be won in pretrial motions, but her attorney had been waiting months for the police records and Breanna needed to return to her daughter. Afterwards, her attorney said, “She’s home with her kid, but she’s a felon.”
Also in Galveston, Jack Hoffman was detained pretrial for meth possession. He told us, "I don't have money to bond out…. I don't want to sign [for this felony], but if it means getting out there to my life and my family, I'll do whatever it takes…. If I could bond out and still work and support my family, then I would fight it. But from in here? … It's kind of a catch-22 situation, damned if you do, damned if you don't."

Dhu Thompson, a former New Orleans and Caddo Parish prosecutor, warned that a decision to plead to probation, though it seems obvious at the time, may haunt the defendant down the road:

Say you have an individual charged with possession of cocaine. But the individual has now been in jail for 25 days and will plead to anything to get out. He comes to court, and the prosecutor offers him a felony plea. Nine out of ten times they're going to take it. [But now] they have that first felony on their record. They can't vote. They can't get a job. You know, family may ostracize them. That may create a problem where now you're a repeat offender because this individual is desperate and does something in a desperate situation.

Pleading to Avoid the Trial Penalty

Prosecutors wield so much power in the plea system that defendants often have no expectation or hope that they will receive a proportionate sentence if they lose at trial. Many prosecutors use the threat of adding, or the promise of dropping, charges or sentencing enhancements to pressure defendants to give up their right to trial. Concerned about how a pled-to felony makes clients vulnerable under Louisiana's harsh habitual offender law, public defender Barksdale Hortenstine, Jr. said, "I can't tell you how many clients [I've had where] at the end of the representation, I've told them, 'I will buy you the ticket, I will do anything I can, will you please leave this state? You cannot afford the risk involved in living here.'"

The Threat of Enhancements

In cases we examined in Louisiana and Texas, prosecutors used habitual offender laws to enhance a defendant's sentence range based on prior convictions and then offered to drop the enhancements in exchange for a guilty plea. This tactic was used—even in cases where defendants had only drug possession priors or other non-violent, low-level convictions such as theft—either to scare them into a plea deal or, when they refused, to penalize them for going to trial. Interviewees in Louisiana and Texas described how prosecutors used fear of enhancements to scare them into accepting plea offers that, in some cases, were horrible "deals" but that seemed reasonable to them nevertheless in light of the trial penalty they faced as habitual offenders. In the New Orleans Public Defender's Office, Barksdale Hortenstine, Jr. explained, "The risk associated with [the habitual offender law] is so high that any rational lawyer has to advise vigorously to take deals that otherwise would seem absurd. So you end up pleading to five years in prison or eight years in prison [for possession]. Those numbers are commonly passed around."

When the Prosecutor, Not the Judge, Selects the Sentence

In Louisiana, the habitual offender law provides for mandatory minimums, meaning that the judge typically has no discretion to sentence below them. Mandatory minimums take sentencing authority away from the judge and place it in the hands of prosecutors instead. Judges in Louisiana acknowledged that this meant the prosecutor wields a powerful tool—"a huge hammer," according to Caddo Parish Judge Marcotte.
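The leverage these enhancements create is easiest to see as arithmetic. Below is a minimal sketch of the plea calculus, assuming the Texas ranges cited later in this report (two to ten years for third degree possession; 25 to 99 years or life once enhanced by two prior felonies); the five-year offer and the odds of conviction are hypothetical, not drawn from any case file.

# Illustrative sketch of the plea calculus once a habitual offender
# enhancement is on the table. The ranges are Texas figures cited in this
# report; the offer and the probability of losing at trial are assumptions.

plea_offer_years = 5         # hypothetical, like the "take the five right now" offers described below
unenhanced_range = (2, 10)   # third degree felony possession (one to four grams)
enhanced_range = (25, 99)    # same charge with two prior felonies ("25 to life")

p_lose_at_trial = 0.8        # assumed probability of conviction at trial

# After losing at trial, the best case is the enhanced minimum, so the
# expected sentence from insisting on trial is at least:
expected_years_if_trial = p_lose_at_trial * enhanced_range[0]

print(f"unenhanced range: {unenhanced_range[0]}-{unenhanced_range[1]} years")
print(f"plead: {plea_offer_years} years, certain")
print(f"trial: at least {expected_years_if_trial:.0f} expected years "
      f"({enhanced_range[0]}-{enhanced_range[1]} if convicted)")

On these assumed numbers, even a defendant with a real chance of acquittal is rational to take the plea, which is the dynamic defendants like Hector Ruiz describe below.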
Numerous government officials in Louisiana told us the habitual offender law is used mostly for drug dealers and not for those charged with simple possession, or that it is used for defendants with violent criminal histories or other serious felony priors and "then the straw that breaks the camel's back is the possession." However, we documented cases in Louisiana and also in Texas where prosecutors used the habitual offender law against defendants whose only prior convictions were for drugs, such as Leroy Carter.

After suffering an injury while serving in the Navy, Leroy Carter was given a medical discharge and prescribed pain medications. He became dependent on the medications, and eventually he turned to other drugs. Now Leroy is serving 10 years on a plea deal for possession of marijuana and heroin. He pled guilty in 2012 in New Orleans because he was facing 20 years to life in prison if he lost at trial. His priors were all drug convictions: two marijuana possessions in the early 2000s, a heroin possession in 1999, and a marijuana distribution conviction in 1998. When we spoke to him on the phone, we asked how much time he had to talk. He answered, "Ten years."

In Texas, defendants told us the habitual offender enhancements made them feel they had no choice but to plead:

In Fort Worth, Hector Ruiz faced 25 to life for heroin possession because of his two prior felonies. "If I lose at trial, they start at 25…. It's a scare tactic so you don't go to trial. 'You better not go to trial because if you lose, this is what happens! So take the five right now!' This is not fair."
In Granbury, Matthew Russell faced 20 years for a trace amount of methamphetamines. He told us, “I’m so stressed out that some days it almost makes me want to kill myself…. [20 years,] that scares me. And that is what they are made on. They are made on a man’s mental capacity, trying to pervert you by fear. This court system is a game of manipulation.”
Douglas Watson was arrested in Dallas for what field tested as 0.1 gram of heroin and 0.2 grams of meth found inside a pipe. He was not charged with the paraphernalia. Because he had two prior state jail felonies for possession, his sentencing range was enhanced to two to 10 years in prison. In a split second Douglas decided to plead, waiving laboratory testing and grand jury indictment, because the prosecutor offered him two years in prison. Although it is shorter than many of the other sentences we documented, it was still time behind bars and two felony convictions for a minuscule amount of drugs whose weight Douglas did not even have time to challenge.
In Dallas, Bill Moore, 66, pled to three years in prison because he faced two to 10 years on what the laboratory tested as 0.0202 grams of meth. He noted that after testing a speck that weighed two hundredths of a gram, the prosecution "wouldn't have had anything left to show as evidence if [I'd] gone to trial. But what if they did, and they'd given me 10 years instead of three? I wouldn't have any chance of getting out anytime soon that I know of. [I would have been] in my 70s. It's hard for me to even say that."

Relatively few people test whether the prosecutor and judge will follow through with the trial penalty: nationwide, as described above, between 99 and 100 percent of drug possession defendants plead guilty. But Jennifer and Corey were among the 1 percent who insisted on their right to trial, even in the face of the trial penalty. When they lost, they were sentenced to two decades behind bars, of which Louisiana law required they serve every day.

Jennifer's Story

In 2016 in Covington, Louisiana, Jennifer Edwards was charged with heroin possession for a residue amount. The prosecutor made her a plea offer of seven years in prison. Because of her three drug possession priors (for Xanax, cocaine, and Ecstasy), she faced 20 years to life in prison if she refused the offer and lost at trial. With such a high trial penalty, her lawyer encouraged her to take the plea, but Jennifer insisted on her innocence. She told us, "I got about five minutes [to think about the offer]. That's it…. I asked if I could please have the night to think about it, and they said, 'Nope, the jury's out there, you either taking this deal or you're going to trial.'"

Jennifer took her case to trial, and the jury convicted her. When we spoke to her, she was waiting for the judge to choose a sentence between 20 years and life in prison:

I remember when they said I was guilty in the courtroom, the wind was knocked out of me. I went, "The rest of my life?" I still can't believe it. All I could think about is that I could never do anything enjoyable in my life again. Never like be in love with someone and be alone with them. Just anything, you know…. I'll never be able to use a cell phone ... take a shower in private, use the bathroom in private. Like all those things, I can never do those things…. I told [my attorney] during trial, no matter what happens, they can keep sticking me in here but they can never convince me what I'm doing is wrong.

Jennifer told us that other detainees viewed her case as a cautionary tale:

There's 60 people in my cell, and only one of us has gone to trial. They are afraid to be in my situation. The [prosecutors] threaten everybody. I've seen people take 10 years flat, 15 years flat. I don't even understand it. Ten years flat? Might as well take a chance with the jury. [If everybody went to trial,] I think it would make the negotiating stronger on our end, but nobody does it. Because if [everyone] did that, they wouldn't be able to bring everyone to trial…. Everybody has to stick together and say, "No," and, "I want a speedy trial."

***

Corey's Story

In 2011, 25-year-old Corey Ladd was arrested in New Orleans with a plastic baggie containing a half-ounce of marijuana. Years before, Corey had pled guilty to two felony convictions, for hydrocodone possession at age 18 and LSD possession at age 21, and had been sentenced to probation for each. This time, the prosecutor sought serious prison time.
Because of his priors, the prosecutor chose to charge Corey as a third-time offender, so that he faced a minimum of 13 years and 4 months, up to a maximum of 40 years in prison, for marijuana possession. Corey told us he was offered 10 years in exchange for a guilty plea. Despite the risk of such a high penalty, Corey refused the prosecutor's offer and insisted on his right to trial. In 2013, the jury returned a guilty verdict. The judge imposed the penalty: for possessing a half-ounce of marijuana, she sentenced Corey to 20 years in prison without parole.

Corey appealed his sentence to the state appeals court, which found in 2014 that 20 years was not "excessive" for marijuana possession for a third-time offender. Corey then appealed to the Louisiana Supreme Court, which found in 2015 that the trial judge had failed to state her reasons and sent the case back to her for resentencing. Two of the four Supreme Court judges expressed concern that "this sentence on its face seems very harsh."

When the trial judge resentenced Corey to 17 years without parole, he appealed yet again to the state appeals court. This time, in April 2016, it was the state appeals court that reversed the 17-year sentence and sent Corey's case back to the trial judge for resentencing. The appeals court wrote:

The laws nationwide are changing, as is public perception. As mentioned above, this defendant would conceivably be in his forties before he is released. Although the defendant's seventeen-year sentence is within the range of permissible sentences, on its face, the sheer harshness of the sentence shocks the conscience.

In spite of this history, the prosecutor held his ground. He objected to the appeals court's reversal and filed an appeal of his own to the Louisiana Supreme Court. As of this writing, Corey is waiting for that decision. He has been in prison for 4 years and has never held his 4-year-old daughter outside of prison walls.

Why Habitual Offender Laws Do Not Make Sense for Drug Possession

In the context of drug possession, the effect of habitual offender laws is to punish habitual drug use. Although any criminal sanction for drug use is inappropriate, habitual offender sentencing delivers especially disproportionate punishment. If a person is facing a subsequent conviction for drug possession, it is simply an indication that the criminal justice system has failed to stop drug use, not that the person deserves a longer sentence. Moreover, it risks punishing some people for "recidivism" who may in fact be drug dependent, a health rather than a criminal justice issue.

Several Louisiana officials, recognizing this, argued that habitual offender enhancements should not be applied to drug possession. As Judge Calvin Johnson, formerly on the bench in New Orleans, told us, "The rationale for it again is that individuals who commit multiple offenses are bad, bad people and they should be convicted accordingly…. My knee jerk reaction is no.… The fact that a drug user has been arrested for drugs multiple times means only that that person has had drugs multiple times.
It doesn’t impact you, or me, or anyone in this room.” He told us he “would take [drug possession] out” of the criminal justice system entirely, “but a step towards that would be to move drug possessors out of the multi-bill statute.”[271] Reviewing Corey Ladd’s possession case, the Louisiana Court of Appeals for the Fourth Circuit delivered a striking condemnation of the prosecutor’s decision to use the habitual offender law: [The habitual offender law] dramatically limits judges’ ability to consider the human element and the life-time impact of harsh sentences on both defendants and their families, not to mention the State’s economic interest. Sentences should be sufficient but not greater than necessary to meet the goals and expectations of sentencing. Is it deterrence? Is it punitive? Far too much authority has been usurped from judges under the pretext of appearing “tough” on crime and allowing the habitual offender statute to become what now appears to be an archaic draconian measure. Our state, Louisiana, has some of the harshest sentencing statutes in these United States. Yet, this state also has one of the highest rates of incarceration, crime rate and recidivism. It would appear that the purpose of the habitual offender statutes to deter crime is not working and the State’s finances are being drained by the excessive incarcerations, particularly those for non-violent crimes. [272] For all these reasons, sentences for drug possession should not be subject to enhancement under habitual offender laws, regardless of the prior offense type, and past convictions for drug possession should not be used as predicates for enhancements of sentences for any other offense. The Threat of Higher Charges Instead of the threat of enhancements at trial, some defendants face higher charges if they insist on their trial rights, and are offered a plea to the lesser charge of possession if they give up those rights. We heard frequently that prosecutors would charge possession with intent (or distribution) for what otherwise could be considered simple possession and that those cases typically ended in a plea to simple possession. This raises concerns that prosecutors may be overcharging defendants in order to coerce pleas. A significant number of distribution charges are disposed of with pleas to simple possession. For example, in New York, over half of all possession with intent to distribute arrests and a third of sales arrests were disposed of with guilty pleas to possession charges. In some of these cases, people actually guilty of selling may be getting good deals. However, we documented cases where the more serious initial charges appear instead to represent an attempt at coercing defendants to plead guilty to the more appropriate charge. Jerry’s Story Jerry Bennett told us he pled guilty to two-and-a-half years in prison for half a gram of marijuana because the prosecutor threatened to charge him with distribution. Jerry was arrested in New Orleans in March 2015 and charged with possession of half a gram of marijuana that was found in the backseat of the truck in which he was a passenger. Because he had prior marijuana possession convictions, it was a felony charge. He sat in jail for over eight months while his trial date was set and reset. In the intervening time, his attorney won a motion to suppress evidence. Then the prosecutor made Jerry a plea offer of two-and-a-half years, which Jerry did not want. 
His attorney recalls the prosecutor's words: "If he doesn't take this today, we're going to take that offer off the table. There will be no offer. We'll just go to trial, and we're going to change the charge from possession to distribution." Jerry told us, "Half a gram! There ain't no way you could distribute half a gram." He chose not to take the offer and instead go to trial.

When Jerry returned to court at the end of January 2016—almost 11 months after his arrest, during all of which he had been in jail—the prosecutor had a new tactic for getting him to take the two-and-a-half years. He would charge Jerry with both possession and distribution (for the same half gram of marijuana). Jerry would be sure to lose on one of them if he went to trial and, when he did, the minimum he would face would be 20 years. The prosecutor offered Jerry the two-and-a-half years instead.

Jerry had been detained pretrial in a jail that was a four-hour drive from his lawyer, his girlfriend, and his 3-year-old daughter. He had not had time to speak to them, but the prosecutor gave him only 10 minutes to decide. His girlfriend had not made it to court in time, but she sent text messages to him via his attorney, begging him to think of their daughter: "Man, just take it, because if they mess with you, you're going to see none of her life." As Jerry's attorney recalled, "We had a very frank conversation about the fact that, as much as he on principle didn't want to take this, and also didn't want to have to do another year and a half in jail, and he had promised his girlfriend that he was not going to miss the next birthday of his daughter … it was like, you can miss one more birthday, or you can potentially miss her entire childhood."

Jerry took the two-and-a-half years for marijuana possession, and the prosecutor dropped the distribution charge. When we talked to Jerry in jail the next day, he explained, "They spooked me out by saying, 'You gotta take this or you'll get that.' I'm just worried about the time. Imagine me in here for 20 years. They got people that kill people. And they put you up here for half a gram of weed."

Pleading When Innocent

Numerous interviewees in each state we visited said they had pled guilty even though they were innocent. Many said they did not feel they had any other real choice. Defendants, defense attorneys, judges, and prosecutors in different jurisdictions used the language of gambling: would the defendant "roll the dice" and go to trial? Most defendants said no, because the odds were against them and the stakes were too high.

Tyler's Plea

Tyler Marshall was arrested in Louisiana, charged with possession of marijuana, convicted, and sentenced to 10 years in prison. The transcript of his plea colloquy plainly indicates that he either did not understand or did not want to plead guilty:

By the court: Okay, listen to my question again sir. Do you wish to waive your constitutional rights and plead guilty because you have in fact committed this crime?
[Defendant]: But I didn't do it.
[Defense counsel]: You are pleading guilty.
[Defendant]: I am pleading guilty.
By the court: Okay. And in pleading guilty today you are waiving your constitutional rights. Is that correct?
[Defendant]: Yes ma'am.
By the court: And you are pleading guilty because you committed this crime.
[Defendant]: No ma'am.
[Defense counsel]: Say yes, please.
[Defendant]: Oh, I have to? Yeah. But I'd be lying though.
In Texas, where defense attorneys said laboratory scandals and faulty roadside drug tests had raised concerns, Harris County began testing drugs in possession cases that had already been closed. Since 2010, there have been at least 73 exonerations in Harris County for drug possession or sale where the defendant had pled guilty to something that turned out not to be a crime at all. In 2015 alone, there were 42.

Of the 42 exonerees in 2015, only six were white. Most or all had been adjudged indigent, meaning they could not afford an attorney and had either a public defender or another attorney appointed for them. One of those attorneys, Natalie Schultz, said a significant number of them were homeless. When the laboratory finally tested their drugs, it found only legal substances or nothing at all. For example, in July 2014, police arrested Isaac Dixon, 26, for possession of a substance that field tested positive for Ecstasy. Two days later, Isaac pled guilty to felony drug possession and was sentenced to 90 days in the Harris County Jail. More than 14 months later, the substance was tested by a laboratory, and the field test was proved faulty. No drugs were found—only antihistamine and caffeine.

Like Isaac's conviction for drug possession, dozens more in Harris County in 2015 were ultimately vacated and the charges dismissed, but only because authorities took the time to have the drugs tested after the case dispositions. The exonerations required laboratory testing, defense and prosecution filings for habeas corpus relief, trial court recommendations, and eventual dismissal by the Texas Court of Criminal Appeals. In the meantime, defendants had to endure pretrial detention, probation, sometimes a jail sentence, and the prospect of a felony conviction for conduct that was lawful.

As the exonerations in Harris County demonstrate, people plead guilty to drug possession even when they are innocent, because the system makes them feel they have no choice. These cases also show that field tests often produce false positives and yet are sometimes the only evidence of drug possession. Fortunately for the defendants, Harris County invested the time and resources to test drugs after conviction. Harris County Public Defender Alex Bunin told us that if other jurisdictions undertook the same effort, he expected we would see that indigent defendants around the country plead guilty to drug possession when they are innocent.

Reducing Charges—Discretionarily, for White Defendants

Data from New York State suggests that prosecutors' discretion to reduce charges through plea deals is exercised differently in different jurisdictions, and often with racially disparate impact. Between 2010 and 2015, 38 percent of drug possession arrests in New York State were disposed of at a reduced level. There were striking disparities between jurisdictions across the state and, within New York City, even among boroughs. In the Bronx, 38 percent of arrests ended in convictions on reduced charges, while in Manhattan (New York County) the figure was 25 percent. The majority of downgraded arrests involved misdemeanor charges disposed of as violations. The data also shows racial disparities between those who benefit from reductions in or dismissal of charges and those who do not.
In New York, for class A misdemeanors not involving marijuana (the second most common possession arrest charge after marijuana class B misdemeanors), white defendants received reduced or dismissed charges at higher rates than Black defendants in every New York City county and in the aggregate of all other New York State counties combined.
VII. Sentencing by the Numbers

If we go back to why we punish—deterrence, protection of the community—long term, jail isn't doing those things. But no one is thinking long term about it. [There's a saying,] "Insanity is doing the same thing over and over and expecting a different result." … The general community doesn't understand it's not working. They don't know it's the same 90 people we keep picking up and putting in the system.[288]

—A judge in Central Florida, on the mismatch between criminal law and drug use, December 2015

At year-end 2014, more than 25,000 people were serving sentences in jails and another 48,000 in state prisons for drug possession.[289] The number being admitted to jails and prisons to serve sentences at some point over the course of the year was significantly higher.[290] In many cases, particularly for people convicted of their first offense, sentences for drug possession can be comparatively short. However, both our interviews and our analysis of sentencing data reveal that some jurisdictions impose very long sentences—even life sentences in Texas—for drug possession. Miami Judge Dennis Murphy told us that some judges impose disproportionate sentences because "they want to be seen as tough, but studies show that long sentences result in nothing but costliness."[291]

Racial Disparities in Incarceration

In examining who is incarcerated for drug possession, we found that stark racial disparities mark both jail and prison populations. Of the total jail population nationwide (convicted and unconvicted) in 2002 (the most recent year for which such jail data is available), 31,662 Black inmates, 19,203 white inmates, and 14,206 Latino inmates were jailed for drug possession.[292] Given that Black people made up 13 percent and white people 82 percent of the US population in 2002, these numbers mean that Black people were more than 10 times as likely as white people to be jailed for drug possession, even though the drug use rate for each group is roughly equivalent.[293]

Of the total state prison population at year-end 2014, 18,800 Black inmates, 17,700 white inmates, and 11,400 Latino inmates were imprisoned for drug possession.[294] These numbers mean that Black people were nearly six times more likely than white people to be in prison for drug possession.[295] Because the US Census Bureau does not include race data for Latinos, we could not assess disparities in their incarceration.

Human Rights Watch analyzed sentencing data for people convicted of drug possession in Florida, New York, and Texas. This section outlines our findings.

In Florida between 2010 and 2015, 84 percent of defendants convicted of felony drug possession were sentenced to prison or jail (about a quarter to state prison and three-quarters to county jail). For misdemeanors, 68 percent of those convicted were sentenced to confinement, almost all going to county jail. Whether or not a person is sentenced to prison in Florida depends not only on the conviction offense but also on past criminal record, based on a scoring or points system. A person whose first conviction is for drug possession would not "score out" to prison time under this system, though he or she may be sentenced to county jail.[297] Roughly three of every four felony drug possession defendants were sentenced to terms in county jail or were not sentenced to incarceration at all, suggesting they had little or no significant prior criminal history.
Yet even individuals sentenced to county jail for drug possession spend substantial time behind bars, especially those convicted of felonies.
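The "more than 10 times as likely" jail figure cited above is simple rate arithmetic. A minimal sketch follows, using the 2002 counts and population shares given in this section; the total US population is an assumption and, as the comment notes, cancels out of the ratio.

# Rate-ratio arithmetic behind the jail disparity figure cited above.
# Counts and population shares (Black 13%, white 82%) are the 2002 numbers
# given in the text; the total US population is approximate and cancels out.

US_POP_2002 = 288_000_000  # assumption; any value yields the same ratio

black_jailed, black_share = 31_662, 0.13
white_jailed, white_share = 19_203, 0.82

black_rate = black_jailed / (black_share * US_POP_2002)  # jailed per capita
white_rate = white_jailed / (white_share * US_POP_2002)

print(f"Black/white jail-rate ratio: {black_rate / white_rate:.1f}")
# -> Black/white jail-rate ratio: 10.4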
In Texas between September 2010 and January 2016, more than three-quarters of felony drug possession defendants were sentenced to incarceration: 30,268 to prison, 42,957 to state jails, and 35,564 to county jails.[303] A significant proportion of the rest likely were released on probation because the prosecutor and judge had no choice: Texas law makes probation mandatory for a first-time conviction of drug possession when classified as a state jail felony.[304] This suggests that prosecutors and judges chose not to exercise their discretion to offer probation in the vast majority of cases in which they had some choice.

In New York State between 2010 and 2015, the majority (53 percent) of people convicted of drug possession were sentenced to some period of incarceration (33 percent for marijuana and 65 percent for other drugs). The average jail sentence was 44 days for marijuana possession and 63 days for other drugs. Approximately 155 adults were sentenced to one year in jail—the maximum sentence for a misdemeanor—for marijuana possession, and approximately 1,441 adults for possession of drugs other than marijuana; 88 percent of those cases were misdemeanors. Among felony drug possession cases, the average prison sentence was 41 months.[301] At year-end 2015, one of 16 people in custody in New York State was incarcerated for drug possession. Of those, 50 percent were Black, 28 percent Latino, and 20 percent white.[302]

Between 2012 and 2016, approximately one of 11 people held by the Texas Department of Criminal Justice (TDCJ) was convicted of a drug possession charge as their most serious offense.[305] Two of every three people serving time in a TDCJ facility for drug charges were there for drug possession.[306] Human Rights Watch examined charge and sentence length information for the 49,092 people incarcerated by TDCJ for drug possession during six snapshot days.[307] For the convictions where the drug amount was provided in the data, half were for possession of under one gram (a state jail felony), and another 25 percent for possession of one to four grams (a third degree felony).[308]

Among the 20 counties with the largest number of drug possession cases in Texas, there are significant disparities in the types of sentences received for similar charges, showing arbitrariness associated with geography as well as significant opportunity for prosecutorial discretion:

Nearly 44 percent of drug possession inmates in Texas were serving sentences of two years or less (the maximum sentence for a state jail felony is two years). A quarter were serving sentences greater than 5 years. Third degree offenses (possession of one to four grams) had an average sentence of 5.3 years (the sentence range is two to ten years).[309]

There were clear county disparities in the sentences for drug possession inmates. In counties with 300 or more unique TDCJ inmates, the median sentences varied greatly by county, for all offenses and also for state jail felonies specifically.

Life in Prison in Texas for Drug Possession

According to Texas Department of Criminal Justice data we analyzed, 116 people were serving life sentences in Texas for drug possession as of February 2016. Ten percent of them (11 people) were sentenced in Smith County, a county that sentenced only 1.7 percent of the state's overall drug possession inmates.
Furthermore, in Texas between 2005 and 2014, at least seven people were sentenced to life in prison for simple possession of an amount of drugs weighing between one and four grams (third degree felony possession)—the weight of less than a sugar packet.[310] Under Texas law, third degree drug possession has a normal sentence range of two to ten years,[311] but if a person has two prior felonies, the habitual offender law gives prosecutors the option to enhance the range to a minimum of 25 years up to 99 years, or life in prison.[312] Although prosecutors need not seek the habitual offender enhancements, they did in these seven possession cases. Moreover, since the enhanced sentence is a range, not mandatory life in prison, the jury and/or the judge could still have imposed the minimum sentence of 25 years, or any number of years greater than 25 but short of life imprisonment. In one of the seven cases, public documents suggest the defendant pled guilty, yet he still received a life sentence for simple possession. In the six other cases, a jury decided a life sentence was appropriate, and the judge let it stand.[313]

***

Drug Sentencing Reform and Non-Retroactivity

A significant number of states have decriminalized marijuana possession, as described in section XI. For possession of other drugs, some states have implemented reforms reducing drug sentences, though not decriminalizing. These reforms are positive developments, but in many cases they are not retroactive, so thousands of people remain incarcerated, continuing to bear the costs of a felony conviction for actions that the state no longer criminalizes or that it sanctions less severely.

For example, in 2015 Louisiana amended its marijuana laws to make the first two marijuana possession convictions misdemeanors and the third a felony punishable by up to 2 years. This means that the most Corey Ladd—serving 17 years for half an ounce of marijuana—could now face, given his two prior drug possession felonies, is 4 years. Yet he has not benefited from the new law.

In 2015, Alabama passed Senate Bill 67, adding a new, lowest felony class D that includes drug possession and carries lesser penalties than felony class C, at which it was previously classified. But these reforms are also not retroactive, meaning that people sentenced more harshly under the previous law remain unaffected. Data we received from the Alabama Sentencing Commission indicated that as of October 2015, 14,000 people had been convicted of class C drug possession since 2010 and had received sentences that would keep them in prison beyond SB 67's enactment—meaning retroactivity could have had an enormous impact.[316]

As a third example, 32 states and the District of Columbia now have Good Samaritan laws that immunize people from prosecution if they seek emergency medical care after someone has overdosed, but again many if not all of these laws lack retroactivity provisions.[317] Thus Byron Augustine is still in a Louisiana prison. Byron called 911 and saved the life of a friend who had overdosed on heroin.
Yet Byron was charged with possession of that heroin and was sentenced to 20 years shortly before Louisiana passed its Good Samaritan law.[318] His friend ultimately overdosed again and died while Byron was incarcerated.[319] Human rights law requires retroactive application of new laws that reduce sentences.[320] Retroactivity is particularly important in this context because the changes to existing law reflect a widespread understanding that sentences imposed prior to the reforms were disproportionately harsh and fundamentally unjust. | Law enforcement agencies made more arrests for marijuana possession last year than for all violent crimes combined, despite decriminalization and outright legalization in some states, according to a new report from Human Rights Watch and the ACLU. The report is harshly critical of drug policies that have 137,000 Americans behind bars on any given day for possessing marijuana or other drugs for their own personal use. Many of them are in pretrial detention in local jails, and those who are convicted end up with criminal records that "lock them out of jobs, housing, education, welfare assistance, voting, and much more," the report states. The researchers found that black Americans smoke marijuana at around the same rate as whites but are four times as likely to be arrested for possessing small amounts of the drug, the New York Times reports. "It's been 45 years since the war on drugs was declared, and it hasn't been a success," lead author Tess Borden of Human Rights Watch tells the Washington Post. "Rates of drug use are not down. Drug dependency has not stopped. Every 25 seconds, we're arresting someone for drug use." The report calls for the government to take steps such as treating drug use as a health problem and decriminalizing the possession of drugs for personal use. |
Media caption: North Korea said in November its latest missile was capable of reaching Washington DC
North Korea has described the latest UN sanctions imposed on the country as an "act of war".
A foreign ministry statement said the measures were tantamount to a total economic blockade, the official KCNA news agency reported.
It added that strengthening North Korea's deterrence was the only way to frustrate the US.
The UN Security Council imposed the new sanctions on Friday in response to Pyongyang's ballistic missile tests.
The US-drafted resolution - unanimously backed by all 15 Security Council members - includes measures to slash North Korea's petrol imports by up to 90%.
North Korea is already subject to a raft of sanctions from the US, the UN and the EU.
What did the North Korean statement say?
Characteristically bellicose, it described the latest UN sanctions as "a violent breach of our republic's sovereignty and an act of war that destroys the peace and stability of the Korean peninsula and a wide region.
"The United States, completely terrified at our accomplishment of the great historic cause of completing the state nuclear force, is getting more and more frenzied in the moves to impose the harshest-ever sanctions and pressure on our country.
"We will further consolidate our self-defensive nuclear deterrence aimed at fundamentally eradicating the US nuclear threats, blackmail and hostile moves by establishing the practical balance of force with the US."
What exactly are the new sanctions?
Media caption: The US ambassador to the UN, Nikki Haley, said the new sanctions cut oil and petrol imports
The US said it was seeking a diplomatic solution to the issue and drafted this new set of sanctions:
Deliveries of petrol products will be capped at 500,000 barrels a year, and crude oil at four million barrels a year
All North Korean nationals working abroad will have to return home within 24 months under the proposals, restricting a vital source of foreign currency
There will also be a ban on exports of North Korean goods, such as machinery and electrical equipment
The UN sanctions came in response to Pyongyang's 28 November firing of a ballistic missile, which the US said was its highest yet.
US President Donald Trump has previously threatened to "totally destroy" North Korea if it launches a nuclear attack. North Korean leader Kim Jong-un has described the US president as "mentally deranged".
What about previous sanctions?
Last month, the US unveiled fresh sanctions against North Korea which it said were designed to limit the funding for its nuclear and ballistic missile programmes.
The measures targeted North Korean shipping operations and Chinese companies that trade with Pyongyang.
The UN also approved new sanctions following North Korea's nuclear test on 3 September.
These measures restricted oil imports and banned textile exports - an attempt to starve the North of fuel and income for its weapons programmes.
What effect have previous sanctions had?
The US has been imposing sanctions on North Korea for more than a decade with little success.
In fact, North Korea has said fresh sanctions will only make it accelerate its nuclear programme. It has continued its nuclear and ballistic missile tests despite these recent examples of UN pressure: ||||| S/RES/2407 (2018) Extends the mandate of the Panel of Experts until 24 April 2019
S/RES/2397 (2017)
Strengthens the measures regarding the supply, sale or transfer to the DPRK of all refined petroleum products, including diesel and kerosene, with very specific preconditions and follow-up actions required by Member States, the 1718 Committee and the Committee Secretary. Reduces the allowed maximum aggregate amount for 12 months beginning on 1 January 2018 to 500,000 barrels (and twelve-month periods thereafter);
Introduces a limit of 4 million barrels or 525,000 tons in the aggregate amount per twelve-month period as of 22 December 2017 allowed for the supply, sale or transfer of crude oil by Member States to the DPRK. Member States are required to report the amount of crude oil provided to the DPRK to the 1718 Committee every 90 days;
Expands sectoral sanctions by introducing a ban on the DPRK’s export of food and agricultural products, machinery, electrical equipment, earth and stone including magnesite and magnesia, wood and vessels. The resolution also prohibits the DPRK from selling or transferring fishing rights;
Introduces a ban on the supply, sale or transfer to the DPRK of all industrial machinery, transportation vehicles, iron, steel and other metals, with the exception of spare parts to maintain DPRK commercial civilian passenger aircraft currently in use;
Strengthens the ban on providing work authorizations for DPRK nationals by requiring Member States to repatriate all DPRK nationals earning income, and all DPRK government safety oversight attachés monitoring DPRK workers abroad, within their jurisdiction within 24 months from 22 December 2017. Member States are required to submit to the Committee a midterm report after 15 months from 22 December and a final report after 27 months from 22 December of all DPRK nationals repatriated based on this provision;
Strengthens maritime measures to address the DPRK’s illicit exports of coal and other prohibited items, as well as illicit imports of petroleum through deceptive maritime practices, by requiring Member States to seize, inspect and freeze any vessel in their ports and territorial waters for involvement in prohibited activities. The provision ceases to apply if the Committee decides, on a case-by-case basis after six months of impounding a vessel, that adequate arrangements have been made to prevent future violations of the relevant resolutions;
Strengthens vessel-related provisions by prohibiting the provision of insurance or re-insurance services to, and requiring Member States to de-register, any vessels involved in illicit activities. The resolution further prohibits Member States from providing classification services to such vessels and expands the ban on the supply, sale or transfer of vessels to the DPRK to also include used vessels;
Decides that Member States should improve mutual information-sharing on suspected attempts by the DPRK to supply, sell, transfer or procure illicit cargo, and tasks the Committee, with the support of the Panel of Experts, to facilitate timely coordination.
The resolution also introduces a requirement for Member States to notify the Committee of relevant identifying information, as well as measures taken to carry out appropriate actions as authorized by the relevant provisions, with regard to vessels in their territory or on the high seas designated as subject to the assets freeze, the port entry ban or other relevant measures;
Clarifies that no provision in the resolution applies to the existing Russia-DPRK Rajin-Khasan port and rail project for solely exporting Russia-origin coal to other countries;
Designates an additional 16 individuals and one entity.
S/RES/2375 (2017)
Introduces a full ban on the supply, sale or transfer of all condensates and natural gas liquids to the DPRK;
Introduces a limit for all refined petroleum products in terms of the amount allowed (for supply, sale or transfer to the DPRK), with very specific preconditions and follow-up action required by Member States, the 1718 Committee and the Committee Secretary;
Introduces restrictions on the supply, sale or transfer of crude oil to the DPRK in any period of 12 months after the adoption of the resolution in excess of the amount Member States supplied in the 12 months prior to the adoption of the resolution (11 September 2017);
Introduces a ban on the export by the DPRK of textiles (including fabrics and partially or fully completed apparel products);
Introduces a ban on Member States providing work authorizations for DPRK nationals, other than those for which written contracts were finalized prior to the adoption of this resolution (11 September 2017);
Expands financial sanctions by prohibiting all joint ventures or cooperative entities, or expanding existing joint ventures, with DPRK entities or individuals;
Directs the 1718 Committee to designate vessels transporting prohibited items from the DPRK;
Introduces further clarifications with regard to the call on Member States to inspect vessels, with the consent of the flag State, on the high seas, if there are reasonable grounds to believe that the cargo of such vessels contains prohibited items, including specific obligations of the flag State and a requirement that Member States report non-cooperation by a flag State to the Committee;
Directs the 1718 Committee to designate additional WMD-related and conventional arms-related items;
Designates one additional individual and three entities.
S/RES/2371 (2017)
Introduces a full ban on coal, iron and iron ore, and adds lead and lead ore to the banned commodities subject to sectoral sanctions;
Authorizes the 1718 Committee to designate vessels related to activities prohibited by relevant resolutions, and prohibits port calls by designated vessels and chartering of DPRK-flagged vessels;
Bans the hiring and paying of additional DPRK laborers used to generate foreign export earnings;
Prohibits the export by the DPRK of seafood (including fish, crustaceans, mollusks and other aquatic invertebrates in all forms);
Expands financial sanctions by prohibiting new or expanded joint ventures and cooperative commercial entities with the DPRK, and clarifies that companies performing financial services are considered financial institutions for the purpose of implementing the relevant sanctions measures, and that paragraph 11 of resolution 2094 (2013) also applies to clearing funds through Member States’ territories;
Prohibits the deployment and use of chemical weapons and calls for the DPRK’s accession to the CWC;
Directs the 1718 Committee to develop appropriate arrangements with INTERPOL to issue Special Notices;
Directs the 1718 Committee to designate additional WMD-related and conventional arms-related items;
Designates additional 9 individuals and 4 entities and provides updated information on 2 previously designated individuals.
S/RES/2356 (2017) Designates additional 14 individuals and 4 entities.
S/RES/2345 (2017) Extends the mandate of the Panel of Experts until 24 April 2018
S/RES/2321 (2016)
Expands the arms embargo to the items listed in a new conventional arms dual-use list (to be adopted by the 1718 Committee);
Expands cargo inspection by clarifying that certain personal and/or checked baggage entering or departing the DPRK is “cargo” subject to inspection, and by noting that cargo transported by rail and by road is also subject to inspection;
Strengthens maritime transport provisions by prohibiting the following activities: all leasing, chartering or provision of crew services to the DPRK; registering vessels in the DPRK; obtaining authorization for a vessel to use the DPRK’s flag; and owning, leasing, operating, providing any vessel classification, certification or associated service to, or insuring, any vessel flagged by the DPRK. Additionally, prohibits the provision of insurance or re-insurance services to vessels owned, controlled or operated by the DPRK. Exemptions are available if approved in advance by the Committee on a case-by-case basis;
Introduces procedures to designate vessels where there are reasonable grounds to believe the vessels are or have been related to prohibited programmes or activities;
Prohibits the supply, sale or transfer to the DPRK of new helicopters and vessels (except as approved in advance by the Committee on a case-by-case basis);
Overhauls and expands sectoral sanctions by placing an annual cap on the amount/value of coal exports by the DPRK and introducing a real-time system for reporting and monitoring these exports;
Adds copper, nickel, silver and zinc to the materials banned from being supplied, sold or transferred by the DPRK, and prohibits their procurement and/or transfer by Member States;
Calls on Member States to provide no more fuel to DPRK-flagged civil passenger aircraft than necessary for the relevant flight, including a standard margin for safety of flight;
Adds new items to the luxury goods ban;
Strengthens the proliferation network related measures by requiring Member States to reduce the number of staff at DPRK diplomatic missions and consular posts, and to limit the number of bank accounts to one per DPRK diplomatic mission or consular post and one per DPRK diplomat and consular officer;
Imposes entry and transit restrictions on DPRK government officials, members of the DPRK armed forces, or members/officials associated with prohibited programmes or activities;
Bans any use of real property in Member States’ territories for purposes other than diplomatic or consular activities;
Strengthens financial measures, including by requesting closure of existing representative offices, subsidiaries or banking accounts in the DPRK within ninety days; prohibiting public and private financial support for trade with the DPRK; and expelling individuals who are believed to be working on behalf of or at the direction of a DPRK bank or financial institution. Exemptions are available if approved in advance by the Committee on a case-by-case basis;
Clarifies that the restrictions on specialized teaching and training include, but are not limited to, advanced materials science, advanced chemical engineering, advanced mechanical engineering, advanced electrical engineering and advanced industrial engineering;
Requires the suspension of scientific and technical cooperation, with exemption procedures requiring Committee approval and notification in certain areas respectively.
Prohibits the DPRK from supplying, selling or transferring statues, and Member States from procuring such items (unless approved in advance by the Committee on a case-by-case basis);
Designates additional 11 individuals and 10 entities.
S/RES/2276 (2016) Extends the mandate of the Panel of Experts until 24 April 2017.
S/RES/2270(2016) Expands arms embargo and non-proliferation measures, including small arms and light weapons, catch-all provisions to ban any item if related to prohibited programmes, dual-use nuclear/missile items, and operational capabilities of DPRK’s and another Member States’ armed forces.
Enforces new cargo inspection and maritime procedures, including mandatory inspection on cargo destined to and originating from the DPRK; ban on DPRK chartering of vessels and aircraft; ban on operating DPRK vessels or using DPRK flags; ban on flights (of any plane) or port calls (of any vessel) if related to prohibited items, prohibited activities, and designated persons or entities.
Expands financial measures, including an assets freeze on Government of the DPRK and its Workers’ Party entities associated with prohibited programmes and activities; clarifies that assets freeze includes vessels; prohibits DPRK banks from opening new branches; requires States to close existing DPRK bank branches in their territories; prohibits Member States from opening branches in the DPRK; requires States to close existing offices in the DPRK if related to prohibited programmes or sanctions violations.
Enforces sectoral sanctions (coal, minerals and fuel ban) and prohibits its procurement and/or transfer by Member States. Adds new items to the luxury goods ban.
Clarifies ban on hosting of DPRK trainers, advisors or other officials for police, paramilitary and military training; Ban on specialized training or teaching for DPRK nationals in specific fields that could contribute to the DPRK’s proliferation-sensitive activities.
Requires Member States to expel DPRK diplomats and foreign nationals involved in illicit activities.
Designates additional 16 individuals and 12 entities.
OMM vessels are subject to the assets freeze. Of the 31 vessels listed in Annex III of resolution 2270 (2016), 4 were removed by the Committee by its decision of 21 March 2016 (Security Council press release SC/12296) and an additional 5 were removed by the Committee by its decision of 17 December 2016 (Security Council press release SC/12636). | North Korea now considers itself at war with all 15 members of the United Nations Security Council, according to a blistering statement issued after the council unanimously approved tough new sanctions. The country's foreign ministry described the latest round of sanctions as an "act of war" that was "rigged up by the US and its followers," CNN reports. "Those countries that raised their hands in favor of this 'sanctions resolution' shall be held completely responsible for all the consequences to be caused by the 'resolution' and we will make sure for ever and ever that they pay a heavy price for what they have done," the statement said. UN Resolution 2397, issued in response to Pyongyang's ballistic missile tests, slashes North Korea's fuel imports by around 90% and requires the return home of all North Koreans working overseas within 24 months. North Korea's statement said the sanctions were introduced because the US is frightened of its power. The US is "completely terrified at our accomplishment of the great historic cause of completing the state nuclear force, is getting more and more frenzied in the moves to impose the harshest-ever sanctions and pressure on our country," it said, per the BBC. "We will further consolidate our self-defensive nuclear deterrence aimed at fundamentally eradicating the US nuclear threats, blackmail, and hostile moves by establishing the practical balance of force with the US."
More survivors of the factory fire in Bangladesh that killed more than 100 garment workers this weekend have told human rights and international labor groups they were actually locked in by security gates as the flames spread.
"The police and the fire department are confirming that the collapsible gates were locked on each floor," said Charles Kernighan, executive director of the Institute for Global Labour and Human Rights. "The fire department said they had to come in with bolt cutters to cut the locks."
The toll of the garment factory blaze now stands at 112, but Kernighan and others interviewed by ABC News said they believe the number may actually be much higher. The destruction inside made it difficult to identify bodies, and Kernighan said factory officials have yet to make public a list of the 1,500 workers believed to be working in the nine-story building at 6:30 p.m. Saturday, when the fire broke out in a first floor warehouse.
Kalpona Akter, a labor activist based in the Bangladesh capital of Dhaka, spoke with a number of survivors, who described a scene of horror as workers started to smell smoke, and then the power went out and they were thrown into darkness.
"Then they ran to the stairs and found it was already fire caught in the stairs," she said. "They broke one window in the east side of the factory and … they started to jump."
Akter said many groups of relatives worked together in the factory, and when the lights went out, many began to scream in search of their mothers and sisters and daughters. She said she also heard accounts of managers shutting the gates as alarms sounded to prevent workers from walking off the job, apparently thinking it was a false alarm.
Authorities in Bangladesh announced three arrests, all supervisors from the factory, whom the police accused of negligence in their handling of the incident.
A journalist who attended the police press conference told ABC News the three men were arrested "because they did not perform their duty" and prevented workers from escaping from the factory, instead of helping them get out.
Also Wednesday, there were new reports that clothing found in the burned-out remains included large quantities of sweat shirts with labels for Disney, the parent company of ABC News. Like Wal-Mart and Sears, Disney said today the Tazreen Fashions Limited factory was not supposed to be making its clothes.
"None of our licensees have been permitted to manufacture Disney-branded products in this facility for at least the last 12 months," a Disney statement read.
As with Disney, other retailers continue to question how their products could be found in a factory they did not know they had hired. Li & Fung, a Hong Kong supplier that works with several large brands, confirmed it was producing clothes in the factory for a Sean Combs label, ENYCE. But in a statement to ABC News Wednesday, Li & Fung said it had not brought clothes to the factory for any other client, including Sears, Disney and Wal-Mart.
Asked why it hired a factory that had been cited by at least one auditor for having safety problems, Li & Fung said it was investigating that question. ||||| Image caption: There is growing public anger in Bangladesh about the factory fire
Police in Bangladesh have arrested three supervisors from a clothing factory in which more than 100 people died during a fire.
They say the supervisors are accused of stopping workers from leaving the building and of padlocking exits.
Meanwhile, thousands of garment workers staged fresh protests outside Dhaka, demanding higher safety standards.
Government officials say preliminary information suggests the fire was an act of sabotage.
The government has opened two inquiries.
Police say the supervisors told panicked workers at the Tazreen Fashion factory that the fire on Saturday night was just a drill and they had nothing to worry about.
"All three are mid-level managers of Tazreen. Survivors told us they did not allow the workers to escape the fire, saying it was a routine fire drill," city police chief Habibur Rahman told AFP news agency.
Analysis: According to their website, Tazreen produced for a host of well-known brand names from Europe and the US. Campaigners allege Western firms making clothes in Bangladesh hide behind inadequate safety audits to help drive down costs. The Clean Clothes Campaign (CCC), an Amsterdam-based textile rights group, says international brands have shown negligence in failing to address the safety issues highlighted by previous fires, and that this leaves them with responsibility for yet another tragic loss of life. The big brands say they have been working with their Bangladeshi partners to improve standards. Around 700 garment workers have been killed in dozens of fires since 2006, according to CCC, but none of the owners has been prosecuted over previous blazes. Questions are being asked again about how robust international brands are in policing health and safety regulations in the factories they have supply contracts with, correspondents say. Often, a complex system of subcontractors makes policing standards either difficult or impossible, which has allowed unscrupulous operators to make savings in the areas of health and safety, they say.
"There are also allegations that they even padlocked doors," he added.
On Wednesday, police fired rubber bullets and tear gas to disperse thousands of workers in the Ashulia industrial area, just outside the Bangladeshi capital.
"We were forced to react as they started pelting officers with stones," local police official Moktar Hossain said.
The BBC's Anbarasan Ethirajan in Dhaka says there has been growing public anger over the fire, and the industrial suburbs around the capital are tense.
Many factories have declared Wednesday a holiday fearing large-scale labour unrest.
Some workers also vandalised factories and set fire to motorcycles, injuring at least 20 people, the online edition of the Daily Star reported.
On Tuesday, Bangladesh declared a day of mourning for the victims.
The burnt-out nine-storey factory supplied clothes to a variety of international brands including US retail giant Walmart.
Walmart says the factory had been sub-contracted without its knowledge. It said it was cutting ties with its supplier without naming the firm.
Labels from the European chain C&A, Hong Kong's Li & Fung and the US rapper and actor Sean "Diddy" Combs were also found in the factory.
The clothing industry is the backbone of the Bangladeshi economy, with exports last year alone worth more than $19bn (£12bn).
Fatal fires are common in Bangladesh's garment sector with lax safety standards, poor wiring and overcrowding blamed for causing several blazes every year. ||||| DHAKA/CHICAGO (Reuters) - Three supervisors of a Bangladeshi garment factory were arrested on Wednesday as protests over a suspected arson fire that killed more than 100 people raged on into a third day, with textile workers and police clashing in the streets of a Dhaka suburb.
The government has blamed last weekend’s disaster, the country’s worst-ever industrial blaze, on saboteurs and police said they had arrested two people, who were seen on CCTV footage trying to set fire to stockpiles of material in another factory.
The fire at Tazreen Fashions has put a spotlight on global retailers that source clothes from Bangladesh, where wage costs are low - as little as $37 a month for some workers. Rights groups have called on Western firms to sign on to a safety program in that country, the world’s second-biggest clothes exporter.
Wal-Mart Stores Inc, the world’s largest retailer, said one of its suppliers subcontracted work to the now burned-out factory without authorization and would no longer be used. But one of the most senior figures in the country’s garment industry cast doubt on that claim.
“I won’t believe Walmart entirely if they say they did not know of this at all. That is because even if I am subcontracted for a Walmart deal, those subcontracted factories still need to be certified by Walmart,” Annisul Huq, former president of the Bangladesh Garment Manufacturers and Exporters Association, told Reuters following a meeting of association members.
“You can skirt rules for one or two odd times if it is for a very small quantity, but no decent quantity of work can be done without the client’s knowledge and permission,” he said.
Wal-Mart, in a statement, reiterated that while it does have an audit and notification system in place, in this case a supplier subcontracted to the workshop without approval.
MOST FACTORIES CLOSED
Witnesses said that at least 20 people were injured on Wednesday in the capital’s industrial suburb of Ashulia as police pushed back protesters demanding safer factories and punishment for those responsible for the blaze, which killed 111 workers and injured more than 150.
Thousands of workers poured out onto the roads, blocking traffic, as the authorities closed most of the 300 garment factories in the area. They were driven back by riot police using tear gas and batons.
Three employees of Tazreen Fashions - an administrative officer, a store manager and a security supervisor - were arrested and paraded in front of the media.
Dhaka District Police Chief Habibur Rahman told Reuters they would be investigated for suspected negligence.
Photo caption: Three supervisors of the Bangladeshi garment factory are escorted by police after their arrest in Dhaka, in a still image taken from November 28, 2012 video footage. (REUTERS/ATN News via Reuters TV)
He said police were investigating complaints from some survivors that factory managers had stopped workers from leaving the multi-story building after a fire alarm went off.
Representatives of the Tazreen Fashions factory, including the owner, were not available for comment.
CCTV SHOWS APPARENT ARSON ATTEMPT
The country’s interior minister, Mohiuddin Khan Alamgir, has blamed saboteurs for the fire.
Adding to the case for arson, a news channel aired CCTV footage showing two employees of another factory in the Ashulia area trying to set fire to stockpiles of material.
Police chief Rahman said a woman and a man, who were identified from the video, had been taken into custody.
The TV clip shows a lone woman wearing a mauve head scarf and traditional loose garment passing through a room with clothes piled neatly in various places on a table. She briefly disappears from view beneath the table and then is shown again walking through the room and out of camera range.
Smoke soon begins to billow, first slowly then more rapidly, from the spot where the woman was seen ducking under the table.
Workers come running in and try to douse the flames by various means. The woman in the mauve scarf reenters the room and is seen helping workers in their efforts to put out the blaze.
Two other incidents in the outskirts of Dhaka - a fire at a factory on Monday morning and an explosion and fire at a facility on Tuesday evening - have raised concerns among manufacturing leaders that the industry may be under attack.
Talk of sabotage has also spread fear.
At least 50 garment workers were injured in a stampede as they tried to flee from their factory after a faulty generator caught fire in the city of Chittagong, the fire service said. Factory workers quickly put out the flames.
Bangladesh has about 4,500 garment factories and is the world’s biggest exporter of clothing after China, with garments making up 80 percent of its $24 billion annual exports.
Working conditions in Bangladeshi factories are notoriously poor, with little enforcement of safety laws. Overcrowding and locked fire doors are not uncommon.
More than 300 factories near Dhaka were shut for almost a week earlier this year as workers demanded higher wages and better conditions. At least 500 have died in garment factory accidents in Bangladesh since 2006, according to fire brigade officials. ||||| The owner of a Bangladesh clothing factory where a fire killed 112 people says he was never informed the facility was required to have an emergency exit, a sign of how far removed the leaders of the nation's garment industry are from issues of worker safety.
Photo captions: A Disney brand sweater, clothing with Walmart's Faded Glory label, a piece of clothing with a label referring to German brand KIK as the buyer, and boxes of garments lay among equipment charred in the fire that killed 112 workers Saturday at the Tazreen Fashions Ltd. factory, on the outskirts of Dhaka, Bangladesh, Wednesday, Nov. 28, 2012. (Associated Press)
"It was my fault. But nobody told me that there was no emergency exit, which could be made accessible from outside," factory owner Delwar Hossain was quoted Thursday as telling The Daily Star newspaper. "Nobody even advised me to install one like that, apart from the existing ones."
"I could have done it. But nobody ever suggested that I do it," said Hossain, who could not be reached for comment by The Associated Press on Thursday.
Activists in the South Asian country hope the tragedy will invigorate their lengthy, but so far fruitless, efforts to upgrade safety standards and force stronger government oversight of the powerful industry.
The Tazreen Fashions Ltd. factory in a Dhaka suburb was making clothes for Wal-Mart, Sears, Disney and other major global retailers. When a fire broke out over the weekend, many of the 1,400 workers were trapped inside the eight-story building because exit doors were locked. A fire official said the death toll would have been much lower if the factory had had an emergency exit.
Police said they were interrogating three factory managers on possible negligence charges. Workers said as they tried to escape the fire they found exit doors were locked.
An AP reporter who visited the damaged factory Wednesday found three stairways but no special fire exits.
Hossain, a former accounts manager at another garment factory, set up his own clothing business, Tuba Textiles Mills Ltd. in 2004. The Tazreen factory was one of a dozen owned by his company.
Workers interviewed by the AP have expressed support for Hossain, and describe him as a bearded man in his 50s who usually wears white clothes. Worker Mohammad Rajib said he is a "gentle man" who gave them raises and fired some managers after workers protested against low pay and abuse.
"He did not sack any worker. He told us: `You are my people, if you survive, I will survive,'" Rajib said.
The factory employed about 1,400 workers, most from a poor region of northern Bangladesh and about 70 percent of them women.
Labor Minister Rajiuddin Ahmed Raju said that factories without emergency exits, or with only one such exit, will be forced to close until they upgrade their safety infrastructure. It was not clear when and how that directive will be enforced.
Nazma Akhter, president of the Bangladesh Combined Garment Workers Federation trade union, called for the arrest of the factory's owners and management to send a message to the industry as a whole.
"There should be a criminal case against them. It could stop the recurrence of such incidents," she said.
In 2001, Bangladesh's High Court directed the government to set up a committee to oversee the safety of garment workers after a similar fire in a factory killed 24 people. But that directive was never implemented, and more than 300 people have been killed in garment factory fires since 2006.
"It's unfortunate that the government has ignored the directive. Had the government complied with it there would have been fewer accidents, I believe," said Sultana Kamal, executive director of Ain O Salish Kendra, a legal and human rights group that had petitioned the court for the ruling.
Government officials did not respond to calls for comment.
An Associated Press reporter at the factory discovered children's shorts with Wal-Mart's Faded Glory label, hooded sweat shirts emblazoned with Disney cartoons, shorts with hip-hop star Sean Combs' ENYCE tag, and sweaters from the French company Teddy Smith and the Scottish company Edinburgh Woollen Mill. Sears was among the companies listed in the account books.
Wal-Mart said it received a safety audit that showed the factory was "high-risk" and had decided well before the blaze to stop doing business with Tazreen. But it said a supplier had continued to use Tazreen without authorization. The retailer said it stopped doing business with the supplier Monday.
Sears said it learned after the blaze that its merchandise was being produced there without its approval through a vendor that has since been fired. Walt Disney Co., which licenses its characters to clothing makers, said its records indicate that none of its licensees have been permitted to make Disney-brand products at the factory for at least a year.
The Bangladesh Garment Manufacturers and Exporters Association fears the repercussions if Western companies pull out of the country's $20 billion a year garment business. Wal-Mart alone buys about $1 billion of garments here.
"It's the time of solidarity, not to go away. Wal-Mart should come forward to resolve existing issues through discussion and an attitude of partnership," said Shafiul Islam Mohiuddin, president of the trade group. "Otherwise, what will happen? Manufacturers would lose orders, workers will lose their jobs. This could create another complicated situation. For whose interest? None will benefit from it." | Survivors of the devastating Bangladesh garment factory fire tell a harrowing tale to human rights and labor groups: As the alarms sounded, managers—assuming it was a false alarm and not wanting workers to leave—shut the security gates, locking workers inside. The police and fire departments confirmed the "gates were locked on each floor," a director of one human rights group tells ABC News. "The fire department said they had to come in with bolt cutters to cut the locks." Three supervisors have been arrested, accused of locking the exits and keeping workers from escaping, the BBC reports. The owner of the factory, meanwhile, told the Daily Star newspaper "it was my fault," the AP reports. But he says he was never told the factory should have an emergency exit. The government is still calling the fire an act of sabotage, and police have also arrested two people seen on video attempting to start a fire in another factory, Reuters reports. There was also a fire at another factory Monday and an explosion at a third facility Tuesday, leading to concerns that someone is trying to sabotage the entire industry. Meanwhile, protests are raging near Dhaka for the third day, with thousands of textile workers demanding safer work conditions. |
2014 Quality of Living worldwide city rankings – Mercer survey
European cities dominate the top of the list for highest Quality of Living
Vienna takes the top spot, Baghdad ranks lowest
Vienna is the city with the world’s best quality of living, according to the Mercer 2014 Quality of Living rankings, in which European cities dominate. Zurich and Auckland follow in second and third place, respectively. Munich is in fourth place, followed by Vancouver, which is also the highest-ranking city in North America. Ranking 25 globally, Singapore is the highest-ranking Asian city, whereas Dubai (73) ranks first across Middle East and Africa. The city of Pointe-à-Pitre (69), Guadeloupe, takes the top spot for Central and South America.
Mercer conducts its Quality of Living survey annually to help multinational companies and other employers compensate employees fairly when placing them on international assignments. Two common incentives include a quality-of-living allowance and a mobility premium. A quality-of-living or “hardship” allowance compensates for a decrease in the quality of living between home and host locations, whereas a mobility premium simply compensates for the inconvenience of being uprooted and having to work in another country. Mercer’s Quality of Living reports provide valuable information and hardship premium recommendations for over 460 cities throughout the world; the ranking covers 223 of these cities.
“Political instability, high crime levels, and elevated air pollution are a few factors that can be detrimental to the daily lives of expatriate employees, their families, and local residents. To ensure that compensation packages reflect the local environment appropriately, employers need a clear picture of the quality of living in the cities where they operate,” said Slagin Parakatil, Senior Researcher at Mercer.
Mr Parakatil added: “In a world economy that is becoming more globalised, cities beyond the traditional financial and business centres are working to improve their quality of living so they can attract more foreign companies. This year’s survey recognises so-called ‘second tier’ or ‘emerging’ cities and points to a few examples from around the world. These cities have been investing massively in their infrastructure and attracting foreign direct investments by providing incentives such as tax, housing, or entry facilities. Emerging cities will become major players that traditional financial centres and capital cities will have to compete with.”
Europe
Vienna is the highest-ranking city globally. In Europe, it is followed by Zurich (2), Munich (4), Düsseldorf (6), and Frankfurt (7). “European cities enjoy a high overall quality of living compared to those in other regions. Healthcare, infrastructure, and recreational facilities are generally of a very high standard. Political stability and relatively low crime levels enable expatriates to feel safe and secure in most locations. The region has seen few changes in living standards over the last year,” said Mr Parakatil.
Ranking 191 overall, Tbilisi, Georgia, is the lowest-ranking city in Europe. It continues to improve in its quality of living, mainly due to a growing availability of consumer goods, improving internal stability, and developing infrastructure. Other cities on the lower end of Europe’s ranking include: Minsk (189), Belarus; Yerevan (180), Armenia; Tirana (179), Albania; and St Petersburg (168), Russia. Ranking 107, Wroclaw, Poland, is an emerging European city. Since Poland’s accession to the European Union, Wroclaw has witnessed tangible economic growth, partly due to its talent pool, improved infrastructure, and foreign and internal direct investments. The EU named Wroclaw as a European Capital of Culture for 2016.
Americas
Canadian cities dominate North America’s top-five list. Ranking fifth globally, Vancouver tops the regional list, followed by Ottawa (14), Toronto (15), Montreal (23), and San Francisco (27). The region’s lowest-ranking city is Mexico City (122), preceded by four US cities: Detroit (70), St. Louis (67), Houston (66), and Miami (65). Mr Parakatil commented: “On the whole, North American cities offer a high quality of living and are attractive working destinations for companies and their expatriates. A wide range of consumer goods are available, and infrastructures, including recreational provisions, are excellent.”
In Central and South America, the quality of living varies substantially. Pointe-à-Pitre (69), Guadeloupe, is the region’s highest-ranked city, followed by San Juan (72), Montevideo (77), Buenos Aires (81), and Santiago (93). Manaus (125), Brazil, has been identified as an example of an emerging city in this region due to its major industrial centre which has seen the creation of the “Free Economic Zone of Manaus,” an area with administrative autonomy giving Manaus a competitive advantage over other cities in the region. This zone has attracted talent from other cities and regions, with several multinational companies already settled in the area and more expected to arrive in the near future.
“Several cities in Central and South America are still attractive to expatriates due to their relatively stable political environments, improving infrastructure, and pleasant climate,” said Mr Parakatil. “But many locations remain challenging due to natural disasters, such as hurricanes often hitting the region, as well as local economic inequality and high crime rates. Companies placing their workers on expatriate assignments in these locations must ensure that hardship allowances reflect the lower levels of quality of living.”
Asia Pacific
Singapore (25) has the highest quality of living in Asia, followed by four Japanese cities: Tokyo (43), Kobe (47), Yokohama (49), and Osaka (57). Dushanbe (209), Tajikistan, is the lowest-ranking city in the region. Mr Parakatil commented: “Asia has a bigger range of quality-of-living standards amongst its cities than any other region. For many cities, such as those in South Korea, the quality of living is continually improving. But for others, such as some in China, issues like pervasive air pollution are eroding their quality of living.”
With their considerable growth in the last decade, many second-tier Asian cities are starting to emerge as important places of business for multinational companies. Examples include Cheonan (98), South Korea, which is strategically located in an area where several technology companies have operations. Over the past decades, Pune (139), India, has developed into an education hub and home to IT, other high-tech industries, and automobile manufacturing. The city of Xian (141), China, has also witnessed some major developments, such as the establishment of an “Economic and Technological Development Zone” to attract foreign investments. The city is also host to various financial services, consulting, and computer services firms.
Elsewhere, New Zealand and Australian cities rank high on the list for quality of living, with Auckland and Sydney ranking 3 and 10, respectively.
Middle East and Africa
With a global rank of 73, Dubai is the highest-ranked city in the Middle East and Africa region. It is followed by Abu Dhabi (78), UAE; Port Louis (82), Mauritius; and Durban (85) and Cape Town (90), South Africa. Durban has been identified as an example of an emerging city in this region, due to the growth of its manufacturing industries and the increasing importance of the shipping port. Generally, though, this region dominates the lower end of the quality of living ranking, with five out of the bottom six cities; Baghdad (223) has the lowest overall ranking.
“The Middle East and especially Africa remain one of the most challenging regions for multinational organisations and expatriates. Regional instability and disruptive political events, including civil unrest, lack of infrastructure and natural disasters such as flooding, keep the quality of living from improving in many of its cities. However, some cities that might not have been very attractive to foreign companies are making efforts to attract them,” said Mr Parakatil.
-Ends-
Notes for Editors
Mercer produces worldwide quality-of-living rankings annually from its most recent Worldwide Quality of Living Surveys. Individual reports are produced for each city surveyed. Comparative quality-of-living indexes between a base city and a host city are available, as are multiple-city comparisons. Details are available from Mercer Client Services in Warsaw, at +48 22 434 5383 or at www.mercer.com/qualityofliving.
The data was largely collected between September and November 2013, and will be updated regularly to take account of changing circumstances. In particular, the assessments will be revised to reflect significant political, economic, and environmental developments.
Expatriates in difficult locations: Determining appropriate allowances and incentives
Companies need to be able to determine their expatriate compensation packages rationally, consistently and systematically. Providing incentives to reward and recognise the efforts that employees and their families make when taking on international assignments remains a typical practice, particularly for difficult locations. Two common incentives include a quality-of-living allowance and a mobility premium:
A quality-of-living or “hardship” allowance compensates for a decrease in the quality of living between home and host locations.
A mobility premium simply compensates for the inconvenience of being uprooted and having to work in another country.
A quality-of-living allowance is typically location-related, while a mobility premium is usually independent of the host location. Some multinational companies combine these premiums, but the vast majority provides them separately.
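To make the distinction concrete, the short sketch below shows one way the two components might be combined in an assignment package: the hardship allowance scales with the host location, while the mobility premium does not. The salary, percentages, and the `assignment_package` helper are all invented for this illustration; they are not Mercer recommendations or any employer's actual policy.

```python
# Hypothetical sketch of an expatriate package; all figures are invented.

def assignment_package(base_salary: float,
                       hardship_pct: float,
                       mobility_pct: float = 0.10) -> dict:
    """Combine a location-related hardship allowance with a flat,
    location-independent mobility premium, keeping the two components
    separate, as the survey notes most multinationals do."""
    hardship = base_salary * hardship_pct   # varies with the host location
    mobility = base_salary * mobility_pct   # same for any host location
    return {
        "base_salary": base_salary,
        "hardship_allowance": hardship,
        "mobility_premium": mobility,
        "total": base_salary + hardship + mobility,
    }

# Example: a host city assessed at a 10% hardship allowance.
print(assignment_package(80_000.0, hardship_pct=0.10))
```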
Quality of Living: City benchmarking
Mercer also helps municipalities assess factors that can improve their quality of living rankings. In a global environment, employers have many choices as to where to deploy their mobile employees and set up new business. A city’s quality of living standards can be an important variable for employers to consider.
Leaders in many cities want to understand the specific factors that affect their residents’ quality of living and address those issues that lower their city’s overall quality-of-living ranking. Mercer advises municipalities through a holistic approach that addresses their goals of progressing towards excellence, and attracting multinational companies and globally mobile talent by improving the elements that are measured in its Quality of Living survey.
Mercer hardship allowance recommendations
Mercer evaluates local living conditions in more than 460 cities it surveys worldwide. Living conditions are analysed according to 39 factors, grouped in 10 categories:
Political and social environment (political stability, crime, law enforcement, etc.)
Economic environment (currency exchange regulations, banking services)
Socio-cultural environment (media availability and censorship, limitations on personal freedom)
Medical and health considerations (medical supplies and services, infectious diseases, sewage, waste disposal, air pollution, etc)
Schools and education (standards and availability of international schools)
Public services and transportation (electricity, water, public transportation, traffic congestion, etc)
Recreation (restaurants, theatres, cinemas, sports and leisure, etc)
Consumer goods (availability of food/daily consumption items, cars, etc)
Housing (rental housing, household appliances, furniture, maintenance services)
Natural environment (climate, record of natural disasters)
The scores attributed to each factor, which are weighted to reflect their importance to expatriates, allow for objective city-to-city comparisons. The result is a quality of living index that compares relative differences between any two locations evaluated. For the indices to be used effectively, Mercer has created a grid that allows users to link the resulting index to a quality of living allowance amount by recommending a percentage value in relation to the index.
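As a rough illustration of the mechanics described above, the sketch below computes a weighted score per city, indexes a host city against a base city pegged at 100, and maps the result to an allowance percentage through a grid. The category weights, scores, thresholds, and function names are all assumptions invented for the example; Mercer's actual 39-factor weighting and its grid are not reproduced here.

```python
# Minimal sketch of a weighted quality-of-living index; all numbers are
# hypothetical stand-ins, not Mercer's actual weights or thresholds.

# Hypothetical weights for the 10 categories listed above (sum to 1.0).
WEIGHTS = {
    "political_social": 0.20,
    "economic": 0.05,
    "socio_cultural": 0.05,
    "medical_health": 0.15,
    "schools": 0.10,
    "public_services": 0.15,
    "recreation": 0.05,
    "consumer_goods": 0.05,
    "housing": 0.10,
    "natural_environment": 0.10,
}

def city_score(scores: dict) -> float:
    """Weighted sum of per-category scores, each on a 0-100 scale."""
    return sum(WEIGHTS[cat] * scores[cat] for cat in WEIGHTS)

def relative_index(host: dict, base: dict) -> float:
    """Index the host city against a base city pegged at 100."""
    return 100.0 * city_score(host) / city_score(base)

def allowance_pct(index: float) -> float:
    """Hypothetical grid linking the index to an allowance percentage."""
    if index >= 95:
        return 0.00   # comparable quality of living: no hardship allowance
    if index >= 85:
        return 0.05
    if index >= 70:
        return 0.10
    return 0.20       # severe hardship location

# Example: the host city scores lower on safety and on health care.
base = {cat: 90 for cat in WEIGHTS}
host = dict(base, political_social=50, medical_health=55)
idx = relative_index(host, base)
print(f"index = {idx:.1f}, recommended allowance = {allowance_pct(idx):.0%}")
```

Because the index is a ratio of weighted scores, swapping in a different base city simply rescales the comparison, which is consistent with the comparative base-city and host-city indexes the survey offers.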
Mercer Quality of Living Survey 2014 – Top 5 and Bottom 5 Cities by Region
Top 5 and Bottom 5 Cities – North America
Regional Rank 2014 | Overall Rank 2014 | City | Country
1 | 5 | VANCOUVER | CANADA
2 | 14 | OTTAWA | CANADA
3 | 15 | TORONTO | CANADA
4 | 23 | MONTREAL | CANADA
5 | 27 | SAN FRANCISCO | UNITED STATES
1 (lowest in region) | 122 | MEXICO CITY | MEXICO
2 | 70 | DETROIT | UNITED STATES
3 | 67 | ST. LOUIS | UNITED STATES
4 | 66 | HOUSTON | UNITED STATES
5 | 65 | MIAMI | UNITED STATES

Top 5 and Bottom 5 Cities – Central and South America
Regional Rank 2014 | Overall Rank 2014 | City | Country
1 | 69 | POINTE-À-PITRE | GUADELOUPE
2 | 72 | SAN JUAN | PUERTO RICO
3 | 77 | MONTEVIDEO | URUGUAY
4 | 81 | BUENOS AIRES | ARGENTINA
5 | 93 | SANTIAGO | CHILE
1 (lowest in region) | 221 | PORT-AU-PRINCE | HAITI
2 | 181 | TEGUCIGALPA | HONDURAS
3 | 176 | CARACAS | VENEZUELA
4 | 175 | SAN SALVADOR | EL SALVADOR
5 | 170 | MANAGUA | NICARAGUA

Top 5 and Bottom 5 Cities – Europe
Regional Rank 2014 | Overall Rank 2014 | City | Country
1 | 1 | VIENNA | AUSTRIA
2 | 2 | ZURICH | SWITZERLAND
3 | 4 | MUNICH | GERMANY
4 | 6 | DÜSSELDORF | GERMANY
5 | 7 | FRANKFURT | GERMANY
1 (lowest in region) | 191 | TBILISI | GEORGIA
2 | 189 | MINSK | BELARUS
3 | 180 | YEREVAN | ARMENIA
4 | 179 | TIRANA | ALBANIA
5 | 168 | ST. PETERSBURG | RUSSIA

Top 5 and Bottom 5 Cities – Asia (excluding Australia and New Zealand)
Regional Rank 2014 | Overall Rank 2014 | City | Country
1 | 25 | SINGAPORE | SINGAPORE
2 | 43 | TOKYO | JAPAN
3 | 47 | KOBE | JAPAN
4 | 49 | YOKOHAMA | JAPAN
5 | 57 | OSAKA | JAPAN
1 (lowest in region) | 209 | DUSHANBE | TAJIKISTAN
2 | 208 | DHAKA | BANGLADESH
3 | 206 | ASHKHABAD | TURKMENISTAN
4 | 204 | BISHKEK | KYRGYZSTAN
5 | 202 | TASHKENT | UZBEKISTAN

Top 3 Cities – Australasia
Regional Rank 2014 | Overall Rank 2014 | City | Country
1 | 3 | AUCKLAND | NEW ZEALAND
2 | 10 | SYDNEY | AUSTRALIA
3 | 12 | WELLINGTON | NEW ZEALAND

Top 5 and Bottom 5 Cities – Middle East and Africa
Regional Rank 2014 | Overall Rank 2014 | City | Country
1 | 73 | DUBAI | UNITED ARAB EMIRATES
2 | 78 | ABU DHABI | UNITED ARAB EMIRATES
3 | 82 | PORT LOUIS | MAURITIUS
4 | 85 | DURBAN | SOUTH AFRICA
5 | 90 | CAPE TOWN | SOUTH AFRICA
1 (lowest in region) | 223 | BAGHDAD | IRAQ
2 | 222 | BANGUI | CENTRAL AFRICAN REPUBLIC
3 | 220 | N’DJAMENA | CHAD
4 | 219 | SANA’A | YEMEN ARAB REPUBLIC
5 | 218 | BRAZZAVILLE | CONGO
The information and data obtained through the Quality of Living reports are for information purposes only and are intended for use by multinational organisations, government agencies, and municipalities. They are not designed or intended for use as the basis for foreign investment or tourism. In no event will Mercer be liable for any decision made or action taken in reliance on the results obtained through the use of, or the information or data contained in, the reports. While the reports have been prepared based upon sources, information, and systems believed to be reliable and accurate, they are provided on an “as is” basis, and Mercer accepts no responsibility/liability for the validity/accuracy (or otherwise) of the resources/data used to compile the reports. Mercer and its affiliates make no representations or warranties with respect to the reports, and disclaim all express, implied and statutory warranties of any kind, including representations and implied warranties of quality, accuracy, timeliness, completeness, merchantability, and fitness for a particular purpose. ||||| VIENNA (Reuters) - Vienna's excellent infrastructure, safe streets and good public health service make it the nicest place to live in the world, consulting group Mercer said in a global survey which put Baghdad firmly in last place.
German and Swiss cities also performed especially well in the quality of living rankings, with Zurich, Munich, Dusseldorf, Frankfurt, Geneva and Bern in the top 10.
The Austrian capital, with its ornate buildings, public parks and extensive bicycle network recently reduced the cost of its annual public transport ticket to 1 euro a day.
Serious crime is rare and the city of around 1.7 million inhabitants regularly tops global quality of life surveys.
But Mercer warned that top-ranking European cities could not take their position for granted in the survey, which assessed more than 200 cities.
"They are not immune to any decrease of living standards should this (economic) turmoil persist," Mercer's senior researcher Slagin Parakatil said on the company's website.
Mercer, which also ranked cities according to personal safety, gave Athens a poor score because of clashes between demonstrators and police and political instability.
"In 2011 Athens is ranked in Europe among the lowest in the personal safety ranking," Parakatil said.
Oslo also fell to 24th place in the separate safety survey because of Anders Breivik's mass killings in July. It would usually be in the top 15, Mercer said.
Baghdad's political turmoil, poor security enforcement and attacks on local people and foreigners made it the worst place to live in 2011, both in terms of life quality and safety, Mercer said.
Political and economic unrest in Africa and the Middle East also pushed down scores in those regions.
"Many countries such as Libya, Egypt, Tunisia and Yemen have seen their quality of living levels drop considerably," Parakatil said.
"Political and economic reconstruction in these countries, combined with funding to serve basic human needs, will undoubtedly boost the region."
He said that while the outlook is uncertain for most of the world because of economic and political turmoil, cities in Asia-Pacific look set to benefit thanks to political stability and solid growth.
Auckland, Sydney, Wellington, Melbourne and Perth made it into the top 20 for quality of life in 2011 while Singapore was the highest-ranking Asian city in 25th place.
Top 10 in Mercer Quality of Living survey
1 Vienna Austria
2 Zurich Switzerland
3 Auckland New Zealand
4 Munich Germany
5 Dusseldorf Germany
5 Vancouver Canada
7 Frankfurt Germany
8 Geneva Switzerland
9 Bern Switzerland
9 Copenhagen Denmark
Full city rankings: bit.ly/syDUPF
(Reporting by Sylvia Westall) | If you're looking for the world's top quality of living, get thee to Vienna, the elegant European capital that tops the list of best cities to live, reports Reuters. The other end of the spectrum is found, perhaps unsurprisingly, in Baghdad. The rankings are compiled by consulting group Mercer, which evaluated cities based on public safety, housing, local economy, recreation, and a bevy of other quality-of-life indicators. The top 10: Vienna, Austria; Zurich, Switzerland; Auckland, New Zealand; Munich, Germany; Dusseldorf, Germany; Vancouver, Canada; Frankfurt, Germany; Geneva, Switzerland; and a tie between Copenhagen, Denmark, and Bern, Switzerland. Check out the top 50 here.
||||| We are deeply saddened by the tragic event that occurred yesterday at our Lums Pond State Park Go Ape course.
We know many of you have questions and concerns, and we want to be as transparent as possible. Your safety is our number one priority. We have carried out our own thorough investigation and continue to work with external agencies including the Delaware State Police.
Based on the findings of our investigation, the incident was not a result of structural or equipment failure. Our findings are that the guest had unfortunately disconnected herself from the safety system.
We can confirm that a full inspection of the course and safety equipment has been completed and it remains in sound operational condition. We have made the decision to continue to operate at all of our other courses; however, our Lums Pond course will be closed until further notice as a mark of respect.
We would like to thank the State Police, our Park Partners and the emergency services for their prompt assistance. The thoughts and prayers of the entire Go Ape team remain with the family and friends.
Go Ape ||||| A 59-year-old Felton woman died Wednesday after falling from a platform at a zip line attraction at Lums Pond State Park. 8/25/16 Damian Giletto/The News Journal
The Felton woman was standing on a platform at the outdoor zip-lining adventure course when she lost her footing and fell.
Tina Werner, who fell Wednesday at Go Ape and died on the outdoor adventure course at Lums Pond State Park, kisses her daughter Melissa Slater. (Photo: COURTESY OF MELISSA SLATER)
Story highlights: The 59-year-old woman fell about 35 feet at the outdoor attraction Wednesday.
Police have identified her as Tina Werner, of Felton, and say she lost her footing on the fourth platform at Go Ape.
The outdoor zip-lining course takes about two to three hours to complete and takes patrons as high as 50 feet up.
Tina Werner died Wednesday completing her bucket list.
The 59-year-old Felton woman was standing on the fourth platform at Go Ape, an outdoor zip-lining adventure course in Lums Pond State Park, when she fell about 35 feet to the ground below, Delaware State Police said Thursday.
Go Ape said in a statement Thursday that witnesses said Werner "unfortunately disconnected herself from the safety system" on the final platform of site four. The company stressed that she received proper training on how to remain attached to the safety system.
Police have not said whether Werner was properly attached to the high-ropes adventure course, made up of "a series of zip lines, Tarzan swings, rope ladders, bridges, swings, and trapezes," according to the company's website.
Go Ape pays the state 3% of its gross revenue or a minimum of $15,000 a year to operate the attraction. The company pays for all costs to operate the attraction. Park rangers patrol Lums Pond State Park, which includes the zip line course.
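For scale, a quick worked reading of that fee clause (the formula below is an illustration inferred from the article's figures, not language quoted from the contract):

$$\text{annual payment} = \max(0.03 \times R,\ \$15{,}000),$$

where $R$ is Go Ape's gross revenue at the site. The $15,000 minimum therefore binds whenever $R < \$500{,}000$, since $0.03 \times 500{,}000 = 15{,}000$.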
Based on the contract between DNREC and Go Ape, the attraction is not regulated by any state inspection procedures, including for safety, and can operate solely on yearly inspections performed by a third party chosen by the company. The contract does allow representatives from the state to conduct inspections if desired.
According to Delaware code, the state Fire Marshal's Office is required to inspect and approve any "mechanical device or devices that carry or convey passengers along, around or over a fixed or restricted route or course or within a defined area for the purpose of giving its passengers amusement, pleasure or excitement." This includes everything from coin-operated machines and amusement park rides to seesaws and playground swings.
Assistant State Fire Marshal Michael Chionchio said because the zip lines rely on gravity rather than electric or mechanical functions, they don't fall under the state's jurisdiction or inspection requirements.
A 10-year contract between DNREC and Go Ape indicates that the company is required to have a course construction and maintenance inspection once a year by an external course constructor, as well as complete a yearly review with OSHA and the operations director and an annual review with the operations director and the site manager.
There are also required daily checks of equipment and course maintenance by instructors and the site manager, according to the contract.
Go Ape spokesman Jeffrey A. Davis said the company has never had major injuries at any of its 15 locations.
"Any injuries that guests have experienced have been minor and treatable," he said.
Werner's daughter, Melissa Slater, posted a photo on Facebook Wednesday night of her mother kissing her cheek on her wedding day. She declined to comment further Thursday.
"Today, about 3 hours ago, I was told that my mother has died," she wrote. "Full of love and adventure, I am thankful to be her daughter. My mom died completing her bucket list, zip lining in Newark."
"During this time, I seek to understand God's purpose. I want to take this opportunity to remind you that we never know when we are no longer on this earth," she added.
She also asked for prayers for her family, writing that "this is truly the hardest situation that I have ever faced."
Werner was with relatives at the attraction, police said.
Tina's husband, Steve Werner, said completing the zip line was a dream of his wife's and that she was "doing well in checking things off of the list." His daughter broke the news to him about his wife's death, leaving him "crushed," he said.
“I describe her as having one of the biggest hearts in the world," Steve Werner said. "She would do anything for anyone. She’s going to be missed.”
The course, which takes about two to three hours to complete, spans 7 acres and has five zip lines, four of which travel over Lums Pond. The highest platform in the park, according to Go Ape, is 48 feet.
The attraction closed after Werner fell Wednesday and did not open Thursday. Go Ape officials have not announced plans for when it will reopen, Davis said. The company said in a statement that a full safety inspection has already occurred and the course has been cleared.
"All of the course and associated safety equipment was and remains in sound operational condition," according to the statement. "Nothing was broken or unserviceable."
The company has multiple Go Ape sites throughout the country, including locations in Pittsburgh; Indianapolis; Rockville, Maryland; and Williamsburg, Virginia. Officials stressed that more than 6 million people have completed Go Ape courses safely.
Master Corporal Jeffrey R. Hale answers questions concerning the death after a fall at Go Ape during a press conference at Troop 2. (Photo: SUCHAT PEDERSON/THE NEWS JOURNAL)
Numerous medical units, including a state police helicopter, were called to the state park near Kirkwood after her fall. Before they arrived, employees and park rangers attempted to save Werner's life.
She was transported to Christiana Hospital, where she was pronounced dead. An autopsy will be performed by the Division of Forensic Science on Thursday. Results have not been released.
Adventure-seekers receive a 30-minute training session before they are turned loose in the forest canopy. Go Ape confirmed Tina Werner completed the safety training Wednesday.
Though instructors give safety briefings and training, the course is not staffed at every stop with employees, according to the website. It notes that instructors "are constantly patrolling the course to offer assistance and encouragement as needed."
The company is slated to expand its offerings at Lums Pond on Sept. 3 to include a Treetop Junior Course, which would allow participants ages 6 to 12 the opportunity to complete more than 18 obstacles and two zip lines at heights of 20 feet above the ground. Go Ape said this system is designed so participants stay attached the entire time they are in the trees and become unhooked only when back on the ground.
The course was listed as the first of its kind in the Philadelphia and Wilmington area when it opened in summer 2013.
A closed sign on the window of Go Ape at Lums Pond, which remains closed after a Felton woman fell to her death there last week. (Photo: SUCHAT PEDERSON/THE NEWS JOURNAL)
Reporter Jerry Smith contributed to this story.
Contact Brittany Horn at (302) 324-2771 or bhorn@delawareonline.com. Follow her on Twitter at @brittanyhorn.
||||| A 59-year-old woman who fell 35 feet to her death from a zip line platform at a Delaware state park had "disconnected herself from the safety system" at the time of the accident, witnesses say.
Tina Werner was waiting to ride the zip line at the Go Ape Tree Top Adventure at Lums Pond State Park in Bear on Wednesday afternoon when she tumbled from the platform.
Park rangers and Go Ape employees performed first aid on Werner until help arrived, but she was pronounced dead at a nearby hospital. Foul play is not suspected.
Daughter Melissa Slater wrote on Facebook later Wednesday that her mother was "full of love and adventure," and was "completing her bucket list."
Slater said her mom, who recently had traveled to Venice and had taken a hot-air balloon ride, told her Tuesday night that she was going to do the zip line.
"I wasn't surprised," said Slater.
An employee closes up the Go Ape course for the day after a 59-year-old woman fell to her death at the zip line course in Lums Pond State Park in Bear, Del., on Wednesday. (Kyle Grantham/AP)
She said her mom, who was visiting Lums Pond with a friend, was able to complete at least one zip line ride before falling.
"So she did do it," said Slater.
Go Ape said in a statement on Thursday that Werner "had unfortunately disconnected herself from the safety system" when she fell, according to witnesses. The tragedy took place toward "the end of the activity."
"We confirm that a full inspection of the course, with particular focus on the last platform at site 4, has been undertaken and all of the course and associated safety equipment was and remains in sound operational condition," read the statement. "Nothing was broken or unserviceable."
While participants navigate the course without direct supervision, Go Ape says its instructors are constantly on the course to offer assistance.
Patrons receive a 30-minute training session before embarking on the course, according to the Go Ape website.
"Waivers are signed by participants to accept responsibility for following the safety rules and advice on the course and assume all risks associated with his/her participation," according to the the company's website. "These safety rules are communicated thoroughly in a safety brief that is required of every participant before they Go Ape."
The rides are inspected on a regular basis, said Go Ape spokesman Jeff Davis.
The ride has been closed for an undetermined amount of time to help with the investigation, and also out of respect for Werner's family.
"The Go Ape company is extremely saddened by this," Davis said.
With News Wire Services ||||| Delaware State Park rangers wait outside the Go Ape zip line course after a 59-year-old woman fell to her death at the zip line course in Lums Pond State Park on Wednesday, Aug. 24, 2016. Jeff Davis, a spokesman for Go Ape, said Thursday, Aug. 25, 2016, that the rides are inspected on a regular basis.... (Associated Press)
DOVER, Del. (AP) — A woman who fell 35 feet to her death from a zip line platform had disconnected herself from the safety system, the attraction's operator said Thursday.
Delaware State Police are investigating how Tina Werner tumbled off the platform at the Go Ape Tree Top Adventure attraction in Lums Pond State Park on Wednesday.
Participants at Go Ape courses are equipped with climbing harnesses and two sets of ropes with carabiners that they unclip and clip to safety wires in sequence as they move through the trees.
Werner, 59, of Felton, had completed the required safety training, and was nearing the end of the course when she fell, said Jeff Davis, a spokesman for Go Ape. The attraction in Bear spans seven acres and includes four zip lines and a variety of swings, rope ladders, bridges and trapezes.
"Participant witnesses have stated that at the time of the accident the participant had unfortunately disconnected herself from the safety system," Davis said in an email.
An inspection found that all of the course and associated safety equipment was in sound operating condition, and that "nothing was broken or unserviceable," Davis wrote.
Late Thursday, state police said in a news release that an autopsy found that Werner died from "multiple blunt force trauma by way of an accident."
Werner's daughter, Melissa Slater, described her mom as "super fun," and "adventurous." After traveling to Venice, Italy and taking a hot-air balloon ride, Werner had told her daughter Tuesday that riding the zip line was next.
"She was finishing her bucket list," said Werner's daughter, Melissa Slater.
According to the website of Go Ape, which is based in Frederick, Maryland, and operates attractions in 15 states, patrons receive a 30-minute training session before being turned loose on the course, which can take them as high as 50 feet in the air.
While participants navigate the course without direct supervision, Go Ape says its instructors are constantly patrolling the course to offer assistance and encouragement as needed.
"Waivers are signed by participants to accept responsibility for following the safety rules and advice on the course and assume all risks associated with his/her participation," the company's website states. "These safety rules are communicated thoroughly in a safety brief that is required of every participant before they Go Ape."
Slater said her mom, who was visiting Lums Pond with a friend, was able to complete at least one zip line ride before falling.
"So she did do it," said Slater, who posted a tribute on Facebook.
The Go Ape attraction opened at Lums Pond in 2013, and Davis said its equipment is regularly inspected. He said the attraction has been closed for now to help with the investigation, and out of respect to Werner's family.
"The Go Ape company is extremely saddened by this," he said.
Park rangers and Go Ape employees performed first aid on Werner until paramedics arrived, but she was pronounced dead at Christiana Hospital. | Witnesses say a 59-year-woman who fell to her death at a Delaware zip line park "unfortunately disconnected herself from the safety system," says the park in a statement. Tina Werner plunged 35 feet from a platform at the Go Ape Tree Top Adventure at Lums Pond State Park on Wednesday afternoon. Police haven't confirmed the witness accounts, reports the Journal News, but the park statement notes "a full inspection of the course and safety equipment has been completed and it remains in sound operational condition." Werner’s daughter, Melissa Slater, tells the New York Daily News her mom was "full of love and adventure" and had recently traveled to Venice and gone hot air ballooning. Zip lining was another item on her bucket list. "My mom died completing" that bucket list, Slater wrote on Facebook. "This is truly the hardest situation that I have ever faced. I ask for you prayers for my family." Werner, of Felton, was at the park with a friend. She completed one ride before her fall. Each zip line participant is fitted with climbing harnesses and two sets of ropes with carabiners that they clip to safety wires, reports the AP, which notes Werner had gone through safety training as required. Go Ape, which opened the Delaware park in 2013, has never seen a major accident at any of the company's 15 locations, a spokesman says. (This woman plunged 150-feet in a zip line fall.) |
Media caption: Zimbabwe's week of upheaval in two minutes
Zimbabwe's ruling Zanu-PF party has summoned its MPs to discuss the future of its leader, President Robert Mugabe, after the party's own deadline for his resignation came and went on Monday.
The embattled leader surprised Zimbabweans on Sunday, declaring on TV that he planned to remain as president.
Zanu-PF says it backs impeachment, and proceedings could begin as soon as Tuesday when parliament meets.
In a draft motion seen by Reuters news agency, the party blamed the president for an "unprecedented economic tailspin".
Mr Mugabe's grip on power has weakened considerably since the country's army intervened last Wednesday in a row over who should succeed him.
The crisis began two weeks ago when the 93-year-old leader sacked his deputy Emmerson Mnangagwa, angering army commanders who saw it as an attempt to position his wife Grace as next president.
Zimbabwe has since then seen huge street rallies demanding his immediate resignation.
The protests have been backed by the influential war veterans - who fought in the conflict that led to independence from Britain in 1980.
The group's leader, Chris Mutsvangwa, on Monday called for more demonstrations. "Mugabe, your rule is over," Mr Mutsvangwa said. "The emperor has no clothes."
Choreographing a departure
Andrew Harding in Harare
The city is swirling with rumours that Mr Mugabe is planning his resignation and that he may go back on television to announce it at any stage, and that Sunday's speech was simply about giving carte blanche to the military for what they've done.
But we just don't know at this stage if he will give in to the pressure from the war veterans, his own party, and the public.
Mr Mugabe said in his speech that he planned to preside over the Zanu-PF congress next month, a statement people here found baffling after the party voted to strip him of his leadership and kick out his wife.
What is clear is that everyone here believes that the Mugabe era is over. Saturday's protests unleashed something and people believe that a line has been crossed. Now it is really about negotiating the time, the process, the choreography of Mr Mugabe's departure.
The fear of Zanu-PF and of the security services will not go away overnight. People here grew up with that fear. In the meantime, the streets are calm, but Tuesday may bring more demonstrations.
Media caption: Robert Mugabe: "The congress is due... I will preside over its processes"
What did Mugabe say in his speech?
During the 20-minute address, the president, who was flanked by generals, made no mention of the pressure from his party and the public to quit.
Instead, he declared that the military had done nothing wrong by seizing power and placing him under house arrest.
"Whatever the pros and cons of how they [the army] went about their operation, I, as commander-in-chief, do acknowledge their concerns," he said, in reference to the army's move last week to take over the state broadcaster in the capital Harare.
He also said "the [Zanu-PF] party congress is due in a few weeks and I will preside over its processes".
Before Mr Mugabe's speech, Mr Mnangagwa was named as Zanu-PF's new leader and candidate for the 2018 general elections, while Mr Mugabe's wife was expelled.
BBC Africa Editor Fergal Keane said his understanding was that Mr Mugabe had agreed to resign, but then changed his mind.
Our correspondent says the generals have no intention of forcing Mr Mugabe out by the barrel of a gun, and are happy to let the Zanu-PF carry out its procedures, working through impeachment if necessary.
So what happens next?
Impeachment proceedings could be launched on Tuesday in parliament - but it is not clear how long this would take.
Both the National Assembly and the Senate need to pass a vote by simple majority to begin the process, which is laid out in the constitution.
This can either be on grounds of "serious misconduct", "violation" of the constitution or "failure to obey, uphold or defend" it, or "incapacity".
The chambers must then appoint a joint committee to investigate removing the president.
If the committee recommends impeachment, the president can then be removed if both houses back it with two-thirds majorities.
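To put those thresholds in rough numbers (the seat counts are an illustrative assumption supplied here, not figures from the article): Zimbabwe's National Assembly has about 270 members and its Senate about 80, so launching the process takes simple majorities of roughly 136 and 41 votes, while removal requires two-thirds majorities in both houses:

$$\left\lceil \tfrac{2}{3} \times 270 \right\rceil = 180, \qquad \left\lceil \tfrac{2}{3} \times 80 \right\rceil = 54.$$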
The opposition MDC-T party has tried unsuccessfully to impeach Mr Mugabe in the past, but this time the ruling party - which has an overwhelming majority in both houses - is likely to go against him.
Media caption: Zimbabwe reacts: "We need him to resign... our lives are terrible right now"
The advantage for the military is that if Mr Mugabe is impeached, it can claim that he was removed legally, and not by force.
The problem for the generals is that the current vice-president would then take power. That is Phelekezela Mphoko, a supporter of Mr Mugabe's wife Grace.
The military would prefer to install Emmerson Mnangagwa, the former vice-president who was briefly exiled.
And it is still possible that Mr Mugabe could delay the process or cling to power by refusing to resign - and be forced into exile himself.
What's the reaction been?
The War Veterans Association, which used to back Mr Mugabe, now says it is time for him to step down.
"Thirty-seven years, you have had your time, you are toast now politically," Mr Mutsvangwa told the BBC.
Opposition leader Morgan Tsvangirai said he was "baffled" by the president's address.
"He's playing a game. He has let the whole nation down," he told Reuters news agency.
Mr Mugabe has led the country since it gained independence from Britain in 1980. ||||| FILE - In this Aug. 30, 2017 file photo, Zimbabwean Deputy President Emmerson Mnangagwa greets party supporters at a ZANU-PF rally in Gweru, Zimbabwe. A Zimbabwe ruling party official confirmed Sunday, Nov. 19, 2017 that the Central Committee has fired President Robert Mugabe as party leader and replaced... (Associated Press)
JOHANNESBURG (AP) — Emmerson Mnangagwa, elected Sunday as the new leader of Zimbabwe's ruling political party and positioned to take over as the country's president, has engineered a remarkable comeback using skills he no doubt learned from his longtime mentor, President Robert Mugabe.
Mnangagwa served for decades as Mugabe's enforcer — a role that gave him a reputation for being astute, ruthless and effective at manipulating the levers of power. Among the population, he is more feared than popular, but he has strategically fostered a loyal support base within the military and security forces.
A leading government figure since Zimbabwe's independence in 1980, he became vice president in 2014 and is so widely known as the "Crocodile" that his supporters are called Team Lacoste for the brand's crocodile logo.
The 75-year-old "is smart and skillful, but will he be a panacea for Zimbabwe's problems? Will he bring good governance and economic management? We'll have to watch this space," said Piers Pigou, southern Africa expert for the International Crisis Group.
Mugabe unwittingly set in motion the events that led to his own downfall, firing his vice president on Nov. 6. Mnangagwa fled the country to avoid arrest while issuing a ringing statement saying he would return to lead Zimbabwe.
"Let us bury our differences and rebuild a new and prosperous Zimbabwe, a country that is tolerant to divergent views, a country that respects opinions of others, a country that does note isolate itself from the rest of the world because of one stubborn individual who believes he is entitled to rule this country until death," he said in the Nov. 8 statement.
He has not been seen in public but is believed to be back in Zimbabwe.
For weeks, Mnangagwa had been publicly demonized by Mugabe and his wife, Grace, so he had time to prepare his strategy. Within days of the vice president's dismissal, his supporters in the military put Mugabe and his wife under house arrest.
When Mugabe refused to resign, a massive demonstration Saturday brought thousands of people into the streets of the capital, Harare. It was not a spontaneous uprising. Thousands of professionally produced posters praising Mnangagwa and the military had been printed ahead of time.
"It was not a last-minute operation," Pigou said. "The demonstration was orchestrated."
At the same time, Mnangagwa's allies in the ruling ZANU-PF party lobbied for the removal of Mugabe as the party leader. At a Central Committee meeting Sunday, Mnangagwa was voted in as the new leader of the party, which had been led by Mugabe since 1977.
In an interview with The Associated Press years ago, Mnangagwa was terse and stone-faced, backing up his reputation for saying little but acting decisively. Party insiders say that he can be charming and has friends of all colors.
Mnangagwa joined the fight against white minority rule in Rhodesia while still a teen in the 1960s. In 1963, he received military training in Egypt and China. As one of the earliest guerrilla fighters against Ian Smith's Rhodesian regime, he was captured, tortured and convicted of blowing up a locomotive in 1965.
Sentenced to death by hanging, he was found to be under 21, and his punishment was commuted to 10 years in prison. He was jailed with other prominent nationalists including Mugabe.
While imprisoned, Mnangagwa studied through a correspondence school. After his release in 1975, he went to Zambia, where he completed a law degree and started practicing. Soon he went to newly independent Marxist Mozambique, where he became Mugabe's assistant and bodyguard. In 1979, he accompanied Mugabe to the Lancaster House talks in London that led to the end of Rhodesia and the birth of Zimbabwe.
"Our relationship has over the years blossomed beyond that of master and servant to father and son," Mnangagwa wrote this month of his relationship with Mugabe.
When Zimbabwe achieved independence in 1980, Mnangagwa was appointed minister of security. He directed the merger of the Rhodesian army with Mugabe's guerrilla forces and the forces of rival nationalist leader Joshua Nkomo. Ever since, he has kept close ties with the military and security forces.
In 1983, Mugabe launched a brutal campaign against Nkomo's supporters that became known as the Matabeleland massacres for the deaths of 10,000 to 20,000 Ndebele people in Zimbabwe's southern provinces.
Mnangagwa was widely blamed for planning the campaign of the army's North Korean-trained Fifth Brigade on their deadly mission into the Matabeleland provinces. Mnangagwa denies this.
He is also reputed to have amassed a considerable fortune; he was named in a United Nations investigation into the exploitation of mineral resources in Congo and has been active in making Harare a significant diamond trading center.
In 2008, he was Mugabe's election agent in balloting that was marked by violence and allegations of vote-rigging. He also helped broker the creation of a coalition government that brought in opposition leader Morgan Tsvangirai as prime minister.
In recent years, Mnangagwa has promoted himself as an experienced leader who will bring stability to Zimbabwe. But his promises to return Zimbabwe to democracy and prosperity are viewed with skepticism by many experts.
"He has successfully managed a palace coup that leaves ZANU-PF and the military in charge. He's been Mugabe's bag man for decades," said Zimbabwean author and commentator Peter Godwin. "I have low expectations about what he will achieve as president. I hope I will be proved wrong."
Godwin, who has followed Mnangagwa for years, said he has little of Mugabe's charisma or talent for public speaking.
Todd Moss, Africa expert for the Center for Global Development, also expressed reservations.
"Despite his claims to be a business-friendly reformer, Zimbabweans know Mnangagwa is the architect of the Matabeland massacres and that he abetted Mugabe's looting of the country," Moss said. "Mnangagwa is part of its sad past, not its future." ||||| HARARE (Reuters) - President Robert Mugabe stunned Zimbabwe on Sunday by making no mention of resignation in a television address, defying his own ZANU-PF party, which had sacked him hours earlier, and hundreds of thousands of protesters who had already hailed his downfall.
Two sources - one a senior member of the government, the other familiar with talks with leaders of the military - had told Reuters Mugabe would announce his resignation to the nation after ZANU-PF dismissed him as its leader in a move precipitated by an army takeover four days earlier.
But in the speech from his State House office, sitting alongside a row of generals, Mugabe acknowledged criticisms from ZANU-PF, the military and the public but made no mention of his own position.
Instead, he said the events of the week were not “a challenge to my authority as head of state and government”, and pledged to preside over the congress scheduled for next month.
Opposition leader Morgan Tsvangirai was dumbstruck.
“I am baffled. It’s not just me, it’s the whole nation. He’s playing a game,” he told Reuters. “He is trying to manipulate everyone. He has let the whole nation down.”
ZANU-PF had given the 93-year-old, who has led his country since independence in 1980, less than 24 hours to quit as head of state or face impeachment, an attempt to secure a peaceful end to his tenure after a de facto military coup.
Chris Mutsvangwa, the leader of the liberation war veterans who have been spearheading an 18-month campaign to oust Mugabe, said plans to impeach him in parliament, which next sits on Tuesday, would now go ahead, and that there would be mass protests on Wednesday.
He also implied that Mugabe, who spoke with a firm voice but occasionally lost his way in his script during the 20-minute address, was not aware of what had happened just hours earlier.
“BLIND OR DEAF”
“Either somebody within ZANU-PF didn’t tell him what had happened within his own party, so he went and addressed that meeting oblivious, or (he was) blind or deaf to what his party has told him,” Mutsvangwa said.
ZANU-PF’s central committee had earlier named Emmerson Mnangagwa as its new leader. It was Mugabe’s sacking of Mnangagwa as his vice-president - to pave the way for his wife Grace to succeed him - that triggered the army’s intervention.
People watch as Zimbabwean President Robert Mugabe addresses the nation on television, at a bar in Harare, Zimbabwe, November 19, 2017. REUTERS/Philimon Bulawayo
On Saturday, hundreds of thousands had taken to the streets of the capital Harare to celebrate Mugabe’s expected downfall and hail a new era for their country.
In jubilant scenes, men, women and children ran alongside armoured cars and the troops who stepped in to target what the army called “criminals” in Mugabe’s inner circle.
Many heralded a “second liberation” and spoke of their dreams for political and economic change after two decades of deepening repression and hardship.
They, like the more than 3 million Zimbabweans who have emigrated to neighbouring South Africa in search of a better life, are likely to be bitterly disappointed by Mugabe’s defiance.
Speaking from a secret location in South Africa, his nephew, Patrick Zhuwao, had told Reuters that Mugabe and his wife were “ready to die for what is correct” rather than step down in order to legitimise what he described as a coup.
Zhuwao, who was also sanctioned by ZANU-PF, did not answer his phone on Sunday. However, Mugabe’s son Chatunga railed against those who had pushed out his father.
“You can’t fire a Revolutionary leader!” he wrote on this Facebook page. “ZANU-PF is nothing without President Mugabe.”
DANGER AHEAD
The huge crowds in Harare have given a quasi-democratic veneer to the army’s intervention, backing its assertion that it is merely effecting a constitutional transfer of power, rather than a plain coup, which would risk a diplomatic backlash.
But some of Mugabe’s opponents are uneasy about the prominent role played by the military, and fear Zimbabwe might be swapping one army-backed autocrat for another, rather than allowing the people to choose their next leader.
“The real danger of the current situation is that, having got their new preferred candidate into State House, the military will want to keep him or her there, no matter what the electorate wills,” former education minister David Coltart said.
The United States, a longtime Mugabe critic, said it was looking forward to a new era in Zimbabwe, while President Ian Khama of neighbouring Botswana said Mugabe had no diplomatic support in the region and should resign at once.
Besides changing its leadership, ZANU-PF said it wanted to change the constitution to reduce the power of the president, a possible sign of a desire to move towards a more pluralistic and inclusive political system.
However, Mnangagwa’s history as state security chief during the so-called Gukurahundi crackdown, when an estimated 20,000 people were killed by the North Korean-trained Fifth Brigade in Matabeleland in the early 1980s, suggested that quick, sweeping change was unlikely.
“The deep state that engineered this change of leadership will remain, thwarting any real democratic reform,” said Miles Tendi, a Zimbabwean academic at Oxford University. | Robert Mugabe, already known as the Energizer Bunny of southern African strongman leaders, stunned Zimbabwe Sunday by failing to deliver a widely expected resignation. Instead, the 93-year-old, speaking after his own ZANU-PF party fired him as leader, promised to preside over a party congress set for next month, Reuters reports. Mugabe, flanked by generals during a 20-minute televised speech, said the events of the previous days, including an army takeover, didn't pose "a challenge to my authority as head of state and government." The party earlier said it would launch impeachment proceedings if he didn't resign by noon Monday, a deadline that has now passed. According to Fergal Keane, the BBC's Africa editor, Mugabe apparently decided to resign but then changed his mind. Keane says the military appears to have decided to let the impeachment process take its course instead of forcing Mugabe out. Emmerson Mnangagwa, the vice president whose firing led to the army takeover, was elected as the party's new leader during its Central Committee meeting Sunday. The 75-year-old, nicknamed "The Crocodile" for his shrewd and ruthless ways, is believed to have now returned to Zimbabwe from abroad, the AP reports. At the same meeting, ZANU-PF expelled Grace Mugabe, the president's deeply unpopular wife. |
Weary of waiting for an economic recovery worth its name, a frustrated American public has sent Barack Obama's job approval rating to a career low - with a majority in the latest ABC News/Washington Post poll favoring a Republican Congress to act as a check on his policies.
Registered voters by 53-39 percent in the national survey say they'd rather see the Republicans in control of Congress as a counterbalance to Obama's policies than a Democratic-led Congress to help support him. It was similar in fall 2010, when the Republicans took control of the House of Representatives and gained six Senate seats.
See PDF with full results and charts here.
Obama's job approval rating, after a slight winter rebound, has lost 5 points among all adults since March, to 41 percent, the lowest of his presidency by a single point. Fifty-two percent disapprove, with "strong" disapproval exceeding strong approval by 17 percentage points. He's lost ground in particular among some of his core support groups.
Economic discontent remains the driving element in political views in this survey, produced for ABC by Langer Research Associates. Americans rate the condition of the economy negatively by 71-29 percent - the least bad since November 2007, but still dismal by any measure. Only 28 percent think the economy's improving, down by 9 points since just before Obama won his second term. He gets just 42 percent approval for handling it.
Economic views are strongly related to political preferences. Among people who see the economy improving, 65 percent prefer Democratic control of Congress, while among those who see the economy as stagnant or worsening, 62 percent favor Republican control. Notably, economic views are linked with preferences for control of Congress regardless of people's partisan affiliation.
The results suggest the corrosive effects of the long downturn on the president's popularity: Among those who say the economy is in bad shape, Obama's overall approval rating has lost 20 points since February 2012, from 46 percent then to 26 percent now.
The president faces other challenges. While he's hailed insurance exchange sign-ups as a marker of the Affordable Care Act's success, the program and his rating for handling it have lost ground, both down from their levels late last month after the Healthcare.gov website was stabilized. The law gets 44 percent support, down 5 points; Obama has just 37 percent approval for its implementation, down 7.
One reason is that the law seems to have opened an avenue for public ire about health care costs to be directed at the administration. Six in 10 blame the ACA for increasing costs nationally, and 47 percent think it's caused their own health care expenses to rise. Regardless of whether or how much those costs would have risen otherwise, Obamacare is taking a heavy dose of the blame.
Separately, a current issue on the world stage offers no respite for Obama: Given continued tensions over Ukraine, just 34 percent of Americans approve of how he's handling that situation, 8 points fewer than early last month. Forty-six percent disapprove, with two in 10 withholding judgment.
DISCONTENT/MIDTERMS - With these and other problems - but chiefly the economy - the public by more than 2-1, 66-30 percent, says the country's headed seriously off on the wrong track. That's about where it's been lately, and more negative than a year ago.
One result is general anti-incumbency: just 22 percent of Americans say they're inclined to re-elect their representative in Congress, unchanged from last month as the fewest in ABC/Post polls dating back 25 years.
Another outcome is risk for the president's party, in punishment for his handling of the helm. A single point divides Democratic and Republican candidates for the House in preference among registered voters, 45-44 percent. Among those who say they're certain to vote (with Republicans more apt to show up in midterms), that goes to 44-49 percent.
Independents, a sometimes swing-voting group, favor Republican House candidates by 55-32 percent (among those who say they're certain to vote). And, as with views on control of Congress, perceptions of the economy correlate with congressional vote preference, regardless of partisanship.
ISSUES - None of this means the GOP is home free. A robust improvement in the economy could change the equation. (As many, at least, say it's currently holding steady, 35 percent, as think it's getting worse, 36 percent.) And even as the brunt of economic unhappiness falls on the president, the public divides essentially evenly on which party they trust more to handle the economy - suggesting that the Republicans have yet to present a broadly appealing alternative.
In another example, for all of Obamacare's controversies, the Democrats hold a slight 8-point edge in trust to handle health care, again indicating that the Republicans have yet to seize the opportunity to present a compelling solution of their own. Indeed, the Democrats have a 6-point lead in trust to handle "the main problems the nation faces" - although, as with all others, that narrows among likely voters, in this case to 37-40 percent, a numerical (but not significant) GOP edge.
The Republicans have a 9-point advantage in trust to handle the federal deficit - an improvement for the party from last month. Similarly, Americans by a 7-point margin trust the Republicans over Obama to find the right mix of spending to cut and federal programs to maintain. The president had an 11-point lead on that question just after the partial government shutdown last fall.
The Democrats push back with two results that they're likely to stress as the November election draws closer: One is a broad, 20-point advantage, 52-32 percent, in trust over the Republicans to help the middle class (but again, this narrows among likely voters). The other is an even wider, 30-point lead, 55-25 percent, in trust to handle issues of particular concern to women.
The Republicans have some vulnerability in other areas, as well. Americans say the Democratic Party comes closer than the GOP to their positions on climate change, by 18 points; whether or not to raise the minimum wage, by 16 points; gay marriage, by 14 points; and the issue of abortion, by 8 points. On one remaining issue, gun control, the Republicans have a slight, 5-point edge.
HEALTH CARE - Obamacare, for its part, is a subject the Republicans have sought to turn to their advantage in the midterm elections, and the poll results show ample opportunity.
Costs are a particular target. As noted, 47 percent of Americans feel that their health care costs are rising as a result of the ACA; 58 percent say the same about the overall costs of health care nationally. Just 8 and 11 percent, respectively, say the law has decreased these costs. If there's a case to be made that costs would have risen anyway - or that they would have risen faster absent the ACA - it's yet to resonate with large segments of the population.
Other assessments also are critical. The public by a 20-point margin, 44-24 percent, is more apt to say the law has made the overall health care system worse rather than better (although the number who say it's made things better is up by 5 points from December). The rest, 29 percent, see no change. Americans by 29-14 percent likewise say the ACA has made their own care worse rather than better, with more, 53 percent, reporting no impact.
Despite the website's improvements, half say the law's implementation is going worse than they expected when it began, vs. 41 percent better - another sign of the persistent antipathy that's dogged Obamacare from the start.
The poll also shows both the striking partisan division on Obamacare and the extent to which, on several questions, independents side more with Republicans on the issue. Thirty-eight percent of Democrats, for instance, say the ACA has increased health care costs nationally; that soars to 67 percent of independents and 73 percent of Republicans. And while 47 percent of Democrats think it's made the health care system better, just 6 and 16 percent of Republicans and independents, respectively, agree.
OBAMA/GROUPS - Divisions among groups remain especially stark in terms of Obama's ratings; further, as noted, he's lost ground in some of his core support groups. The president's approval rating since early March has lost 14 points among liberals, 12 points among people with postgraduate degrees, 10 points among urban residents, 9 points among Democrats and 7 points among those with incomes less than $50,000. He's lost 9 points among independents as well.
With 41 percent approval overall (his previous low was 42 percent last November and the same in October 2011), Obama's at new lows among nonwhites (61-34 percent, approve-disapprove) and liberals (63-31 percent), and matches his lows among moderates (46-48 percent) and independents (33-59 percent). His rating among Democrats, 74-22 percent, is a single point from its low.
Other results also mark the extent of the difficulties facing Obama and his party alike. A form of statistical analysis called regression finds that, as noted above, views on the economy correlate both with congressional vote preference, and views on which party should control Congress, independently of partisan affiliation. That suggests that the Democrats are in serious need of a positive shift in economic views.
That may be hard to accomplish. While 50 percent of Democrats say the economy's in good shape, that plummets not only among Republicans but independents as well, to 12 and 22 percent, respectively. And while 46 percent of Democrats see improvement in the economy, again just 22 percent of independents, and 15 percent of Republicans, agree.
Preferences on which party controls Congress may reflect a general inclination in favor of divided government - and don't always predict outcomes, as in 2002, when more registered voters preferred Democratic control yet the GOP held its ground. It's striking, nonetheless, that this poll finds Republican control favored not only in the 2012 red states, by 56-36 percent, but also by 51-41 percent in the blue states that backed Obama fewer than two years ago.
METHODOLOGY - This ABC News/Washington Post poll was conducted by telephone April 24-27, 2014, in English and Spanish, among a random national sample of 1,000 adults, including landline and cell-phone-only respondents. Results have a margin of sampling error of 3.5 points, including design effect. Partisan divisions are 32-21-38 percent, Democrats-Republicans-independents.
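As a sanity check on that figure (a standard textbook calculation, not something taken from the pollster's documentation): for a simple random sample of $n = 1{,}000$, the maximum 95 percent margin of sampling error is

$$\text{MOE} = z\sqrt{\frac{p(1-p)}{n}} = 1.96\sqrt{\frac{0.5 \times 0.5}{1000}} \approx 3.1 \text{ points},$$

and a design effect of roughly 1.3 - the value implied by the reported figure, not one the pollster states - inflates this by $\sqrt{1.3} \approx 1.14$, yielding the quoted 3.5 points.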
The survey was produced for ABC News by Langer Research Associates of New York, N.Y., with sampling, data collection and tabulation by Abt-SRBI of New York, N.Y. ||||| President Obama’s approval rating fell to 41 percent, down from 46 percent through the first three months of the year and the lowest of his presidency in Washington Post-ABC News polls. (Charles Dharapak/AP)
Democrats face serious obstacles as they look to the November elections, with President Obama’s approval rating at a new low and a majority of voters saying they prefer a Congress in Republican hands to check the president’s agenda, according to a new Washington Post-ABC News poll.
Obama’s approval rating fell to 41 percent, down from 46 percent through the first three months of the year and the lowest of his presidency in Post-ABC News polls. Just 42 percent approve of his handling of the economy, 37 percent approve of how he is handling the implementation of the Affordable Care Act and 34 percent approve of his handling of the situation involving Ukraine and Russia.
Obama’s low rating could be a significant drag on Democratic candidates this fall — past elections suggest that when approval ratings are as low as Obama’s, the president’s party is almost certain to suffer at the ballot box in November.
Republicans are favored to maintain control of the House, with the focus now on whether they can take control of the Senate. One key question about November is who will vote. Turnout in midterm elections is always lower than in presidential elections, and at this point, key elements of the Republican coalition — namely white voters and older voters — say they are more certain to cast ballots this fall than are younger voters and minorities, two groups that Democrats and Obama relied on in 2008 and 2012.
Democrats are not without assets as the midterm election campaigns intensify. Americans trust Democrats over Republicans by 40 to 34 percent to handle the country’s main problems. By significant margins, Americans see Democrats as better for the middle class and on women’s issues. Americans favor the Democrats’ positions on raising the minimum wage, same-sex marriage and on the broad issue of dealing with global climate change.
Graphic: Obama receives low marks as Democrats face midterm turnout challenge
Led by Obama, Democrats have sought to use many of these issues to draw contrasts with Republicans, both nationally and in states with the most competitive races. As yet, however, there is little evidence that those assets outweigh either the normal midterm disadvantages of the party that holds the White House or the dissatisfaction with the general direction of the country and Obama’s leadership generally.
The Affordable Care Act is expected to be a major issue in the midterm elections. Obama recently urged Democrats to defend the law energetically, particularly after the administration announced that 8 million people signed up for it during the initial enrollment period. Republicans are confident that opposition to the new law will energize their supporters.
The Post-ABC poll found that 44 percent say they support the law while 48 percent say they oppose it, which is about where it was at the end of last year and in January. Half of all Americans also say they think implementation is worse than expected.
Last month, a Post-ABC poll found 49 percent of Americans saying they supported the new law compared with 48 percent who opposed it. That finding was more positive for the administration than most other polls at the time. Democrats saw it as a possible leading indicator of a shift in public opinion, but that has not materialized.
A 58 percent majority say the new law is causing higher costs overall, and 47 percent say it will make the health-care system worse. While a majority say the quality of the health care they receive will remain the same, a plurality expect it to result in higher personal costs for that care.
A number of Democratic strategists are urging their candidates to campaign on a message that calls for continued implementation of the law, with some fixes. These strategists say that message is more popular than the “repeal and replace” theme of the Republicans. A separate poll Tuesday from the Kaiser Family Foundation finds nearly six in 10 want Congress to improve the law rather than repeal it and replace it with something new.
Democrats are hoping to put Republicans on the defensive on the question of “what next” for the Affordable Care Act. Republicans say they remain confident that the health-care issue will help them more in November.
Pessimism about the economy also persists, with more than seven in 10 describing the economy in negative terms. Public attitudes about the future of the economy are anything but rosy. Just 28 percent say they think the economy is getting better, while 36 percent say it is getting worse and 35 percent say it’s staying the same.
Americans express continued discontent about the country's direction, with two-thirds saying things are on the wrong track. Asked whether each party's incumbents deserve reelection, at least six in 10 say they do not.
Among registered voters, 45 percent intend to vote for the Democratic candidate in House elections this fall, and 44 percent for the Republican candidate. Based on past elections, that close margin is troubling news for Democrats. Shortly before they lost control of the House in 2010, Democrats held a five-point advantage on this question.
Another measure of voting intentions came when people were asked whether they thought it was more important to have Democrats in charge in Congress to help support Obama’s policies or Republicans in charge to act as a check on the president’s policies. On this, 53 percent of voters say Republicans and 39 percent say Democrats. That is almost identical to the results of the same question when it was asked in September 2010, two months before the GOP landslide.
The decline in Obama’s approval rating in the most recent poll was the result of lower support among both Democrats and independents. At this point, 74 percent of Democrats say they approve of his job performance, one point higher than his lowest ever in Post-ABC surveys. The worry for Obama and his party is that many of the Democrats who disapprove of Obama’s performance simply won’t show up in November.
Although Obama’s overall approval rating is at its lowest point ever in Post-ABC polls, his disapproval is still a few points better than at its worst. That’s because more people than usual say they had no opinion. At this point, Obama’s approval rating looks only slightly better than that of President George W. Bush in the spring of 2006.
Also, the disapproval of Obama’s handling of the situation with Ukraine and Russia is 46 percent, with 20 percent saying they have no opinion on that — perhaps a sign that Americans see few good policy options for the United States in the standoff.
Some Democratic strategists have argued that their candidates have ample arguments to make against Republicans this fall as they seek to hold down expected losses.
The Post-ABC survey sheds light on what they are. Democrats have a significant advantage on eight issues, from health care to climate change to abortion and same-sex marriage. Democrats have a smaller advantage on immigration, and the two parties are roughly equal on the economy. Republicans have the edge on three — guns, the deficit and striking the right balance on which government programs to cut.
Where Democrats have the biggest advantages are on the same contrasts that helped Obama win reelection in 2012 — indicators of which party voters believe is on their side. By 52 to 32 percent, those surveyed say they trust Democrats to do a better job helping the middle class, and by 55 to 25 percent, they trust Democrats on issues that are especially important to women.
How much those attitudes will actually drive voting decisions and voter turnout will be important in determining what happens in November.
The Post-ABC poll was conducted April 24 to 27 among a random national sample of 1,000 adults, including interviews on land lines and with cellphone-only respondents. The overall margin of sampling error is plus or minus 3.5 percentage points.
Scott Clement contributed to this report. | President Obama's approval rating has sunk to a new low, and a majority of the public would now prefer that Republicans control Congress as a check on his power, a new Washington Post/ABC News poll reveals. Obama's approval dropped 5 points to 41%, his lowest ever by one point, and those who "strongly disapprove" outnumber those who "strongly approve" by 17 points. Voters say they prefer a Republican-controlled Congress by a 53-39 margin. The Post notes that split mirrors the response it got to the same question in September 2010, in advance of the GOP's big win. The economy appears to be a major factor, with 71% viewing the economy negatively. Among those people, 62% favor Republican control. The news isn't all bad for Democrats, though. The public trusts Democrats more than Republicans to solve the country's "main problems," and favors Democratic positions on minimum wage, same-sex marriage, and women's issues. When asked how they'll actually vote this fall, 45% said they'd pull the lever for a Democratic House candidate, to only 44% for Republicans. |
RS20860 -- The Supreme Court Upholds EPA Standard- Setting Under the Clean Air Act: Whitman v. AmericanTrucking Ass'ns March 28, 2001 NAAQSs lie at the very heart of the Clean Air Act. These standards prescribe maximum pollutant concentrations for ground-level, outdoor air, and have beenpromulgated by EPA for six pollutants, (2) includingozone and particulates. The NAAQSs determine the stringency of emission limits that each state must imposeon individual stationary sources of the six pollutants, to achieve the NAAQSs within its borders. NAAQSs comein two forms: "primary NAAQSs" protect thepublic health, while "secondary NAAQSs" protect the public "welfare" (non-public health effects). (3) Once a NAAQS has been promulgated, EPA mustreview it(and the "criteria documents" on which it is based) every 5 years, and make such revisions "as may beappropriate." (4) In 1997, EPA revised the NAAQSs for ozone and particulate matter, making them stricter. Given the perceived impact of these more stringent standards on theeconomy, it was unsurprising that numerous legal challenges were brought - with two members of Congress (Rep.Bliley and Sen. Hatch) filing as amici on theside of the challengers. Pursuant to CAA requirement, the suit was filed in the U.S. Court of Appeals for the Districtof Columbia Circuit. In May, 1999, the D.C. Circuit ruled 2-1 that various deficiencies in EPA's promulgation of the two NAAQSs required that they be sent back to EPA for furtherconsideration. (5) Among other things, the two-judgemajority held that EPA's reading of the CAA section governing the setting of primary NAAQSs gave theagency too much discretion, and thus violated the constitutional "nondelegation doctrine." It also rejected industry'sposition that EPA, in arriving at primaryNAAQSs, may consider the costs of implementation. Finally, it ruled that EPA could not enforce its revised primaryNAAQS for ozone, owing to its being aneight-hour standard, rather than the one-hour standard envisioned by CAA nonattainment-area provisions added tothe Act in 1990 (see further discussion on page4). Five months later, the three-judge panel made minor modifications in its opinion, but the full court refused togrant rehearing en banc (all the judges of thecourt sitting). (6) In May, 2000, the Supreme Courttook the case. (7) The Supreme Court in American Trucking gave EPA a unanimous victory on the two major issues in the case: consideration of costs, and nondelegation doctrine. Justice Scalia authored the opinion of the Court, with various justices writing separate concurrences to notedifferences as to rationale, but not as to holding. Consideration of costs. The Court affirmed the D.C. Circuit decision (which, in turn, had endorsed existing case law of the circuit) in holding that whenpromulgating primary NAAQSs, or revised primary NAAQSs, EPA may not consider the costs ofimplementing the new standard. Health impacts, and healthimpacts alone, are to be the touchstone. The governing standard in the statute, the Court said, made this clear:section 109(b)(1) instructs EPA to set primaryNAAQSs "the attainment and maintenance of which ... are requisite to protect the public health" with an "adequatemargin of safety." (8) Industry's arguments that considerations other than the health impacts of pollutants were cognizable could not overcome the directness of the above statutory text. 
For example, industry contended that a very stringent NAAQS might close down whole industries, thereby impoverishing the workers dependent on those industries and, in turn, harming their health. A health-based standard such as the primary NAAQS should account for these indirect impacts, industry asserted. The Court, however, pointed to numerous other CAA sections where Congress had explicitly allowed consideration of economic factors, concluding that had it intended to allow such factors under section 109(b)(1), it would have been more forthright - particularly given the centrality of the NAAQS concept to the CAA's regulatory scheme. Looking for such a forthright "textual commitment" of authority for EPA to consider costs, the Court found none. Its conclusion: section 109(b)(1) "unambiguously bars cost considerations from the NAAQS-setting process."

Nondelegation doctrine. The most controversial portion of the D.C. Circuit's majority opinion was its embrace of a long-moribund constitutional principle known as the "nondelegation doctrine." This separation-of-powers doctrine derives from Article I of the Constitution, which vests "[a]ll legislative Powers" in Congress. Not surprisingly, the Supreme Court reads this vesting provision loosely, recognizing that Congress routinely delegates quasi-legislative powers to non-Article I bodies. In particular, Congress frequently commits to the specialized expertise of executive-branch agencies the task of rulemaking in technical areas - such as air pollution control. The nondelegation doctrine says that such delegations pass constitutional muster only if Congress gives the agency an intelligible principle to guide its exercise of that authority.

The majority opinion below found that EPA had construed CAA section 109 so loosely as to render it an unconstitutional delegation. The court agreed with the factors used by the agency to assess the public health threat posed by air pollutants. But, it said, EPA had articulated no intelligible principle for translating those factors into a particular NAAQS, nor was one apparent from the statute. Given that both ozone and particulates are non-threshold pollutants (adverse health effects occur at any concentration above zero), some public health threat has to be tolerated if EPA is to avoid shutting down entire industries. The agency, in the court's view, had articulated no standard for determining how much.

In invoking the nondelegation doctrine, the D.C. Circuit drew considerable attention. It was the first time in 65 years that the nondelegation doctrine had been successfully used, and the ruling raised serious implications for how Congress delegates standard-setting authority to agencies generally. Commentators pointed to other federal statutes - such as the Corps of Engineers wetlands permitting program under the Clean Water Act, and the rulemaking authority conferred by the Occupational Safety and Health Act - as vulnerable to nondelegation-doctrine challenge, should the D.C. Circuit be affirmed on appeal.

The Supreme Court, however, reversed. The scope of discretion allowed by section 109(b)(1), the Court said, is "well within the outer limits of our nondelegation precedents." Under section 109(b)(1), primary NAAQSs are to be set at levels "requisite" to protect public health - "requisite" being argued by the United States, and accepted by the Court, as meaning "sufficient, but not more than necessary."
To be sure, acknowledged the Court, more guidance must be furnished the agency when the agency action is to have broad scope - as here, where the revised NAAQSs affect the entire U.S. economy. But even for sweeping regulatory schemes, the Court disclaimed any demand that statutes provide a "determinate criterion" for saying precisely how much of the regulated harm is too much. EPA may therefore be allowed discretion to determine how much of a public health threat from ozone and particulates (recall, they are non-threshold pollutants) it will tolerate at non-zero levels. (9)

Issues involving implementation of the revised ozone NAAQS. EPA lost, again unanimously, on two issues arising from its policy for implementing the revised ozone NAAQS in nonattainment areas. First, the Court rejected EPA's argument that the policy did not constitute final agency action ripe for review.

The Court then proceeded to the second issue: which CAA provisions govern the ozone nonattainment-area implementation policy. This calls for some background. The CAA imposes restrictions on nonattainment areas over and above those that the Act imposes generally. These additional nonattainment-area restrictions are found in Title I, Part D of the statute. Subpart 1 of Part D contains general nonattainment regulations that apply to every pollutant for which a NAAQS exists. Subpart 2 of Part D addresses ozone in particular. The dispute before the Court was whether Subpart 1 alone, or rather Subpart 2 or some combination of Subparts 1 and 2, controls the implementation of the revised ozone NAAQS in nonattainment areas. EPA, in its implementation policy, took the former, Subpart-1-only course. The problem it faced was that Subpart 2 contemplated a 1-hour ozone NAAQS, reflecting the ozone standard existing when Subpart 2 was enacted in 1990. The revised ozone NAAQS, however, embodied an 8-hour standard. Thus, some Subpart 2 provisions - in particular, the nonattainment-area classification scheme that identified the requirements to be imposed depending on an area's degree of nonattainment - did not fit the new ozone NAAQS.

The Court found that EPA could not ignore Subpart 2 entirely, as it had done. Whatever awkwardness of fit results from applying Subpart 2 to the new ozone standard, it cannot, said the Court, justify "render[ing] Subpart 2's carefully designed restrictions on EPA discretion utterly nugatory once a new standard has been promulgated ...." One example of the discretion-limiting nature of Subpart 2: under Subpart 1, EPA may extend attainment dates by as long as 12 years; under Subpart 2, by only 2 years (though Subpart 2's attainment deadlines stretch from 3 to 20 years, depending on the severity of an area's ozone pollution). The Court left it to EPA "to develop a reasonable interpretation of the [CAA's] nonattainment implementation provisions" for the revised ozone NAAQS.

As to the nondelegation issue in American Trucking, EPA had gone to the Supreme Court with the stronger arguments. It was not a foregone conclusion, however, that EPA would win unanimously, as it did, even though the Court had on many occasions sustained federal statutes containing standards as loose as, or looser than, that in the CAA governing NAAQS setting. A few justices voting for at least some resuscitation of the nondelegation doctrine was widely deemed a possibility, on the ground that the Court in other areas has recently revealed an interest in cabining congressional power and discretion - and not only when the federal-state balance is implicated.
Though it has been 66 years since the last successful nondelegation-doctrine challenge, the Court has not hesitated to reverse longstanding patterns in its constitutional jurisprudence when doing so furthered the agenda of a contingent of the justices. Very likely, the Court's refusal to bring back the nondelegation doctrine stemmed in part from its view that "in our increasingly complex society, replete with ever changing and more technical problems, Congress simply cannot do its job absent an ability to delegate power under broad general directives." (10) Moreover, the task of rewriting federal statutes necessitated by a reinvigorated nondelegation doctrine might be one of daunting magnitude. From the environmental area to antitrust to civil rights, federal laws abound that give agencies only the broadest of guidance.

Likewise, EPA had the better argument when it came to the inclusion of costs in the setting of NAAQSs. As a critic of using legislative history to divine the meaning of statutes, Justice Scalia restricted his analysis for the Court to examination of the statutory text itself. But the concurrence by Justice Breyer reveals a legislative history from the enactment of the 1970 CAA that sides unambiguously with EPA's keep-costs-out position.

For the foregoing issues, then, the American Trucking decision largely restores the status quo ante. As before the filing of this case, a lax jurisprudence under the nondelegation doctrine and the impermissibility of considering costs in setting NAAQSs are once again regarded as relatively settled law. Some commentators have noted the Court's refusal in American Trucking to defer to EPA's interpretation of the CAA on ozone standard implementation, and have speculated that the legacy of the decision may lie in its signalling a desire by the Court to lessen the degree of judicial deference to agency decisionmaking. But it is as yet premature to draw this conclusion.

The Court's decision in Whitman v. American Trucking is not the end of court proceedings in the case. Numerous issues remain before the D.C. Circuit, and EPA's next attempt at an implementation plan may also be subject to court challenge. Developing an implementation plan that embodies a "reasonable interpretation" of the Act's nonattainment implementation provisions for the revised ozone NAAQS, as the Court mandated, is not an easy task. EPA has said that an 8-hour standard of 0.09 ppm would have "generally represent[ed] the continuation of the [old] level of protection," (11) but the new standard is, in fact, set at a more stringent level of 0.08 ppm. Thus, the statute's classification system contains a gap: it does not address areas with readings of 0.08-0.09 ppm.

A second problem relates to the setting of attainment dates. In the 1990 amendments, Congress was specific in setting attainment dates that ranged from 3 to 20 years from the date of enactment, depending on the severity of an area's ozone pollution. The Court read this specificity as denying EPA its previous broad discretion in setting attainment dates. But three of the Act's deadlines (for Marginal, Moderate, and Serious ozone nonattainment areas) have already passed. How EPA is to respond to this is unclear. The Court itself noted that the Act's method for calculating attainment dates "seems to make no sense for areas that are first classified under a new standard after November 15, 1990." (12)
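To make the deadline arithmetic concrete, here is a minimal sketch of the Subpart 2 attainment-date calculation. It is illustrative only: the report confirms the 3-to-20-year range and that the Marginal, Moderate, and Serious deadlines have passed, but the specific year offsets for the Severe and Extreme classes are assumptions for this example, not drawn from the report.

```python
from datetime import date

# Subpart 2 keys attainment deadlines to the CAA's 1990 enactment date.
ENACTMENT = date(1990, 11, 15)

# Years allowed for each ozone classification. The 3/6/9 figures are
# consistent with the report (those three deadlines had passed by 2001);
# the Severe and Extreme figures are assumed for illustration.
ATTAINMENT_YEARS = {
    "Marginal": 3,
    "Moderate": 6,
    "Serious": 9,
    "Severe": 15,   # assumed; the statute allows longer for some areas
    "Extreme": 20,
}

def attainment_deadline(classification: str) -> date:
    """Return the enactment date plus the class's statutory period."""
    years = ATTAINMENT_YEARS[classification]
    return ENACTMENT.replace(year=ENACTMENT.year + years)

# Which deadlines had already passed when the Court ruled (Feb. 27, 2001)?
decision_day = date(2001, 2, 27)
for cls in ATTAINMENT_YEARS:
    deadline = attainment_deadline(cls)
    status = "passed" if deadline < decision_day else "pending"
    print(f"{cls:<8} {deadline}  ({status})")
# Prints Marginal, Moderate, and Serious as "passed" -- the three
# deadlines the report notes have already elapsed.
```

The same arithmetic shows why the Court found the statutory method nonsensical for newly classified areas: a deadline computed from the 1990 enactment date can fall in the past at the very moment an area is first classified under a new standard.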
In these circumstances, it would seem likely that whatever approach EPA may take will be subject to challenge by parties opposed to the new standards, with the potential for several additional years of litigation before the issues are resolved. Congressional intervention to settle these matters is a possibility that few have discussed. While logical in many respects, such a legislative clarification would open a number of issues - regarding the level of the new standards, the implementation measures to be required, and the nature of EPA's standard-setting authority - that interested parties may not wish to have legislated. | On February 27, 2001, the Supreme Court handed down its decision in Whitman v. American Trucking Associations, a challenge to EPA's promulgation in 1997 of revised national ambient air quality standards for ozone and particulates under the Clean Air Act. On the broader issues, the Court ruled that (1) the Act's provisions governing the setting of primary (health-protective) ambient standards did not transgress the "nondelegation doctrine," a moribund constitutional principle that the court below had resurrected, and (2) the Act bars EPA from considering implementation costs when it sets primary national ambient standards. On a narrower issue, the Court held that EPA had not been justified, in promulgating its ozone implementation plan, in applying only the Act's nonattainment-area subpart of general application, rather than a subpart specific to ozone nonattainment. As a result, the Court charged the agency with developing a "reasonable interpretation" accommodating both subparts. Such accommodation is likely to prove a difficult task, however, and once adopted will almost certainly generate further legal challenges.